This program is built for practitioners who are already building on Eliza, already using Windsurf, Claude, and Codex, and already operating at the frontier. BNY's top 500 engineers don't need foundational AI training. They need the architecture, evaluation, and governance skills that separate engineers who use AI from engineers who build and govern production AI systems at scale.
This is not a product management program repurposed for engineers. It's advanced AI engineering built from the ground up for practitioners already operating at the frontier.
BNY's top engineers are highly resourceful and self-motivated. Many are already consuming frontier content independently from sources like Hugging Face on their own time. They're not waiting to be taught. What they need is live instruction from practitioners solving the same problems, in a collaborative environment where engineers push each other and produce real artifacts against real BNY use cases.
The curriculum goes directly into advanced agent architecture, multi-agent system design, evaluation frameworks, and AI governance in regulated environments, structured around the specific capability gap between Builder and Pioneer. Every session produces something deployable.
The one day of Product for Engineers is designed specifically for engineers, giving them the context to build across the product boundary: validating use cases, designing for real user outcomes, and handing off cleanly from prototype to production.
Two to four days of advanced AI engineering instruction, an optional Product for Engineers day, and a 3-day Buildathon where engineers build and evaluate real AI systems against BNY's own use cases. Every session produces a tangible artifact.
Senior practitioners from OpenAI, Google, Meta, Intuit, and Amazon: people who have built and shipped production AI systems. Not academic curriculum. Not generic content. Instruction from people doing this work today.
Engineers who can architect multi-agent systems, govern agent ecosystems at scale, and engineer organizational knowledge layers, operating at BNY's Level 3 Pioneer across the engineering organization.
This is not a foundational AI training program. The curriculum moves directly into advanced agent design, system architecture, evaluation, and governance, structured around the specific capability gap BNY's top-tier engineers need to close. The Buildathon is where they prove it on real BNY use cases.
| Audience | 500 tier-1 software engineers + select product owners |
| Format | Option A: 2 days · Option B: 4 days + Buildathon · Option C: 5 days + Buildathon + On-Demand |
| Delivery | Live, instructor-led (hybrid options available) |
| Cohort size | 25–30 engineers per cohort |
| Cohorts | 20 cohorts across all options |
| Level of entry | Builder — engineers already using AI tools in production |
| Target outcome | Pioneer — architects of production AI systems |
| Certification | Product School AI Engineering |
Build agent-based systems (RAG, agentic patterns, tool use) · Design multi-step workflows with orchestration · Integrate LLMs with APIs and internal data · Debug failure modes · Define production readiness · Apply systems to real BNY use cases.
Everything in Option A, plus: build evaluation pipelines (LLM-as-judge, programmatic evals) · Define performance metrics (accuracy, latency, cost, hallucination) · Implement monitoring and feedback loops · Establish governance and deployment controls · 3-day Buildathon where teams build and evaluate AI systems on real internal BNY use cases with live demos.
Everything in Option B, plus: align engineering with product and design using shared frameworks · Define validation briefs, PRDs, and sprint-ready plans · Connect AI systems to real user and business outcomes · Accelerate prototype-to-production handoff · On-demand core library covering product and execution fundamentals for engineers.
Product School owns end-to-end delivery across all cohorts running in parallel, not just curriculum design. BNY has one point of contact throughout.
Dedicated Product School lead owns end-to-end delivery. Defined escalation paths, weekly program health reporting, and single point of contact for BNY stakeholders throughout.
Full cohort schedule owned and managed by Product School with BNY. Instructor allocation across parallel cohorts, fallback protocols for last-minute changes, and completion tracking across all ~500 engineers.
Regular course pulse surveys, monthly program reviews with BNY stakeholders, curriculum revision cycles built into the roadmap, and outcome metrics tied to engineering KPIs.
The program is structured around production outputs, not passive learning. Every module ends with an engineer holding something they built: a functional system, a governance spec, an eval suite. Content is developed for engineers already operating at the Builder level in BNY's environment.
Advanced AI Engineering: Frontier — Days 1–4. 22 hours of live, instructor-led technical training delivered across four days. Builds from agent architecture through multi-agent governance, production deployment, and evaluation infrastructure.
The agentic shift: from AI-assisted coding to agents that plan, act, and verify. Covers the Agent Spectrum (Levels 0–3), autonomy decision frameworks, and the Agent Workflow Spec: actors, plan, memory, and tool boundaries. Engineers build their first functional agent.
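The Agent Workflow Spec's core pieces (plan, memory, tool boundaries) can be sketched as a minimal agent loop. This is an illustrative sketch, not the course's actual spec; the class and tool names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative agent skeleton: a plan of steps, a memory log, and an
# explicit tool registry that bounds what the agent is allowed to call.
@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]           # tool boundary: only registered tools
    memory: list[str] = field(default_factory=list)  # running record of actions and results

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        results = []
        for tool_name, arg in plan:
            if tool_name not in self.tools:
                # Governance hook: out-of-bounds tool calls are refused, not executed.
                self.memory.append(f"BLOCKED {tool_name}")
                continue
            out = self.tools[tool_name](arg)
            self.memory.append(f"{tool_name}({arg}) -> {out}")
            results.append(out)
        return results

agent = Agent(tools={"upper": str.upper})
print(agent.run([("upper", "hello"), ("delete_db", "prod")]))  # → ['HELLO']
```

The point of the sketch is the boundary: the unregistered `delete_db` call is blocked and logged rather than executed, which is the simplest form of the governance hooks covered later in the curriculum.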
Core architectural building blocks: state & memory, tooling fabric, multi-agent runtimes, governance hooks. Six agentic design patterns: sequential, parallel, critic-executor, delegation tree, negotiation, hierarchical. Engineers develop pattern-matching judgment and integrate a tooling fabric into their existing agent.
Six multi-agent archetypes, PM blueprint for agent teams, collaboration spec design, and governing and scaling multi-agent workflows in a regulated financial environment. Key reliability metrics for agent-to-agent coordination at BNY scale.
Agent failures are silent, confident, and wrong. This module covers the failure modes unique to production agents — tool misuse, reasoning loops, memory drift, permission escalation, overconfidence — and the Agent Debugging Stack. Four eval types as quality gates.
Five strategic value drivers and trade-offs, four launch-readiness questions (value, ownership, trust, scale), risk-gated rollout strategies, and post-launch governance and feedback loops, including the PRD handoff spec for engineering-to-product handoff at BNY.
Three eval types: code-based, human, and LLM-as-Judge. Five trust-centric metrics: latency, hallucination, fairness, robustness, UX. The 95% accuracy trap. Engineers run their first LLM-as-Judge eval against their own agent.
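In skeleton form, an LLM-as-Judge eval scores each output against a rubric and aggregates a pass rate. Here the judge is a stand-in function (a real run would prompt a judge model and parse its verdict), so everything below is an illustrative sketch with invented names.

```python
# Skeleton of an LLM-as-Judge eval. `judge` is a placeholder: in practice
# it would send the rubric plus the output to a judge model and parse a
# pass/fail verdict from the response.
RUBRIC = "Answer must mention the settlement date."

def judge(rubric: str, output: str) -> bool:
    # Placeholder verdict; a real judge would be a model call.
    return "settlement date" in output.lower()

def run_eval(outputs: list[str]) -> float:
    verdicts = [judge(RUBRIC, o) for o in outputs]
    return sum(verdicts) / len(verdicts)  # pass rate across the dataset

score = run_eval([
    "Trade settles on the settlement date, T+1.",
    "The trade is confirmed.",  # fails the rubric
])
print(f"pass rate: {score:.0%}")  # → pass rate: 50%
```

The aggregate pass rate is exactly where the "95% accuracy trap" bites: a high overall score can hide systematic failures on the cases that matter most.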
Four-step error analysis framework, Failure Taxonomy Canvas, and P0–P3 prioritization using a severity-frequency matrix. Engineers translate failure patterns into business risk language appropriate for BNY's regulatory and audit context.
Eval Suite Pyramid for full-risk coverage, gold dataset construction, TPR/TNR analysis, and evaluator evaluation (hallucination in LLM-as-Judge). Eval Spec anatomy and stakeholder framing for engineering, UX, and exec audiences.
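Evaluating the evaluator against a gold dataset reduces to a small confusion-matrix computation; the sketch below uses made-up labels to show TPR (good outputs the evaluator passes) and TNR (bad outputs it catches).

```python
# Given gold labels (True = genuinely good output) and an evaluator's
# verdicts on the same examples, compute TPR and TNR.
def tpr_tnr(gold: list[bool], verdict: list[bool]) -> tuple[float, float]:
    tp = sum(g and v for g, v in zip(gold, verdict))           # good, passed
    tn = sum((not g) and (not v) for g, v in zip(gold, verdict))  # bad, caught
    pos = sum(gold)
    neg = len(gold) - pos
    return tp / pos, tn / neg

gold    = [True, True, True, False, False]
verdict = [True, True, False, False, True]  # hypothetical LLM-as-Judge calls
tpr, tnr = tpr_tnr(gold, verdict)
print(f"TPR={tpr:.2f} TNR={tnr:.2f}")  # → TPR=0.67 TNR=0.50
```

A low TNR here is the "hallucinating judge" problem the module names: the evaluator waves through half of the genuinely bad outputs.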
Evaluators vs. eval gates. Advisory, soft, and hard gate types. Enforcement across hallucination, latency, bias, tone, and robustness. Codifying gates and thresholds in PRDs. PM as legislator, engineering as enforcer. Directly relevant to BNY's AI governance pipeline.
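The advisory/soft/hard distinction can be expressed as thresholds plus an enforcement policy. The metric names and thresholds below are invented for illustration; a real PRD would codify BNY-specific values.

```python
from enum import Enum

class Gate(Enum):
    ADVISORY = "advisory"  # log only
    SOFT = "soft"          # warn, allow an explicit override
    HARD = "hard"          # block the release

# Hypothetical gate config: metric -> (threshold, gate type).
GATES = {
    "hallucination_rate": (0.02, Gate.HARD),  # must stay under 2%
    "p95_latency_s":      (2.0,  Gate.SOFT),
    "tone_violations":    (5,    Gate.ADVISORY),
}

def enforce(metrics: dict[str, float]) -> list[tuple[str, Gate]]:
    """Return the gates each reported metric trips; a HARD trip blocks the ship."""
    return [(name, gate) for name, (limit, gate) in GATES.items()
            if metrics.get(name, 0) > limit]

trips = enforce({"hallucination_rate": 0.05, "p95_latency_s": 1.4})
blocked = any(g is Gate.HARD for _, g in trips)
print([name for name, _ in trips], blocked)  # → ['hallucination_rate'] True
```

Putting the thresholds in a declarative config is what "PM as legislator, engineering as enforcer" looks like in code: the PRD sets the numbers, the pipeline applies them.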
From siloed eval tasks to a centralized eval platform. The eval maturity curve, centralized vs. decentralized models, and continuous evaluation stack: code evals, drift monitors, periodic audits. Portfolio coverage matrix and cost vs. coverage vs. velocity trade-offs at BNY's scale.
Three models of trust governance, ownership vs. execution, embedding responsible AI bottom-up and top-down. Eval Playbook and KPI dashboard. Pyramid Principle for high-stakes executive communication. Engineers deliver a ship/hold decision with eval-backed rationale to a stakeholder panel.
As AI dissolves the boundary between product and engineering, engineers who can translate technical capability into product decisions have an outsized impact. This day covers UX fundamentals, validation, PRD design, and high-velocity prototyping, taught from an engineering perspective.
User research, Design Thinking, Customer Journey Maps, and the Value Proposition Canvas as an engineering scope-framing tool.
Effort vs. User Value matrix, A/B testing mechanics, MVP types, and customer interview frameworks for hypothesis-testing without confirmation bias.
PRD anatomy and lifecycle, roadmap types, product prioritization methods, and how to run effective engineering ↔ product relationships.
High-velocity prototyping cycle, Skill Markdowns for injecting PRD context into build prompts, and The Confidence Line: criteria for knowing when a prototype is worth shipping.
Module deliverables aren't just homework. They feed directly into the Buildathon. Teams build, evaluate, and defend real AI systems against BNY's own use cases.
Curriculum Walkthrough
Shewit walks through the complete curriculum module by module, covering delivery approach, the level of technical depth, and how each session connects to the Pioneer capability target.
Every instructor is currently working in AI, building production systems, shipping models, and solving the same types of problems BNY's engineers are being asked to solve. No academics. No retired executives. People doing this work now.
The instructors below are a representative sample. Product School draws from a global network of 200+ practitioner instructors, matched to each cohort's stack and domain. View the full instructor network →

Leads AI product systems at Meta across Facebook Ads — engineering ML optimization pipelines that serve billions of impressions. Co-founded Explorer, an AI-powered autonomous vehicle mapping platform, and led product at Standard Cognition building computer vision systems for autonomous retail at scale. Previously Head of Product Innovation at Tata Group. Advisor at Carnegie Mellon University.

Leads design strategy for generative AI interfaces at OpenAI — responsible for the interaction paradigms used by hundreds of millions of users. Pioneers the patterns for conversational agents, multimodal interfaces, and AI-native development environments.

Leads Google's ML Fairness and Responsible AI effort — developing the governance standards used across Google's AI products. BS in Symbolic Systems and MS in Computer Science from Stanford. Specializes in production AI governance at regulated scale.

12+ years building AI products at scale, including Instagram and Messenger at Meta. Former Lead PM for AI at omni:us and Senior PM for Fraud & AI at Klarna — direct experience with AI governance in regulated financial services. Degree in Mathematics and Computer Science.
Launched Android Studio — Google's official developer environment — and the Android Developer Preview program. Deep background in enterprise developer tooling and go-to-market for AI platforms. MS in IT from University of Maryland, MBA from Harvard Business School.

AI systems lead at Slack with a career spanning Google, Twitter, Airbnb, and Facebook — shipping complex AI-integrated systems at each. MIT master's in Technology & Policy and Engineering & Management; Cornell BS in Mechanical Engineering. Brings the systems-engineering discipline that turns AI prototypes into production infrastructure at FAANG scale.
BNY's engineers are not starting from the beginning. They are already at the Builder level, using AI tools daily, integrating APIs, and working within Eliza. This program is designed for the specific gap between where they are and what Level 3 Pioneer requires.
All three options cover BNY's 500 top-tier engineers across 20 cohorts. The difference is program depth, from AI system fundamentals through full production readiness, product integration, and a 3-day Buildathon.
About Product School
This proposal reflects the full scope of what we've discussed. Three steps from internal alignment to cohort launch.
The proposal and program overview have been shared. This is an opportunity to walk Jeremy and his team through the curriculum directly, answer any outstanding questions, and ensure full alignment before moving forward.
Schedule a Call →
Based on our last conversation, BNY is leaning toward Option C, the full program including AI Engineering Foundations, Advanced AI Systems, the 3-day Buildathon, and Product for Engineers. Once the internal review is complete, we confirm scope and begin curriculum customization.
Product School owns cohort scheduling end-to-end across all 20 cohorts. Once scope is locked, we move directly into scheduling and mobilize quickly. Parallel cohort delivery begins on BNY's timeline.