From the Front Row

AI Adoption Is Just Modernization in Fast Forward.
And You Already Know How to Win.

You don't need a new playbook. You need to apply the hard-won wisdom you already have. Here is the optimistic, achievable path to AI integration.

→ 10+ Years of Patterns
→ Real Trade-offs Included
→ Tactical to Strategic
"You'll also learn new words along the way — words you didn't know existed. I still remember hearing idempotent for the first time and thinking I had no frame of reference for what it even described. But it would become a governing principle of infrastructure as code and every discipline that followed."
— Jamil Jadallah

The Last 10 Years Taught Us Something We Can't Afford to Forget

Over the past decade, whether the shift was DevOps, Cloud Native, Kubernetes, or Agile, we saw that infrastructure and organization have to move together. These changes took years. Some teams failed, companies reorganized, and valuable knowledge slipped away. The teams that succeeded were the ones who took a step back, thought carefully about where new patterns fit, and figured out what needed to be in place first.

We learned that lesson at scale.

AI is speeding up this process. New capabilities are arriving faster, and the competition is intense. Companies that took their time with past modernization efforts can't afford to move that slowly again—not with AI and not with the current opportunity.

But here's what they can do: apply what they've learned faster.

The same ideas that made DevOps successful—clear structure, the right infrastructure, and focused use—also apply to AI. The difference now is that we can start by diagnosing what we need, instead of learning through mistakes. We can ask tough questions before making big changes.

The question isn't "How do we adopt AI?" It's "Where does AI actually create value for us, and are we ready for it?"

The Pattern Across a Decade (Four Lessons You Already Know)

DevOps (2010-2014): The First Structural Lesson

DevOps transformed how fast teams could ship. The organizations that really unlocked it recognized it as a structural imperative: if you want velocity, developers and operations can't be separate tribes. Infrastructure is part of the product delivery process.

The lesson: Align structure with capability from the start, and velocity follows.

Organizations that succeeded:

  • Had engineering leadership that understood infrastructure mattered
  • Were willing to break up functional silos
  • Invested in automation before splitting teams
  • Saw deployment frequency increase 10x within 18 months

What didn't work:

  • Hiring a DevOps engineer and hoping they'd magic away organizational friction
  • Keeping the old structure (Eng → DevOps → Ops) and expecting it to deliver velocity
  • Treating it as an IT initiative instead of a product delivery transformation

Here's the key point: The companies that succeeded didn't just move faster. They changed how their teams could innovate. That's the benefit of being intentional.

Cloud Native (2014-2018): The Architectural Constraint

Cloud Native unlocked massive scalability. Organizations rethought how applications were built—stateless, distributed, resilient, observable. The ones that thrived didn't just move to the cloud; they rethought their entire architecture around it.

The lesson: A new way of working requires a new way of thinking and new foundational capabilities. Get both right, and scale becomes possible.

The organizations that thrived:

  • Started with the data problem first (how do we scale state?)
  • Built observability into the platform before teams needed it
  • Were intentional about which monoliths to decompose first
  • Ended up with systems that could scale 100x without rewriting

What they avoided:

  • Decomposing monoliths without solving the data consistency problem
  • Splitting teams into microservices but ignoring observability
  • Assuming moving to the cloud would automatically change how they built applications

The reward: Companies that did this well didn't just get faster. They became able to scale almost without limits. That's a competitive advantage that keeps growing.

Kubernetes (2015-2020): The Maturity Tax

Kubernetes unlocked operational scale that wasn't possible before. The organizations that mastered it didn't just run containers—they fundamentally changed how they deployed and scaled software.

The lesson: More powerful tools require higher baseline maturity. But when you get the maturity right, the tool unlocks possibilities you couldn't access before.

The organizations that thrived:

  • Built platform teams first, then gave application teams Kubernetes
  • Invested in observability and incident response before complexity
  • Were ruthless about "is this application ready for this level of operational complexity?"
  • Ended up with systems they could deploy hundreds of times per day

What they avoided:

  • Running Kubernetes without a platform team (leads to burnout, not speed)
  • Migrating to Kubernetes without updating deployment pipelines
  • Expecting Kubernetes to fix organizational chaos (it amplifies it instead)

The payoff: Organizations with mature platforms could make changes quickly and confidently. This speed advantage grows even more important in competitive markets.

Agile (2005-2019): The Organizational Alignment Problem

Agile transformed how teams think about shipping value. The organizations that really won with Agile didn't just adopt ceremonies—they redesigned their entire delivery system around small, independent teams.

The lesson: Structure is only as effective as its supporting systems. Get them aligned, and velocity compounds.

The organizations that won:

  • Updated their CI/CD to support frequent deployments before splitting teams
  • Redesigned data governance to give teams autonomy without chaos
  • Were honest about what would break and fixed it proactively
  • Realized that ceremonies matter only if the infrastructure supports them
  • Went from quarterly releases to weekly shipping within 12 months

What derailed teams:

  • Splitting into 12 small teams but keeping quarterly deployment cycles
  • Adopting scrum ceremonies without updating CI/CD
  • Restructuring without rethinking data governance
  • Treating Agile adoption as a team-level change instead of a systems-level transformation

The payoff: Companies that succeeded could respond to customer feedback faster than anyone else. That advantage lasted for five to ten years.

The Common Thread

Across all of these—DevOps, Cloud Native, Kubernetes, Agile—there's a pattern:

  1. A new capability emerges (continuous integration, distributed systems, orchestration, iterative delivery)
  2. Organizations want it (faster deployments, better scale, responsiveness, velocity)
  3. Organizations either rush adoption or do it deliberately

The ones that rushed said:

  • "Let's adopt the practice" (without updating the underlying structure)
  • "Let's hire experts in the new thing" (and ignore the systemic gaps)
  • "Let's declare ourselves transformed" (and wonder why nothing changed)

The ones that succeeded said:

  • "Where does this actually belong in our operating model?"
  • "What maturity do we need first?"
  • "What breaks if we do this, and can we fix it?"

They focused on what mattered, acted with intention, and succeeded.

Why AI Is Different (And Why That Matters Now)

AI isn't like the technologies of the last decade in one critical way: the capability curve is accelerating, and competitive pressure is immediate.

With DevOps, you had 4-5 years before it mattered. With Kubernetes, you had maybe 2-3 years. With AI, the organizations making decisive moves now will set the trajectory for the next 5-10 years. Waiting isn't an option.

But moving too quickly without a plan is also risky.

This is where the lesson from the last decade becomes crucial. You have a playbook. You know what questions to ask. You know where alignment breaks. You know how to diagnose readiness.

The opportunity is this: compress the learning cycle.

Instead of spending 10 years discovering where AI fits, you can ask the right questions deliberately from the start.

This framework isn't a step-by-step plan where you must do everything. Instead, it's a tool to help you diagnose your situation and:

  • Identify which team types you actually need
  • Audit whether your maturity supports them
  • Spot where your structure and capability are misaligned
  • Move intentionally instead of frantically

Three Paths Forward (Pick One and Win)

What's different now, compared to the last decade, is that you don't need to take on everything at once. You don't have to overhaul your whole company.

You can be precise and focused.

There are three distinct roles AI plays in product teams, each with a clear prerequisite, a clear value unlock, and a clear path to success. The question isn't "should we do all three?" It's "which one makes sense for where we are right now?" And critically: which one matches both your mandate and the infrastructure that's actually ready — not the infrastructure you're planning to have.

Path 1: AI Enhanced

What it is: AI-capable product managers embedded with engineering teams, integrating AI into existing workflows and products.

The opportunity: This is the most achievable starting point. You're not reorganizing. You're adding capability to teams that already work.

Why Start Here

  • Shortest path to value (3-6 months for first win)
  • Lowest organizational friction
  • Fastest learning (you'll know what you don't know)
  • Teams stay intact—no reorganization risk

What You Actually Need

Prerequisite Checklist:
✓ Stable CI/CD: You can deploy code weekly without ceremony.
✓ Basic Data Hygiene: You can query user data without chaos; not perfect, just honest.
✓ AI-Capable People: PMs or engineers who understand both product and how to work with LLMs.

The Value Unlock (Real Examples)

  • AI-powered search (semantic search, not keyword matching; a minimal sketch follows this list)
  • Automated customer support (questions answered before support ticket created)
  • Content generation at scale (personalized emails, product descriptions)
  • Faster feature iteration (AI handles grunt work, engineers focus on UX)
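
To make "semantic search, not keyword matching" concrete, here is a minimal sketch of the core ranking step in Python. It assumes documents and queries have already been turned into embedding vectors by some embedding API; the vectors, document IDs, and function names below are hypothetical toy values, not any provider's actual interface.

```python
# Minimal semantic search: rank documents by cosine similarity
# between a query embedding and precomputed document embeddings.
# In practice the vectors would come from an embedding API;
# here they are hypothetical toy values.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings keyed by document ID.
doc_embeddings = {
    "refund-policy":   np.array([0.9, 0.1, 0.0, 0.2]),
    "shipping-times":  np.array([0.1, 0.8, 0.3, 0.0]),
    "account-billing": np.array([0.7, 0.2, 0.1, 0.6]),
}

def search(query_embedding: np.ndarray, top_k: int = 2) -> list[tuple[str, float]]:
    """Return the top_k documents most similar to the query."""
    scored = [
        (doc_id, cosine_similarity(query_embedding, emb))
        for doc_id, emb in doc_embeddings.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# A query like "how do I get my money back" would embed close to the
# refund-policy vector, even though it never contains the word "refund".
print(search(np.array([0.85, 0.15, 0.05, 0.3])))
```

The point of the sketch: relevance comes from vector proximity, which is why semantic search can answer questions that share no keywords with the underlying document.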

Real Costs (Be Honest)

  • Time Investment: 3-6 months to first win; 6-12 months to prove meaningful impact
  • Team Expansion: 2-4 people (PM + engineers with AI experience)
  • Infrastructure Cost: $5K-$15K/month in API calls (LLMs, embeddings); a back-of-envelope sketch of where a figure like that comes from follows below
  • Organizational Friction: Low—you're adding capability, not reorganizing
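
Where does a number like $5K-$15K/month come from? A back-of-envelope sketch, with every price and traffic figure an illustrative assumption rather than any provider's actual rate:

```python
# Back-of-envelope LLM API cost estimate. All prices and volumes
# are illustrative assumptions, not any provider's actual rates.
requests_per_day = 20_000      # assumed feature traffic
tokens_per_request = 1_500     # prompt + completion, assumed
price_per_1k_tokens = 0.01     # assumed blended $/1K tokens

monthly_tokens = requests_per_day * tokens_per_request * 30
monthly_cost = monthly_tokens / 1_000 * price_per_1k_tokens
print(f"~${monthly_cost:,.0f}/month")  # ~$9,000/month at these assumptions
```

Run the same arithmetic with your own traffic and token counts; the range in the list above mostly reflects how much volume the feature actually sees.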

The Real Trade-off

✓ You Gain:

Fast feedback. Shipped features. Team learning. Political capital for bigger moves.

✗ You Lose:

Some engineering velocity (AI integration takes time). Potential for AI features to feel bolted-on if not well-integrated.

The common way this fails: The AI work gets appended to an existing team's backlog rather than treated as a distinct mandate. Two sets of priorities, one team. The mandate collision slows both — AI features ship late, the existing roadmap slips, and leadership loses confidence in the whole effort. If you're going to embed AI-capable PMs with engineering teams, they need a clear mandate that doesn't compete with everything else those teams are already accountable for.

The real-world win: A company shipped AI-powered search in Q2, improved time-to-value for customers by 40%, proved the model to leadership, and had the political capital to fund AI Experience work in Q4. That's momentum.

Path 2: AI Experience

What it is: A dedicated AI product and data team building AI-powered user experiences—recommendation engines, personalization, predictive features. This is where you start competing on experience, not just features.

When to Consider This

  • You've shipped Path 1 and learned what works
  • User experience differentiation is a competitive lever
  • You have 18+ months and budget for a dedicated team
  • Your data foundations are clean enough to build on

What You Actually Need

Prerequisite Checklist:
✓ Clean Data Foundations: You can't personalize with bad data. Period. You need data you trust.
✓ Consent & Privacy Frameworks: If any users are in the EU or UK (GDPR) or California (CCPA), compliance applies — regardless of how many countries you operate in. Build compliant from the start; retrofitting is expensive and slow.
✓ Platform Team: Or strong data engineering. You can't do this solo.
✓ Experimentation Culture: A/B testing, multivariate testing, and understanding statistical significance (a minimal significance-test sketch follows this list)
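
To make the statistical-significance bar concrete, here is a minimal two-proportion z-test using only the standard library. The conversion counts are hypothetical, and a real experimentation program would pre-register sample sizes and use a proper stats library; this is just the shape of the check:

```python
# Minimal two-proportion z-test for an A/B experiment, stdlib only.
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical numbers: 5.0% baseline vs. 5.6% with the AI feature.
z, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 would suggest a real effect
```

Note what the toy numbers show: a lift that looks meaningful (5.0% to 5.6%) can still come out just shy of significance at this sample size, which is exactly the judgment an experimentation culture exists to make.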

The Value Unlock (Real Examples)

  • Recommendation engines (Netflix model: "next thing to watch")
  • Personalization at scale (content, pricing, layout)
  • Predictive features (churn prediction, demand forecasting)

The scale of this: Amazon attributes ~35% of revenue to its recommendation engine. Netflix estimates personalization saves ~$1B/year in avoided churn. These aren't edge cases — they're the business model.
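
To ground "recommendation engine" in something runnable, here is a toy item-based collaborative filter. The ratings matrix, item names, and helper functions are hypothetical, and production systems layer implicit feedback, recency, and learned ranking on top; the core idea is just similarity between item vectors:

```python
# Toy item-based collaborative filtering: recommend the item most
# similar (by cosine similarity over user ratings) to one a user liked.
import numpy as np

items = ["drama", "comedy", "thriller"]
# Rows = users, columns = items; a hypothetical ratings matrix.
ratings = np.array([
    [5, 1, 4],
    [4, 2, 5],
    [1, 5, 2],
    [2, 4, 1],
], dtype=float)

def item_similarity(i: int, j: int) -> float:
    """Cosine similarity between two item columns."""
    a, b = ratings[:, i], ratings[:, j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_next(liked: str) -> str:
    """Most similar item to the one the user liked, excluding itself."""
    i = items.index(liked)
    scores = [(items[j], item_similarity(i, j)) for j in range(len(items)) if j != i]
    return max(scores, key=lambda pair: pair[1])[0]

print(recommend_next("drama"))  # -> "thriller" with this toy data
```

The prerequisite list above exists because of this sketch's hidden assumption: the ratings matrix is clean, consented, and trustworthy. That is the part that takes 12-18 months, not the similarity math.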

Real Costs (Don't Underestimate This)

  • Time to First Win: 12-18 months (data cleanup + model training + deployment)
  • Team Size: 6-10 people (PM, engineers, data scientists, platform engineers)
  • Budget: $500K-$2M/year (headcount + infrastructure + compute)
  • Organizational Friction: High—separate team, different pace, different success metrics

The Real Trade-off

✓ You Gain:

Competitive moat in user experience. Engagement metrics that move. Customer data becomes an asset, not a liability.

✗ You Lose:

Speed on feature work (team is focused on AI). Months of data work before shipping. High organizational complexity.

The risk: I've seen this fail two ways. The first is trying it without clean data or without a dedicated team — the AI Experience team gets blocked waiting for data infrastructure, feature teams feel abandoned, and by month 12 there's real tension. The second is mixing this team in with existing feature teams instead of separating them. The paces don't match. AI Experience works on longer cycles with different success metrics. When you mix them, the feature team pulls the AI team into sprint reviews that don't fit, and the friction compounds until both teams are underperforming. Separation isn't politics — it's operational necessity.

Path 3: AI Builder

What it is: A foundational model team building core AI infrastructure. You're not just using AI, you're building the platform that other teams use. This is for companies where competitive advantage is AI.

When to Consider This

  • AI is core to your business thesis, not an add-on
  • You have 3-5 years and serious budget commitment
  • You have world-class ML infrastructure and talent (or can hire it)
  • Your data is a strategic asset, not an operational byproduct

What You Actually Need

Prerequisite Checklist:
✓ Mature Data Infrastructure: Your own, not just API wrappers. You need to own the data pipeline.
✓ Dedicated ML Platform Team: Not part-time. Full-time engineering leadership on infrastructure.
✓ Distributed Systems Experience: ML at scale is hard. You need engineers who've done this before.
✓ Organizational Patience: A 3-5 year horizon with budget committed upfront.

The Value Unlock (Real Examples)

  • Proprietary models trained on your data
  • Competitive moat that's hard to replicate
  • Speed at scale once platform is built
  • Industry leadership position

Real Costs (This Is Serious Commitment)

  • Time to ROI: 3-5 years, maybe longer depending on domain complexity
  • Team Size: 15-30+ people (ML engineers, platform engineers, research, ops)
  • Budget: $2M-$10M/year (people + compute + infrastructure + research)
  • Organizational Friction: Very high—new team, new culture, new success metrics, different pace

The Real Trade-off

✓ You Gain:

Real competitive moat. Models no one else has. Speed and scale at a level competitors can't match.

✗ You Lose:

Years of runway before proving ROI. Focus on other products. Speed on non-AI initiatives. Cultural cohesion.

The risk (and I've seen this fail): A company tries this without mature data infrastructure or without a dedicated platform team. The AI team becomes isolated. They ship something in year 2, but it doesn't connect to the rest of the organization. By year 3, there's skepticism. By year 4, the initiative gets killed. The specific failure mechanism is a governance gap: what the AI builders assume the platform can do and what the platform team actually prioritizes become two different realities. By the time that surfaces, you've lost a year — sometimes two. The builder team and the platform team have to be in continuous collaboration, not occasional check-ins. Be really honest about prerequisites before you commit.

These team archetypes are informed by Building AI-Powered Products by Marily Nika, adapted here with additional context on failure modes, prerequisites, and organizational sequencing.

The Quick Reference Matrix

This matrix is your at-a-glance guide for understanding the three paths. Use it to find yourself in the row that matches your constraints and ambitions. This isn't a menu to pick all three. It's a diagnostic tool to help you pick the one that makes sense right now.

Read across each row to understand the full picture of what commitment you're making. Pay attention to the "Time to Win" and "Budget" columns — these are the constraints that will determine what's actually feasible for your organization.

AI Enhanced PMs
  • Structure: Embedded with existing teams
  • What you're building: AI-powered features in existing products
  • Time to win: 3-6 months
  • Budget: $100K-$300K/year
  • Org friction: Low
  • Common failure mode: Mandate collision — AI work gets appended to existing team load, slowing both

AI Experience PMs
  • Structure: Dedicated, separate team
  • What you're building: Recommendations, personalization, prediction
  • Time to win: 12-18 months
  • Budget: $500K-$2M/year
  • Org friction: High
  • Common failure mode: Pace mismatch — mixed with feature teams, different success cycles create compounding friction

AI Builder PMs
  • Structure: Foundational model + platform team
  • What you're building: Core AI infrastructure (models, platform)
  • Time to win: 3-5 years
  • Budget: $2M-$10M/year
  • Org friction: Very high
  • Common failure mode: Platform isolation — governance gaps emerge when the ML team operates separately from infrastructure

How to read this:

  • Time to Win: When you'll see meaningful traction. Measure this from commit date to first real user impact, not from theoretical launch.
  • Budget: Total fully-loaded annual cost: headcount (salary + benefits + overhead), infrastructure (cloud compute, data storage), and API costs (LLM calls, embeddings). These are realistic numbers from actual organizations, not best-case. If you're below these ranges, you're probably underfunding. If you're significantly above, you're either over-staffed or doing something more ambitious than the path suggests. Budget for the high end and adjust down if reality allows. (A back-of-envelope cost sketch follows this list.)
  • Org Friction: How disruptive this is to your existing structure. Low friction means existing teams stay intact. High friction means restructuring and cultural change.
  • Structure: Where the work lives. "Embedded" means it's part of normal operations. "Separate team" means new hiring and new culture. "Platform team" means foundational investment that other teams depend on.
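
As a sanity check on the Budget column, here is the fully-loaded arithmetic for a hypothetical 6-person dedicated team. Every number is an illustrative assumption; substitute your own:

```python
# Back-of-envelope fully-loaded annual cost for a hypothetical
# 6-person dedicated AI team. All numbers are illustrative assumptions.
headcount = 6              # PM, engineers, data scientists (assumed)
avg_salary = 180_000       # assumed average base salary
loading_factor = 1.35      # benefits + overhead multiplier (assumed)
infra_monthly = 20_000     # cloud compute, storage, API calls (assumed)

people_cost = headcount * avg_salary * loading_factor
infra_cost = infra_monthly * 12
total = people_cost + infra_cost
# ~$1.46M people + ~$0.24M infrastructure = ~$1.7M/year,
# inside the $500K-$2M/year range for a dedicated experience team.
print(f"${total:,.0f}/year")
```

Notice that headcount dominates: at these assumptions, people are roughly six times the infrastructure bill, which is why "we'll just pay for some API calls" budgets land below the realistic ranges.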
The key insight: This is not a progression. You don't have to do Path 1 before Path 2, or Path 2 before Path 3. You pick based on where your organization has the most pain and the most budget. Some companies are better served by doing only Path 1, forever. Others have the muscle to jump straight to Path 3. The matrix helps you understand what you're committing to, whichever path you choose.

The Grounding Principle: AI Augments the Staff, Never Replaces Them

Before you decide which path to take, lock in this principle. It's the one I've seen organizations violate most often — and the one that causes the most damage when they do.

AI is very good at finding inefficiencies. It will surface redundancies. It will show you where three weeks of work can now be done in hours. And when that happens, there's a reflex — in finance, in leadership, sometimes in the board deck — to read that as a headcount reduction opportunity. Cut the team. Capture the savings. Show the ROI.

That reflex is usually wrong. The efficiency isn't a mandate to replace people — it's an opportunity to redeploy them. The staff who used to spend three weeks on pattern analysis now have three weeks to spend on the work that actually requires human judgment: the relationships, the edge cases, the contextual decisions that no model gets right the first time. Organizations that treat AI savings as a way to do more with the same people consistently outperform the ones that treat it as a way to do the same with fewer.

Human involvement is integral by design. AI-enabled staff remain central to every decision. AI accelerates the analytical work. People own what the analysis means and what happens next.

What each capability actually does:

  • Recommendation engines: Surface relevant patterns from past work so teams begin with stronger hypotheses — instead of starting from zero on every engagement.
  • Pattern matching at scale: Analyze org structures, team setups, and delivery models across your data. What used to take three weeks of interviews now takes hours. The insight is faster; the judgment is still yours.
  • Simulations & scenarios: Run what-if analyses on technology stack changes and org design decisions, backed by data from similar organizations — and validated by the people who understand the context.

The bottom line: Higher margins, faster delivery, stronger value — not by cutting the people who built the institutional knowledge, but by giving them better tools and freeing them to use it. The organizations that figure this out don't just move faster. They keep the people who make speed meaningful.

— Jamil Jadallah

Three Design Patterns for Success

Here are three patterns I've seen work repeatedly. Not theories — things I've watched teams do under real pressure.

Pattern 1: Create Shared Language

The Challenge: "Personalization" means something different to Sales, Product, and Engineering. If you don't fix this, you build three different things.

The Fix:

  • Monthly ceremony: Sales, Product, Eng, Legal sit down for 90 minutes and audit shared definitions. What does "personalization" mean to us? Right now.
  • Living document: Everything goes in Confluence or Notion. Not Slack. Not emails. Searchable, version-controlled, findable.
  • One person owns semantics: Sounds weird, but assign one person to be the "meaning keeper" across the org. When there's drift, they force a conversation.
  • Reward clarity over speed: In the retro, celebrate when someone caught semantic drift. Make it safe to say "we're building different things."

The upside: When teams share meaning, they move at speed. I've seen teams compress what should be 18 months into 9 months just because they were building the same thing and could move in parallel.

Pattern 2: Fund Innovation with Efficiency

The Challenge: You can't build the future when 70% of your budget goes to maintaining the past.

The Fix:

  • Audit first: Calculate what percentage of engineering capacity goes to maintenance versus innovation. Right now.
  • Reinvest efficiency: Path 1 (AI Enhanced) often creates 20-30% efficiency gains in drafting or support. Don't bank that as savings. Reinvest that freed capacity directly into retiring technical debt.
  • Level Up: Once you've reclaimed capacity, you can afford the bigger bets.

The math that actually works: A company with a 2-year modernization roadmap can compress it to 1 year if it uses AI Enhanced wins to fund legacy cleanup. That's not just faster—it compounds. You ship AI features, you modernize infrastructure, and you prove you can execute. And the people who created those efficiency gains? They're the ones best positioned to build what comes next. Don't cut them. Redirect them.
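
A toy version of that math, with every number an illustrative assumption: freeing even a quarter of the maintenance load meaningfully expands innovation capacity, and the effect compounds as the freed capacity retires more debt.

```python
# Toy arithmetic behind "fund innovation with efficiency".
# All numbers are illustrative assumptions.
team_capacity = 100        # engineer-months per year
maintenance_share = 0.70   # from the audit (assumed)
efficiency_gain = 0.25     # Path 1 win frees 25% of maintenance work (assumed)

innovation_before = team_capacity * (1 - maintenance_share)      # 30
freed = team_capacity * maintenance_share * efficiency_gain      # 17.5
innovation_after = innovation_before + freed                     # 47.5

roadmap_engineer_months = 60
print(roadmap_engineer_months / innovation_before)  # ~2.0 years before
print(roadmap_engineer_months / innovation_after)   # ~1.3 years after
# Reinvesting the freed capacity in debt cleanup shrinks
# maintenance_share further, pushing the figure toward 1 year.
```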

Pattern 3: Map the Human Network

The Challenge: Institutional knowledge lives in people's heads. Restructuring breaks those networks.

The Fix:

  • Before restructuring: Map collaboration networks. Who talks to whom? How often? That's your informal knowledge transfer. (A toy mapping sketch follows this list.)
  • Documentation sprint: Have senior people spend time documenting gotchas, constraints, rules that live in their head. Do this before restructuring.
  • Access to context: Give the new team access to those people for at least 6 months. Make knowledge transfer explicit, not assumed.
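
You don't need special tooling to start the mapping. This toy sketch counts pairwise interactions from a hypothetical message or meeting log and ranks people by connectedness; the log format and names are stand-ins for whatever your chat or calendar system exports:

```python
# Toy collaboration-network map: count who talks to whom from a
# message/meeting log, then rank people by how connected they are.
from collections import Counter

# Hypothetical interaction log: each tuple is one exchange.
interactions = [
    ("ana", "bo"), ("ana", "bo"), ("ana", "cy"),
    ("bo", "cy"), ("cy", "dee"), ("ana", "dee"), ("ana", "dee"),
]

# Undirected edge weights: how often each pair interacts.
edges = Counter(tuple(sorted(pair)) for pair in interactions)

# Weighted degree: total interactions per person. The highest-degree
# people are likely knowledge hubs to document and protect first.
degree = Counter()
for (a, b), weight in edges.items():
    degree[a] += weight
    degree[b] += weight

for person, score in degree.most_common():
    print(person, score)  # "ana" ranks first in this toy data
```

Even this crude weighted-degree ranking usually surfaces the people whose departure or reassignment would quietly break the informal knowledge network.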

The payoff: Organizations that preserve institutional knowledge can restructure without losing momentum. Teams move faster because they're not rediscovering constraints the hard way. That's a structural advantage worth years of speed.

The Decision Framework (Get Honest Answers)

Strategy meets reality here. You have three paths. You have the prerequisites. But this isn't a checklist you work through alone. This is a conversation you need to have with yourself and your leadership. Get honest about the answers. If your team is telling a different story than you are, that's the real work.

1. Where's the bottleneck that hurts most?

Product velocity? Customer experience? Competitive differentiation? If you could solve one thing, what would unlock the most value? Not what leadership wants—what's actually true.

2. Which path directly addresses that bottleneck?

Does it have to be all three, or is one path enough? Most companies are better off with one path done really well than three paths done mediocrely.

3. Do we actually have the prerequisites?

Not "could we eventually." Not "we're close." Do we have this right now? If it's 80%+ ready, you're good. If it's less, you have something to work on first.

4. Can the whole org say the same thing about what we're doing?

Engineering, Product, Data, Leadership—are they all telling the same story? If not, why not? What's the misalignment? Fix that before you move.

5. What's our time horizon and budget?

Quick wins in 6 months? 18-month transformation? 3-year platform bet? Your honest answer determines your path. It's not better or worse—it's just honest.

If you can answer all five clearly—truly clearly, not hopefully—you have your roadmap. You know what to do next.

If you can't answer one cleanly, don't force it. That's your work for the next week. Get the team in a room. Have the conversation. Find the alignment. Clarity is speed.

The Sequence, Not the Destination

Here's the thing: some companies will do all three. Some will do one. Both are fine.

What matters is this: you don't do them simultaneously, and you don't do them before you're ready.

The organizations that succeeded with DevOps, Cloud Native, Kubernetes, and Agile didn't try to boil the ocean. They started with Path 1 (incremental improvement), learned what worked, then built on it.

The sequence looks like:

1. Pick one path

Based on where your biggest bottleneck actually is — not where leadership wants to be, but where you genuinely are today.

2. Get the prerequisites right

Not optional. The organizations that skipped this step are the cautionary tales in every post-mortem I've read.

3. Ship something and learn from it

Not from strategy documents. A shipped feature teaches you more than six months of planning.

4. Then consider the next path — if it makes sense

You might stay at AI Enhanced forever and be completely successful. You might move to AI Experience. You might skip to AI Builder. All valid — if the prerequisites are there.

Intentionality, not inevitability. You're not on a conveyor belt toward all three. You're making deliberate choices about where the value actually is.

The data backs this up, uncomfortably: Gartner projected that 30% of GenAI proof-of-concepts would be abandoned by the end of 2025 — not because the technology failed, but because of poor data quality, escalating costs, and unclear business value. That deadline has passed. The organizations that skipped the prerequisites are already writing the post-mortems.

The Landscape Is Shifting (As You Execute)

This framework was built on the lessons of the past five years. But the ground is moving in real time, especially for teams on the AI Builder path.

In 2025, most PMs using AI were using copilots — autocomplete for decisions, research, writing. The paradigm was: you prompt, it responds. That's still happening. But in 2026, the leading-edge teams are doing something different: they're managing agents. Not features. Not prompts. Autonomous workflows that run, decide, and act.

The shift: From "I use AI to help me think" to "I orchestrate AI systems that act." McKinsey calls this the "agentic organization." The PMs who navigate it well aren't the ones who know the most about LLMs — they're the ones who know how to design for trust, failure, and feedback at scale.

If you're building toward Path 3 (AI Builder), agents are no longer optional — they're the next prerequisite to understand. The framework above still applies. The sequence still matters. But the destination itself is evolving faster than the path you're walking.

Coherence Is Achievable

The last 10 years gave you a playbook. You've seen what happens when organizations rush transformations without the right prerequisites — and what happens when they move deliberately instead. You already know the questions to ask.

AI adoption doesn't have to take 10 years. It doesn't have to be reckless. But it doesn't happen by accident either. It happens when you:

01. Pick one path — not three.

Precision beats ambition. One path done well beats three paths done mediocrely.

02. Audit your prerequisites ruthlessly.

Not "we're close." Not "eventually." Do you have this right now? Be honest.

03. Ship something and learn from reality.

A shipped feature teaches you more than six months of planning. Strategy documents don't tell you what's true — your users do.

04. Let your wins inform what's next.

You might stay at AI Enhanced forever and be completely successful. Or your wins give you the capital and confidence to go further. Both are fine.

That's coherence on purpose. And it's fully within your reach.

From Someone Who Watched This Unfold