"You'll also learn new words along the way — words you didn't know existed. I still remember hearing idempotent for the first time and thinking I had no frame of reference for what it even described. But it would become a governing principle of infrastructure as code and every discipline that followed."
The Last 10 Years Taught Us Something We Can't Afford to Forget
Over the past decade, across DevOps, Cloud Native, Kubernetes, and Agile, we saw that infrastructure and organization have to work together. These changes took years. Some teams failed, companies reorganized, and valuable knowledge slipped away. The teams that succeeded were the ones who took a step back, thought carefully about where new patterns fit, and figured out what needed to be in place first.
We learned that lesson at scale.
AI is speeding up this process. New capabilities are arriving faster, and the competition is intense. Companies that took their time with past modernization efforts can't afford to move that slowly again—not with AI and not with the current opportunity.
But here's what they can do: apply what they've learned faster.
The same ideas that made DevOps successful—clear structure, the right infrastructure, and focused use—also apply to AI. The difference now is that we can start by diagnosing what we need, instead of learning through mistakes. We can ask tough questions before making big changes.
The question isn't "How do we adopt AI?" It's "Where does AI actually create value for us, and are we ready for it?"
The Pattern Across a Decade (Four Lessons You Already Know)
DevOps (2010-2014): The First Structural Lesson
DevOps transformed how fast teams could ship. The organizations that really unlocked it recognized it as a structural imperative: if you want velocity, developers and operations can't be separate tribes. Infrastructure is part of the product delivery process.
The lesson: Align structure with capability from the start, and velocity follows.
Organizations that succeeded:
- Had engineering leadership that understood infrastructure mattered
- Were willing to break up functional silos
- Invested in automation before splitting teams
- Saw deployment frequency increase 10x within 18 months
What didn't work:
- Hiring a DevOps engineer and hoping they'd magic away organizational friction
- Keeping the old structure (Eng → DevOps → Ops) and expecting it to deliver velocity
- Treating it as an IT initiative instead of a product delivery transformation
Here's the key point: The companies that succeeded didn't just move faster. They changed how their teams could innovate. That's the benefit of being intentional.
Cloud Native (2014-2018): The Architectural Constraint
Cloud Native unlocked massive scalability. Organizations rethought how applications were built—stateless, distributed, resilient, observable. The ones that thrived didn't just move to the cloud; they rethought their entire architecture around it.
The lesson: A new way of working requires a new way of thinking and new foundational capabilities. Get both right, and scale becomes possible.
The organizations that thrived:
- Started with the data problem first (how do we scale state?)
- Built observability into the platform before teams needed it
- Were intentional about which monoliths to decompose first
- Ended up with systems that could scale 100x without rewriting
What they avoided:
- Decomposing monoliths without solving the data consistency problem
- Splitting teams into microservices but ignoring observability
- Assuming moving to the cloud would automatically change how they built applications
The reward: Companies that did this well didn't just get faster. They could scale almost without limit. That's a competitive advantage that keeps growing.
Kubernetes (2015-2020): The Maturity Tax
Kubernetes unlocked operational scale that wasn't possible before. The organizations that mastered it didn't just run containers—they fundamentally changed how they deployed and scaled software.
The lesson: More powerful tools require higher baseline maturity. But when you get the maturity right, the tool unlocks possibilities you couldn't access before.
The organizations that thrived:
- Built platform teams first, then gave application teams Kubernetes
- Invested in observability and incident response before complexity
- Were ruthless about "is this application ready for this level of operational complexity?"
- Ended up with systems they could deploy hundreds of times per day
What they avoided:
- Running Kubernetes without a platform team (leads to burnout, not speed)
- Migrating to Kubernetes without updating deployment pipelines
- Expecting Kubernetes to fix organizational chaos (it amplifies it instead)
The payoff: Organizations with mature platforms could make changes quickly and confidently. This speed advantage grows even more important in competitive markets.
Agile (2005-2019): The Organizational Alignment Problem
Agile transformed how teams think about shipping value. The organizations that really won with Agile didn't just adopt ceremonies—they redesigned their entire delivery system around small, independent teams.
The lesson: Structure is only as effective as its supporting systems. Get them aligned, and velocity compounds.
The organizations that won:
- Updated their CI/CD to support frequent deployments before splitting teams
- Redesigned data governance to give teams autonomy without chaos
- Were honest about what would break and fixed it proactively
- Realized that ceremonies matter only if the infrastructure supports them
- Went from quarterly releases to weekly shipping within 12 months
What derailed teams:
- Splitting into 12 small teams but keeping quarterly deployment cycles
- Adopting scrum ceremonies without updating CI/CD
- Restructuring without rethinking data governance
- Treating Agile adoption as a team-level change instead of a systems-level transformation
The payoff: Companies that succeeded could respond to customer feedback faster than anyone else. That advantage lasted for five to ten years.
The Common Thread
Across all of these—DevOps, Cloud Native, Kubernetes, Agile—there's a pattern:
- A new capability emerges (continuous integration, distributed systems, orchestration, iterative delivery)
- Organizations want it (faster deployments, better scale, responsiveness, velocity)
- Organizations either rush adoption or do it deliberately
The ones that rushed said:
- "Let's adopt the practice" (without updating the underlying structure)
- "Let's hire experts in the new thing" (and ignore the systemic gaps)
- "Let's declare ourselves transformed" (and wonder why nothing changed)
The ones that succeeded said:
- "Where does this actually belong in our operating model?"
- "What maturity do we need first?"
- "What breaks if we do this, and can we fix it?"
They focused on what mattered, acted with intention, and succeeded.
Why AI Is Different (And Why That Matters Now)
AI isn't like the technologies of the last decade in one critical way: the capability curve is accelerating, and competitive pressure is immediate.
With DevOps, you had 4-5 years before it mattered. With Kubernetes, you had maybe 2-3 years. With AI, the organizations making decisive moves now will set the trajectory for the next 5-10 years. Waiting isn't an option.
But moving too quickly without a plan is also risky.
This is where the lesson from the last decade becomes crucial. You have a playbook. You know what questions to ask. You know where alignment breaks. You know how to diagnose readiness.
The opportunity is this: compress the learning cycle.
Instead of spending 10 years discovering where AI fits, you can ask deliberately from the start.
This framework isn't a step-by-step plan where you must do everything. Instead, it's a tool to help you diagnose your situation and:
- Identify which team types you actually need
- Audit whether your maturity supports them
- Spot where your structure and capability are misaligned
- Move intentionally instead of frantically
Three Paths Forward (Pick One and Win)
What's different now, compared to the last decade, is that you don't have to take on everything at once or overhaul your whole company.
You can be precise and focused.
There are three distinct roles AI plays in product teams, each with a clear prerequisite, a clear value unlock, and a clear path to success. The question isn't "should we do all three?" It's "which one makes sense for where we are right now?" And critically: which one matches both your mandate and the infrastructure that's actually ready — not the infrastructure you're planning to have.
Path 1: AI Enhanced
What it is: AI-capable product managers embedded with engineering teams, integrating AI into existing workflows and products.
The opportunity: This is the most achievable starting point. You're not reorganizing. You're adding capability to teams that already work.
Why Start Here
- Shortest path to value (3-6 months for first win)
- Lowest organizational friction
- Fastest learning (you'll know what you don't know)
- Teams stay intact—no reorganization risk
What You Actually Need
The Value Unlock (Real Examples)
- AI-powered search (semantic search, not keyword matching)
- Automated customer support (questions answered before support ticket created)
- Content generation at scale (personalized emails, product descriptions)
- Faster feature iteration (AI handles grunt work, engineers focus on UX)
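The first value unlock above, semantic search, is worth making concrete. The structure of a semantic search feature is: embed the query, embed each document, score by vector similarity, return the best match. A minimal sketch, using toy bag-of-words vectors where a real system would use learned sentence embeddings (the toy vectors only match shared words; model embeddings match meaning, which is the whole point):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in: a production system would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Counter returns 0 for missing tokens, so the dot product just works.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "reset your password from the account settings page",
    "pricing tiers and billing cycles explained",
    "how to invite teammates to a workspace",
]
query = "change my password"
# Rank every document against the query and take the closest match.
best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
```

Swapping `embed` for a real model is the only change needed to turn this keyword matcher into semantic search; the retrieval structure stays identical.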
Real Costs (Be Honest)
The Real Trade-off
What you gain: Fast feedback. Shipped features. Team learning. Political capital for bigger moves.
What you trade: Some engineering velocity (AI integration takes time). Potential for AI features to feel bolted-on if not well integrated.
The real-world win: A company shipped AI-powered search in Q2, improved time-to-value for customers by 40%, proved the model to leadership, and had the political capital to fund AI Experience work in Q4. That's momentum.
Path 2: AI Experience
What it is: A dedicated AI product and data team building AI-powered user experiences—recommendation engines, personalization, predictive features. This is where you start competing on experience, not just features.
When to Consider This
- You've shipped Path 1 and learned what works
- User experience differentiation is a competitive lever
- You have 18+ months and budget for a dedicated team
- Your data foundations are clean enough to build on
What You Actually Need
The Value Unlock (Real Examples)
- Recommendation engines (Netflix model: "next thing to watch")
- Personalization at scale (content, pricing, layout)
- Predictive features (churn prediction, demand forecasting)
The scale of this: Amazon attributes ~35% of revenue to its recommendation engine. Netflix estimates personalization saves ~$1B/year in avoided churn. These aren't edge cases — they're the business model.
Real Costs (Don't Underestimate This)
The Real Trade-off
What you gain: A competitive moat in user experience. Engagement metrics that move. Customer data becomes an asset, not a liability.
What you trade: Speed on feature work (the team is focused on AI). Months of data work before shipping. High organizational complexity.
Path 3: AI Builder
What it is: A foundational model team building core AI infrastructure. You're not just using AI; you're building the platform that other teams use. This is for companies whose competitive advantage is AI.
When to Consider This
- AI is core to your business thesis, not an add-on
- You have 3-5 years and serious budget commitment
- You have world-class ML infrastructure and talent (or can hire it)
- Your data is a strategic asset, not an operational byproduct
What You Actually Need
The Value Unlock (Real Examples)
- Proprietary models trained on your data
- Competitive moat that's hard to replicate
- Speed at scale once platform is built
- Industry leadership position
Real Costs (This Is Serious Commitment)
The Real Trade-off
What you gain: A real competitive moat. Models no one else has. Speed and scale at a level competitors can't match.
What you trade: Years of runway before proving ROI. Focus on other products. Speed on non-AI initiatives. Cultural cohesion.
These team archetypes are informed by Building AI-Powered Products by Marily Nika, adapted here with additional context on failure modes, prerequisites, and organizational sequencing.
The Quick Reference Matrix
This matrix is your at-a-glance guide for understanding the three paths. Use it to find yourself in the row that matches your constraints and ambitions. This isn't a menu to pick all three. It's a diagnostic tool to help you pick the one that makes sense right now.
Read across each row to understand the full picture of what commitment you're making. Pay attention to the "Time to Win" and "Budget" columns — these are the constraints that will determine what's actually feasible for your organization.
How to read this:
- Time to Win: When you'll see meaningful traction. Measure this from commit date to first real user impact, not from theoretical launch.
- Budget: Total fully-loaded annual cost: headcount (salary + benefits + overhead), infrastructure (cloud compute, data storage), and API costs (LLM calls, embeddings). These are realistic numbers from actual organizations, not best-case. If you're below these ranges, you're probably underfunding. If you're significantly above, you're either over-staffed or doing something more ambitious than the path suggests. Budget for the high end and adjust down if reality allows.
- Org Friction: How disruptive this is to your existing structure. Low friction means existing teams stay intact. High friction means restructuring and cultural change.
- Structure: Where the work lives. "Embedded" means it's part of normal operations. "Separate team" means new hiring and new culture. "Platform team" means foundational investment that other teams depend on.
The Grounding Principle: AI Augments the Staff, Never Replaces Them
Before you decide which path to take, lock in this principle. It's the one I've seen organizations violate most often — and the one that causes the most damage when they do.
AI is very good at finding inefficiencies. It will surface redundancies. It will show you where three weeks of work can now be done in hours. And when that happens, there's a reflex — in finance, in leadership, sometimes in the board deck — to read that as a headcount reduction opportunity. Cut the team. Capture the savings. Show the ROI.
That reflex is usually wrong. The efficiency isn't a mandate to replace people — it's an opportunity to redeploy them. The staff who used to spend three weeks on pattern analysis now have three weeks to spend on the work that actually requires human judgment: the relationships, the edge cases, the contextual decisions that no model gets right the first time. Organizations that treat AI savings as a way to do more with the same people consistently outperform the ones that treat it as a way to do the same with fewer.
Human involvement is integral by design. AI-enabled staff remain central to every decision. AI accelerates the analytical work. People own what the analysis means and what happens next.
— Jamil Jadallah
Three Design Patterns for Success
Here are three patterns I've seen work repeatedly. Not theories — things I've watched teams do under real pressure.
Pattern 1: Create Shared Language
The Challenge: "Personalization" means something different to Sales, Product, and Engineering. If you don't fix this, you build three different things.
The Fix:
- Monthly ceremony: Sales, Product, Eng, Legal sit down for 90 minutes and audit shared definitions. What does "personalization" mean to us? Right now.
- Living document: Everything goes in Confluence or Notion. Not Slack. Not emails. Searchable, version-controlled, findable.
- One person owns semantics: Sounds weird, but assign one person to be the "meaning keeper" across the org. When there's drift, they force a conversation.
- Reward clarity over speed: In the retro, celebrate when someone caught semantic drift. Make it safe to say "we're building different things."
The upside: When teams share meaning, they move at speed. I've seen teams compress what should be 18 months into 9 months just because they were building the same thing and could move in parallel.
Pattern 2: Fund Innovation with Efficiency
The Challenge: You can't build the future when 70% of your budget goes to maintaining the past.
The Fix:
- Audit first: Calculate what percentage of engineering capacity goes to maintenance versus innovation. Right now.
- Reinvest efficiency: Path 1 (AI Enhanced) often creates 20-30% efficiency gains in drafting or support. Don't bank that as savings. Reinvest that freed capacity directly into retiring technical debt.
- Level Up: Once you've reclaimed capacity, you can afford the bigger bets.
The math that actually works: A company with a 2-year modernization roadmap can compress it to 1 year if they use AI Enhanced wins to fund legacy cleanup. That's not just faster; the gains compound. You ship AI features, you modernize infrastructure, and you prove you can execute. And the people who created those efficiency gains? They're the ones best positioned to build what comes next. Don't cut them. Redirect them.
Pattern 3: Map the Human Network
The Challenge: Institutional knowledge lives in people's heads. Restructuring breaks those networks.
The Fix:
- Before restructuring: Map collaboration networks. Who talks to whom? How often? That's your informal knowledge transfer.
- Documentation sprint: Have senior people spend time documenting gotchas, constraints, rules that live in their head. Do this before restructuring.
- Access to context: Give the new team access to those people for at least 6 months. Make knowledge transfer explicit, not assumed.
The payoff: Organizations that preserve institutional knowledge can restructure without losing momentum. Teams move faster because they're not rediscovering constraints the hard way. That's a structural advantage worth years of speed.
The Decision Framework (Get Honest Answers)
Strategy meets reality here. You have three paths. You have the prerequisites. But this isn't a checklist you work through alone. This is a conversation you need to have with yourself and your leadership. Get honest about the answers. If your team is telling a different story than you are, that's the real work.
Where's the bottleneck that hurts most?
Product velocity? Customer experience? Competitive differentiation? If you could solve one thing, what would unlock the most value? Not what leadership wants—what's actually true.
Which path directly addresses that bottleneck?
Does it have to be all three, or is one path enough? Most companies are better off with one path done really well than three paths done mediocrely.
Do we actually have the prerequisites?
Not "could we eventually." Not "we're close." Do we have this right now? If it's 80%+ ready, you're good. If it's less, you have something to work on first.
Can the whole org say the same thing about what we're doing?
Engineering, Product, Data, Leadership—are they all telling the same story? If not, why not? What's the misalignment? Fix that before you move.
What's our time horizon and budget?
Quick wins in 6 months? 18-month transformation? 3-year platform bet? Your honest answer determines your path. It's not better or worse—it's just honest.
If you can answer all five clearly—truly clearly, not hopefully—you have your roadmap. You know what to do next.
If you can't answer one cleanly, don't force it. That's your work for the next week. Get the team in a room. Have the conversation. Find the alignment. Clarity is speed.
The Sequence, Not the Destination
Here's the thing: some companies will do all three. Some will do one. Both are fine.
What matters is this: you don't do them simultaneously, and you don't do them before you're ready.
The organizations that succeeded with DevOps, Cloud Native, Kubernetes, and Agile didn't try to boil the ocean. They started with Path 1 (incremental improvement), learned what worked, then built on it.
The sequence looks like:
Pick one path
Based on where your biggest bottleneck actually is — not where leadership wants to be, but where you genuinely are today.
Get the prerequisites right
Not optional. The organizations that skipped this step are the cautionary tales in every post-mortem I've read.
Ship something and learn from it
Not from strategy documents. A shipped feature teaches you more than six months of planning.
Then consider the next path — if it makes sense
You might stay at AI Enhanced forever and be completely successful. You might move to AI Experience. You might skip to AI Builder. All valid — if the prerequisites are there.
Intentionality, not inevitability. You're not on a conveyor belt toward all three. You're making deliberate choices about where the value actually is.
The Landscape Is Shifting (As You Execute)
This framework was built on the lessons of the past five years. But the ground is moving in real time, especially for teams on the AI Builder path.
In 2025, most PMs using AI were using copilots—autocomplete for decisions, research, writing. The paradigm was: you prompt, it responds. That's still happening. But in 2026, leading-edge teams are doing something different: they're managing agents. Not features. Not prompts. Autonomous workflows that run, decide, and act.
If you're building toward Path 3 (AI Builder), agents are no longer optional — they're the next prerequisite to understand. The framework above still applies. The sequence still matters. But the destination itself is evolving faster than the path you're walking.
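The copilot-versus-agent distinction can be made concrete. A copilot answers one prompt and stops; an agent owns a loop of observe, decide, act, repeating until the job is done. A minimal sketch with hypothetical tool names; in a real agent, decide() calls a model rather than hard-coded rules:

```python
def decide(state: dict) -> str:
    # Stand-in policy: a real agent would ask a model to pick the next step.
    if not state["ticket_classified"]:
        return "classify_ticket"
    if not state["reply_drafted"]:
        return "draft_reply"
    return "done"

def act(action: str, state: dict) -> dict:
    # Illustrative tools; real ones would hit APIs, databases, or queues.
    tools = {
        "classify_ticket": lambda s: {**s, "ticket_classified": True},
        "draft_reply": lambda s: {**s, "reply_drafted": True},
    }
    return tools[action](state)

state = {"ticket_classified": False, "reply_drafted": False}
# The agent loop: no human prompts each step; it runs until decide() says done.
while (action := decide(state)) != "done":
    state = act(action, state)
```

The managerial shift the text describes lives in that while loop: you stop reviewing each response and start reviewing the policy, the tools, and the stopping condition.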
Coherence Is Achievable
The last 10 years gave you a playbook. You've seen what happens when organizations rush transformations without the right prerequisites — and you've seen what happens when they don't. You already know the questions to ask.
AI adoption doesn't have to take 10 years. It doesn't have to be reckless. But it doesn't happen by accident either. It happens when you:
- Pick one path. Precision beats ambition. One path done well beats three paths done mediocrely.
- Verify the prerequisites. Not "we're close." Not "eventually." Do you have this right now? Be honest.
- Ship and learn. A shipped feature teaches you more than six months of planning. Strategy documents don't tell you what's true — your users do.
- Sequence deliberately. You might stay at AI Enhanced forever and be completely successful. Or your wins give you the capital and confidence to go further. Both are fine.
That's coherence on purpose. And it's fully within your reach.