Executive Summary
We looked at 57,793 GitHub issues from Next.js to find out what developers really struggle with. Instead of relying on survey answers, we focused on the problems that actually lead people to file issues.
Key Findings:
- Server-side rendering complexity: 19,277 issues (33% of total)
- Configuration and integration pain: 14,450 issues (25% of total)
- Resolution rate: Only 29% of issues get resolved; another 52% receive discussion but no resolution
- Build performance concerns: 8,511 issues (15% of total), despite build speed being a core promise of the framework
Why this matters: Your Jira board shows what your team delivers, but it doesn't explain why progress slows down. Framework friction, dependency issues, and architectural choices can all add up and slow your team over time.
The Three Big Pain Points
1. Server-Side Rendering (19,277 issues)
SSR is here to stay, but it's tough to get right in production. Developers face challenges like error handling, server components, hydration bugs, and the ongoing struggle to manage server and client boundaries.
Here's what your workflow tools miss: Your sprint velocity might look fine, but if developers spend hours debugging hydration issues that never become tickets, they aren't building new features. Instead, they're dealing with architectural debt.
2. Configuration Hell (14,450 issues)
React version conflicts, webpack configs, and TypeScript integration issues. When your framework touches the entire ecosystem, every version bump is a potential minefield.
There's a hidden cost: Developers often solve problems on their own without creating tickets. Your Jira board shows closed stories, but it doesn't show the 40 hours spent last quarter fixing webpack configurations.
3. Build Performance (8,511 issues)
Turbopack, babel, webpack, optimization — the word cloud screams it. Developers are obsessed with build speed, but they're also frustrated by experimental features that break existing setups.
Here's the problem: Slow builds affect every feature delivery. Your CI metrics show longer build times, and your sprint board shows completed points, but neither tells you the true cost.
Why Your Current Tools Can't See This
Jira and Linear are excellent at tracking work. They answer:
- What are we building?
- How fast are we shipping?
- Where are process bottlenecks?
But they can't track why complexity builds up. They miss:
- Why velocity dropped after the framework upgrade
- Which architectural decisions create a support burden
- Where developers burn hours that never become tickets
I'll be honest — this gap is where most engineering orgs live. Good people, solid process, metrics look healthy. But somehow, you need more headcount every year to maintain the same output.
What Different Leaders Should See Here
Engineering Leaders: The Velocity Tax You Can't See
Here's a quick example: If each of your 10 engineers loses 2 hours a week to framework issues (and that's a conservative estimate), that's roughly 1,000 engineer-hours, or more than $100,000 a year at typical loaded rates.
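The arithmetic behind that figure is easy to make explicit. This sketch assumes a fully loaded cost of $110 per engineer-hour and 48 working weeks per year; both are assumptions for illustration, not figures from the analysis:

```python
# Back-of-envelope "velocity tax" calculation with assumed inputs.
engineers = 10
hours_lost_per_week = 2    # the conservative estimate from the text
weeks_per_year = 48        # assumption: ~4 weeks of leave and holidays
loaded_hourly_cost = 110   # assumption: fully loaded cost per engineer-hour

annual_hours = engineers * hours_lost_per_week * weeks_per_year
annual_cost = annual_hours * loaded_hourly_cost

print(f"{annual_hours} engineer-hours ~= ${annual_cost:,} per year")
```

Plug in your own team size and loaded rate; the point is that even a "small" weekly loss compounds into a full-time salary's worth of invisible cost.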
Your Jira shows consistent velocity. Your budget shows rising headcount. Neither explains why you need more people for the same output.
The answer is often hidden in GitHub issues, Slack threads, and casual conversations. Complexity builds up faster than your team can handle.
Product Leaders: When Roadmaps Meet Reality
You've seen this before: Engineering commits to a timeline in planning. Two sprints in, velocity drops. The team is working hard, but "technical complexity" keeps eating estimates.
What your workflow tools show:
- Story points completed: Consistent
- Sprint commitments: Met (mostly)
- Team utilization: High
What they don't show:
- The Next.js upgrade that seemed routine but triggered 40 hours of debugging
- The "simple" integration that surfaced configuration issues across three repos
- The experimental feature that promised performance gains but added a maintenance burden
Here's the disconnect: Your roadmap assumes stable velocity. But when your engineering team is navigating the kind of framework complexity that produced 14,450 configuration issues, that velocity isn't stable — it's slowly eroding.
The planning problem: You can't account for what you can't measure. When engineering says "this will take longer than expected," is it because they're being conservative, there's legitimate technical complexity, or framework debt is compounding?
Without visibility into where complexity accumulates, every planning conversation becomes a negotiation instead of a data-driven discussion.
What this type of analysis shows you: If Next.js's Build and Runtime Optimization cluster is showing "attention" status, that's a signal. Before you commit to adopting Turbopack or other experimental features, you'd know: "This will likely generate support burden. Budget accordingly."
That's the conversation most PM-Engineering relationships are missing — not "can we build it?" but "what's the total cost of building and maintaining it?"
OSS Foundation Leaders: Early Warning Signals
Next.js looks healthy by traditional metrics: 1,502 active contributors, consistent commits, and an engaged community with 136,863 stars. But that Build and Runtime Optimization topic cluster showing "attention" status? That's an early warning invisible to stars and forks.
When you're managing dozens of projects, you need to know which ones are accumulating a maintenance burden faster than contributor capacity can keep up. Activity metrics measure throughput, not sustainability.
Enterprise Platform Teams: Beyond CVEs and Licenses
The Configuration and Integration cluster (14,450 issues, or 25% of all issues) should worry anyone managing a dependency portfolio. Each issue could become a production problem during upgrades.
Your security team tracks CVEs, and legal tracks licenses. But who is watching whether your dependencies can handle their maintenance load? Log4j's warning signs showed up months early in these same patterns.
The Innovation Tax Pattern
Watch the cycle:
- New feature ships (Turbopack, better optimization)
- Issues flood in (build failures, config problems)
- Team triages, documents, and patches
- Meanwhile, more features ship
- Previous features are still generating issues
- Repeat
Each feature promises improvement. Each adds complexity. Each generates hundreds of issues. This isn't a bug in Next.js — it's the cost of pushing boundaries.
Smart teams budget for this. They track it. They make explicit trade-offs.
Most teams just feel slower over time and can't articulate why.
What Makes GitHub Issues Different
Developers don't file issues for minor annoyances. They file issues when they're blocked.
So when 14,450 issues cluster around configuration, that's 14,450 moments when:
- Documentation wasn't enough
- Stack Overflow didn't have the answer
- Someone needed help
Traditional survey approach: "How satisfied are you with our documentation?" → 7.5/10
Issue analysis reveals: "Here are the 47 specific gaps causing the most friction, ranked by frequency and resolution time."
Opinion vs. evidence. Sentiment vs. revealed behavior.
What We Built and Why
This analysis represents weeks of building the right tooling to process:
- 57,793 issues ingested and analyzed
- Topic modeling and text analysis across issue titles and bodies
- Resolution funnel tracking showing 81% get discussion, but only 29% reach resolution
- Health scoring across multiple dimensions
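The resolution funnel mentioned above reduces to a small computation over issue records. A minimal sketch, assuming each record carries a comment count and a resolved flag (field names are hypothetical, and the sample data is illustrative):

```python
# Hypothetical issue records; fields are illustrative, not the actual schema.
issues = [
    {"comments": 4, "resolved": True},
    {"comments": 2, "resolved": False},
    {"comments": 0, "resolved": False},
    {"comments": 7, "resolved": True},
    {"comments": 1, "resolved": False},
]

total = len(issues)
discussed = sum(1 for i in issues if i["comments"] > 0)   # got any discussion
resolved = sum(1 for i in issues if i["resolved"])        # reached resolution

print(f"discussed: {discussed / total:.0%}, resolved: {resolved / total:.0%}")
# With the real dataset, the funnel reads 81% discussed -> 29% resolved.
```

Run against the full 57,793-issue corpus, the gap between those two percentages is the pool of acknowledged-but-unfixed friction.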
Here's the thing: your workflow tools are built for execution. Jira and Linear are excellent at what they do.
We built Beyond The Alignment for insight. To answer questions your workflow tools can't:
- Where is complexity accumulating faster than capacity?
- Which decisions generate disproportionate support burden?
- What's the true maintenance cost of that "innovative" feature?
- Which dependencies are sustainable vs. drowning?
The signal exists in GitHub issues, documentation gaps, Stack Overflow questions, and community discussions. You're just not systematically extracting it.
The Real Story
Next.js isn't broken. The 57,793 issues represent an engaged community holding maintainers accountable to high standards. With 3,194 active pull requests and 1,502 contributors, this is a healthy, vibrant project.
But here's what the analysis reveals: Innovation's cost isn't just shipping features. It's the sustained maintenance burden those features create.
Your workflow tools track outputs. They don't track costs.
That gap between what you deliver and what it takes to keep delivering is where velocity drops, budgets grow, and good engineers burn out, even when the metrics look healthy.
Look, every engineering leader has felt this:
- Metrics look fine
- The team is talented
- Process is solid
- But somehow we're slower than last year
Usually, the answer isn't people or process. It's the accumulated complexity you can't see clearly enough to manage intentionally.