In a recent episode of Deep Questions, Cal Newport explored a fascinating productivity paradox: a study found that experienced developers were roughly 20% slower when using AI tools, despite widespread predictions of significant speedups. Cal's analysis introduced the concept of "cybernetic collaboration" to explain why splitting cognitive effort with AI systems can actually reduce the focus intensity that drives deep work productivity.
This analysis sparked questions about whether there might be different ways to integrate AI into cognitively demanding work — approaches that preserve rather than dilute focus intensity.
The METR Study: Understanding the Paradox
The study followed 16 experienced open-source developers working on real issues from their own repositories, with tasks randomly assigned to be completed with or without AI tools and averaging about two hours each. The result challenged conventional wisdom: developers were roughly 20% slower with AI, despite forecasts of 20-40% speedups.
Cal's exploration centered on what he called "cybernetic collaboration," in which developers were "spending time reviewing AI outputs, prompting AI systems, and waiting for AI generations" rather than "actively coding, reading, searching for information." This interactive back-and-forth reduced focus intensity, which deep work research suggests is the key ingredient for productive cognitive effort.
A Norwegian Summer Counter-Experiment
This summer, I conducted a real-world test of AI-assisted development productivity. Starting with basic HTML skills from 2003 — not a professional developer by any definition — I set out to see what was possible with AI as a development partner.
8 apps launched on the App Store between June and September 2025, with 2 more in review and 1 in development, plus a complete website, multiple Mac utilities, and even a board game prototype.
This productivity explosion was so intense that I built an app called "AppStorm" just to manage the portfolio of 40+ app concepts I was simultaneously exploring.
Exploring an Alternative: "Cybernetic Amplification"
Rather than following the "cybernetic collaboration" pattern Cal described, my approach evolved into something different: "Cybernetic Amplification" — where AI handles mechanical tasks so humans can focus more intensely on higher-order problems.
Cybernetic Collaboration
Back-and-forth with AI during thinking. Attention scattered across prompting, reviewing, and debugging AI outputs.
Cybernetic Amplification
AI eliminates mechanical interruptions. Human attention stays focused on higher-order product and design problems.
The developers in the METR study were caught in what Cal aptly described as a "back and forth dance" of prompting, reviewing, and debugging AI outputs. This scattered their attention across multiple cognitive domains simultaneously.
The amplification approach took a different path. Instead of using AI for thinking or conversing, AI served as a force multiplier that enabled deeper focus. This wasn't the "pleasant" experience of reduced cognitive load; it was intensely demanding work in which I used AI to eliminate the cognitive interruptions that traditionally break deep work flow:
- No more "wait, what's the syntax for this SwiftUI component?" (context switch eliminated)
- No more "let me Google this API documentation" (flow preserved)
- No more "is this crash a logic error or a typo?" (AI handles the obvious stuff)
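To make those bullets concrete, here is the kind of boilerplate SwiftUI component whose exact modifier syntax I would previously have stopped mid-thought to look up. This is a hypothetical illustration, not code from any of the shipped apps:

```swift
import SwiftUI

// A hypothetical settings row: trivial once written, but the exact
// Toggle/binding/modifier syntax is exactly the sort of mechanical
// detail that used to trigger a documentation search.
struct SettingsRow: View {
    let title: String
    @Binding var isOn: Bool

    var body: some View {
        Toggle(title, isOn: $isOn)
            .toggleStyle(.switch)
            .padding(.horizontal)
    }
}
```

With AI supplying this kind of mechanical detail on demand, the syntax question gets answered without ever leaving the product-level train of thought.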
This meant I could maintain "peak intensity of focus" for 2-3 hour blocks on pure product problems: What constraint makes this app valuable? How does the user experience serve their actual needs? What's the minimal viable feature set?
Why the METR Study Measured the Wrong Thing
The METR study measured time to complete specific programming tasks. But that's like measuring how fast mathematicians can solve individual equations rather than how fast they can prove theorems.
The real productivity question is: How much user value can you create per unit of deep work time?
By that measure, my summer was extraordinarily productive: 8 apps solving distinct user problems, each refined to its essential function through rapid iteration, with user feedback integrated across multiple development cycles.
The Speed-to-Insight Loop
The most powerful aspect wasn't coding faster, though that felt like magic; it was thinking faster through implementation.
The AI-collaboration model splits mental energy between the problem and the back-and-forth with the AI; the AI-amplification model redirects that energy toward higher-order product and design problems.
This speed created something I wouldn't have predicted: the ability to test 30+ concepts in a single summer and identify 8 that truly resonated. Without AI amplification, none of this would have been possible.
The Meta-Productivity Problem
Perhaps most tellingly, the productivity gains were so significant that they created new cognitive challenges:
- How do you manage a portfolio of 40+ simultaneous app concepts?
- How do you maintain quality across rapid development cycles?
- How do you prevent feature creep when development friction is so low?
These are fundamentally different problems than "How do I remember this Swift syntax?" They're higher-order challenges that required systematic thinking.
Interestingly, this productivity explosion aligned perfectly with DigTek's philosophy: constraint-driven design. AI made it possible to build focused tools quickly enough to resist feature creep. Instead of spending months on each app and being tempted to add "just one more feature," I could ship viable products and let them evolve in the world.
Different Paths, Different Outcomes
The METR study provides valuable insight into one way AI integration can backfire. The cybernetic collaboration pattern — constant back-and-forth with AI systems during cognitive work — appears to fragment attention in ways that reduce overall productivity, even when the experience feels more pleasant.
But my summer experiment suggests there may be alternative integration patterns worth exploring. When AI eliminates mechanical interruptions rather than creating collaborative loops, it might preserve the focus intensity that drives productive deep work while redirecting that intensity toward higher-order problems.
The question isn't whether AI makes work "easier" — both approaches involve intense cognitive effort. The question is whether AI can help us focus on creating more value for the world — if for nothing else than supporting one helluva joyride for a summer.
In a landscape where focus is increasingly rare and valuable, this distinction may matter more than the specific productivity metrics of individual programming tasks.
All 8 apps embody a "less, but better" philosophy — they solve one problem beautifully rather than trying to be everything to everyone. Perhaps the real lesson isn't about productivity tools at all, but about using constraints to create more focused, intentional work — whether the constraint comes from design philosophy or from how we choose to integrate AI into our cognitive processes.
See thedeeplife.com for all of Cal's episodes — episode 370 specifically explored the METR study findings and introduced the cybernetic collaboration concept. You'll also find other contributions to thoughtful discourse on technology, productivity, and the deep life.