When iA Writer released their updated authorship tracking system in November 2025, they made AI-generated text visible in a way that's impossible to ignore: rainbow colors bursting across your document. Every paste from ChatGPT, every Apple Intelligence suggestion, every Claude collaboration shows up like a rainbow parade on your screen.

The feature is genuinely useful. Visibility matters. But the design philosophy embedded in those rainbow colors deserves closer examination.

"Don't let the machine speak for you," their announcement declares. "Make it better, make it real, let the text speak in your voice." The implicit message: AI text is artificial until you transform it into something authentic. The rainbow is a problem to be solved, colors to be eliminated, foreign material to be transmuted into something genuinely yours.

But what if this framing—however well-intentioned—misses something essential about how intellectual work actually happens?

The Categorical Trap

We're witnessing a curious convergence of seemingly opposite advice about AI:

Seth Godin writes "Delegate Everything," arguing we should use AI and outsourcing to handle all delegatable work, reserving our energy for what only we can do.

iA Writer argues the opposite: make AI text yours by rewriting it, ensure your voice dominates, eliminate the rainbow until only your words remain.

Both positions share an assumption: there's a clear line between "yours" and "not yours," and staying on the right side of that line is a moral and practical imperative.

Both are reacting to the same legitimate fear: abdication. The person who types "AI, write my article about climate change" and publishes without reading. The executive who delegates everything including their thinking. The race to the bottom where authentic human input vanishes entirely.

But in their urgency to prevent abdication, both risk creating a different problem: categorical thinking that can't distinguish between thoughtless delegation and intentional collaboration.

Where Ownership Actually Lives

This summer, I built twenty apps and shipped eleven of them to the App Store. I'm not a professional developer—my coding experience before this was basic HTML from 2003. Without AI assistance, none of these apps would exist.

Did I "own" the code that shipped? By iA Writer's rainbow logic, maybe not—much of it was AI-generated. By Godin's delegation framework, perhaps not—I outsourced implementation to AI systems. Yet I'm completely responsible for what those apps do and don't do. I chose the problems. I defined the constraints. I made thousands of decisions about what features would serve users and which would be feature creep. I tested, rejected entire approaches, redesigned from scratch multiple times.

The question "who typed the code?" feels increasingly irrelevant compared to "who directed the intention and accepts responsibility for the outcome?"

Consider how intellectual work has always functioned:

A composer doesn't personally perform every instrument in the orchestra, yet we don't question whether the symphony is "theirs." The composer directed the intention, evaluated the performance, and accepts credit or criticism for the result.

A researcher works with assistants who conduct experiments, gather data, and even draft sections. But the researcher directs the inquiry, interprets findings, and publishes under their name with full responsibility.

An author works with editors who restructure arguments, rewrite sentences, and delete entire sections. The better the editor, the more the final text differs from what the author originally wrote. Yet it remains the author's book.

Ownership doesn't reside in who physically created each component, but in who directed the intention, evaluated the material, and accepted responsibility for the impact.

The Real Distinction: Amplification vs Abdication

In my article on cybernetic amplification, I distinguished between two ways AI can integrate into cognitively demanding work:

Cybernetic Collaboration: Back-and-forth with AI during thinking. Attention scattered across prompting, reviewing, and debugging AI outputs. This fragments focus and, as Cal Newport notes when discussing programmer productivity studies, can actually make people slower.

Cybernetic Amplification: AI eliminates mechanical interruptions so human attention stays focused on higher-order problems. Not easier work—more intense work, with friction removed from lower-level concerns.

The productivity explosion I experienced came from amplification: AI handled "what's the syntax for this SwiftUI component?" so I could maintain focus on "what constraint makes this app valuable?"

Building FocusAnchor: An example of both cybernetic amplification (AI handling syntax and boilerplate) and collaboration (exploring approaches together)
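
To make the "mechanical friction" concrete: below is a minimal sketch of the kind of SwiftUI boilerplate I would hand off rather than look up—a countdown label that ticks once per second. The names and defaults here (FocusAnchorTimerView, the 25-minute block) are illustrative placeholders, not the actual FocusAnchor source.

    import SwiftUI
    import Combine

    // Illustrative sketch only; the view name, default duration, and
    // styling are hypothetical, not the shipped app's code.
    struct FocusAnchorTimerView: View {
        // Remaining time in seconds; starts at a 25-minute focus block.
        @State private var secondsRemaining = 25 * 60

        // Publishes a tick once per second on the main run loop.
        private let tick = Timer.publish(every: 1, on: .main, in: .common).autoconnect()

        var body: some View {
            Text(formatted(secondsRemaining))
                .font(.system(size: 48, weight: .semibold, design: .monospaced))
                .onReceive(tick) { _ in
                    if secondsRemaining > 0 { secondsRemaining -= 1 }
                }
        }

        // Formats 1500 seconds as "25:00".
        private func formatted(_ seconds: Int) -> String {
            String(format: "%02d:%02d", seconds / 60, seconds % 60)
        }
    }

None of this is interesting work. Handing it off is exactly what kept my attention on the question that was interesting: what constraint makes the app valuable?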

But here's what I didn't fully articulate in that piece: sometimes collaboration produces something valuable too. Sometimes AI suggests an approach I hadn't considered. Sometimes its formulation is clearer than mine would have been. Sometimes the back-and-forth generates insight neither human nor AI would reach alone.

The mistake is treating these ways of working as if they were abdication.

Abdication: "AI, write this article. I'll publish without reading."

Mindless delegation: "AI, write this. Looks fine. Ship it."

Amplification: "AI, eliminate this friction so I can focus on the problem that matters."

Collaboration: "AI, show me alternatives. I'll evaluate and choose."

Curation: "This AI formulation is clearer than anything I'd write. I'm keeping it."

The last three are legitimate. The first two are genuinely problematic. But categorical frameworks—"delegate everything" or "eliminate all rainbow text"—can't distinguish between them.

A Better Framework

Instead of asking "how much AI is acceptable?", we should ask four questions:

1. Did I set the direction?

Not "did I type the words?" but "did I determine what problem we're solving, what question we're asking, what argument we're making?"

When I created that Socratic dialogue about AI safety, I chose the source material, framed the philosophical tension, and defined what made the conversation worth having. That Claude drafted the actual dialogue doesn't diminish my directorial role.

2. Did I evaluate the material critically?

Not "did I rewrite everything?" but "did I assess whether the output serves the reader, makes valid arguments, and represents thinking I'm willing to defend?"

Some AI formulations I keep because they're clearer than I'd write. Some I modify. Some I delete entirely. The evaluation is what matters, not the revision percentage.

3. Am I transparent about the process?

This one isn't negotiable. If we're discussing AI systems that might deceive or pursue misaligned goals, hiding AI involvement in content about AI risk would be deeply hypocritical.

Within a few years, most intellectual content will involve AI collaboration to some degree. We need norms and practices for transparency now, while we can still draw clear lines.

4. Do I accept responsibility for the result?

If there are errors, weak arguments, or harmful implications—I'm accountable. Not the AI. Me.

This is where ownership ultimately resides. Not in the typing, but in the taking of responsibility.

If you answer yes to all four questions, the percentage of AI-generated sentences becomes irrelevant. If you answer no to any of them, you have a problem—regardless of how much you personally typed.

What the Rainbow Should Mean

iA Writer's rainbow visualization is valuable, but perhaps for different reasons than they emphasize.

The rainbow shouldn't mean: "This is foreign material that must be eliminated."

The rainbow should mean: "This hasn't been evaluated yet."

The problem isn't AI-generated text. The problem is unevaluated text. Text that got published without human judgment about whether it serves the reader, makes valid arguments, and represents thinking worth defending.

Some AI text will deserve to stay rainbow—placeholder material, brainstorming fragments, drafts awaiting revision. Some will deserve to become "yours" through the act of critical evaluation and conscious choice to keep it, even if you don't change a word.

The goal isn't eliminating color. The goal is ensuring everything in your document—regardless of origin—has passed through your critical judgment.

The Constraint That Matters

DigTek's philosophy centers on constraint-driven design: solving one problem beautifully rather than trying to be everything to everyone.

This applies to how we think about AI collaboration.

The constraint isn't "minimize AI involvement" or "maximize delegation." The constraint is: maintain the capacity for critical evaluation while using tools that enhance rather than replace your judgment. If AI eliminates friction from lower-level concerns so you can focus on higher-order problems—use it. If AI suggests approaches you can evaluate and choose between—collaborate with it. If AI formulates something clearly and you've verified it's accurate—keep it.

But if AI is doing your thinking for you, if you're publishing without evaluation, if you can't explain or defend what's in your document—you've violated the constraint that actually matters.

A Modest Proposal

Perhaps the people most worried about AI-generated text are making the same mistake as those who delegate thoughtlessly: focusing on mechanism rather than meaning.

The question was never "who typed it?" The question has always been "does it serve the reader, and are you responsible for it?"

The rainbow problem isn't that AI-generated text exists. The rainbow problem is thinking we can solve the accountability challenge through color-coding and categorical rules rather than developing better judgment about when to use tools and when to think for ourselves.

iA Writer's visualization is valuable not because it tells you to eliminate AI text, but because it makes invisible choices visible. What you do with that visibility—whether you rigidly rewrite everything or thoughtfully curate—depends on judgment that no color scheme can provide.

Similarly, Godin's "delegate everything" framework is valuable not as a literal prescription but as "permission" to use available tools strategically. The wisdom lies in his caveat about hiring yourself for work that builds insight or skill, and in his warning against racing to the bottom.

Both tools. Both useful. Neither sufficient without the judgment to know when categorical thinking helps and when it obscures what matters.

The future of intellectual work won't be defined by how much we delegate or how thoroughly we eliminate rainbow colors. It will be defined by whether we maintain the capacity for critical evaluation, transparent process, and genuine responsibility—regardless of which tools we use or whose fingers typed the words.


This article emerged from human-AI collaboration. I (the human) set the direction, structured the argument, and evaluated all claims. Some formulations originated from Claude, others from me, most from genuine back-and-forth where the boundary blurred. All have been evaluated and contextualized by me. The ideas build on previous articles about cybernetic amplification and AI transparency, extending rather than contradicting earlier positions. If you find errors or disagree with arguments, I'm accountable.