The Poetry of Code: Part 1 - Building Your Digital Self

This is Part 1 of “The Poetry of Code” series - a practitioner’s guide to AI adoption for the uncertain and skeptical.


Recently, I sat in a meeting between product and engineering teams. We were discussing Kiro, an AI coding assistant that’s gaining adoption across our organization. I expected enthusiasm. What I got was resistance.

Three sentiments emerged in the room.

The first: AI tools can’t be trusted to produce quality work. The second: if you use AI tools to create something, how can that work possibly be supported? And underneath both, the broader fear: AI is coming for our jobs.

I asked a simple question: what AI tools are you using?

The answer was essentially none.

Judgment had been passed without trial. Resistance had formed without experience. These practitioners had decided AI tools weren’t trustworthy without ever seriously trying them.

This confused me. Not because I think AI is perfect. Not because I dismiss the concerns. But because I’ve lived on both sides of this divide, and I know what these tools actually are when you use them deliberately.

They’re not replacements. They’re extensions. And the distinction matters.

I Was the Skeptic

I need to be honest about where I started.

Not long ago, someone at work challenged me: why aren’t you using AI more in your daily job? My response was defensive: I don’t see where it fits. This doesn’t apply to what I do. I’d made assumptions without testing them.

Then I actually tried.

I started with small experiments. Using AI to draft documentation I would have written anyway. Asking it to help me think through architecture decisions. Letting it generate code I could review and learn from.

The shift was immediate. Tasks that consumed hours took minutes. Concepts I would have struggled to articulate came out clearer because I had something to react to. Code I never would have attempted became possible because I had a collaborator that could handle the syntax while I focused on the logic.

I went from skeptic to daily user in weeks. Not because AI is magic. Because I finally understood what it actually is.

The Co-Pilot Frame

Here’s the mental model that changed everything for me: AI is a co-pilot, not the pilot.

Microsoft named their AI assistant appropriately. A co-pilot doesn’t fly the plane. The pilot flies the plane. The co-pilot assists, handles specific tasks, provides another set of eyes, and enables the pilot to perform at a higher level than they could alone.

When I use AI tools, I’m still flying. The ideas are mine. The direction is mine. The judgment about what’s good and what needs work is mine. The AI accelerates my ability to execute on those ideas, but it doesn’t generate the ideas or evaluate the outcomes.

This is the part the skeptics miss. They imagine AI as autonomous. A black box that produces output independent of human input. Something that replaces thinking rather than enhancing it.

That’s not how it works. Not when you use it deliberately.

The Assembly Line Moment

Think about what Henry Ford did with the assembly line.

Before Ford, craftsmen built cars individually. Each vehicle was a complete project handled by skilled workers who did everything. The process was slow, expensive, and limited in scale.

Ford didn’t replace the workers. He reorganized how they worked. The assembly line let each person focus on what they did best while the system handled the coordination. Output increased dramatically. Quality became more consistent. Costs dropped.

The craftsmen who saw the assembly line as a threat missed the point. It wasn’t replacing their skills. It was amplifying them.

AI is having the same effect on knowledge work. The practitioners who see it as a threat are making the same mistake. These tools don’t replace your thinking. They amplify your ability to execute on your thinking.

The question isn’t whether AI will change how you work. It will. The question is whether you’ll be the one directing that change or reacting to it.

How I Actually Use These Tools

I use multiple AI tools, and I use them very differently. This isn’t about picking one winner. It’s about understanding what each tool does well and directing it accordingly.

Kiro is my hands-on learning environment. I use it to build things I never would have attempted before. Code projects that would have taken me weeks of learning now come together in focused sessions. The AI handles syntax and implementation details while I focus on architecture and logic. I’m learning faster because I have a collaborator that can bridge my knowledge gaps in real time.

I wrote about my Kiro setup in a previous post. The key insight there was configuration: these tools perform radically better when you give them context about who you are, how you work, and what you’re trying to accomplish.

Claude serves a different purpose entirely. Claude is my mirror. My digital self, as much as that’s possible today.

I record my thoughts, my ideas, my half-formed concepts. Claude helps me rationalize them, organize them, and make sense of them faster than I could alone. When I’m working through a complex problem, I think out loud with Claude. When I’m trying to articulate something I feel but can’t quite express, Claude helps me find the words.

This isn’t Claude thinking for me. It’s Claude accelerating my own thinking. The output is still me. The ideas are still mine. But the path from foggy intuition to clear articulation is shorter.

Microsoft Copilot fills the team dimension. Our product team uses Copilot to build collective memory and enable collaboration at scale. Meeting notes, shared context, institutional knowledge that would otherwise live in someone’s head or get lost in email threads. Copilot captures and organizes the team’s thinking so it becomes accessible to everyone.

We also use Copilot to create personal toil agents. Repetitive tasks that consume time but don’t require judgment. The kind of work that has to get done but doesn’t benefit from deep human attention. Copilot handles the toil so we can focus on the work that actually requires us.

Three tools, three distinct purposes. Kiro for development and learning. Claude for deep ideation and creation. Copilot for team cohesion and collective thought. None of them replace what the others do. Each amplifies a different dimension of how I work.

How each tool maintains context differs. Claude and Copilot build memory over time. Each interaction adds to that accumulated context, and the collaboration becomes more fluid because the tools build up an understanding of who you are and how you work.

Kiro takes a different approach. It doesn’t have memory yet, though it’s on the roadmap. Instead, Kiro relies on steering documentation and what I think of as “digital self design.” You tell Kiro who you are upfront. Your background, your preferences, how you approach problems, what you’re trying to accomplish. That context shapes every interaction. It’s deliberate configuration rather than accumulated memory, but the effect is similar: the tool understands your context and responds accordingly.
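
To make “digital self design” concrete, here is a minimal sketch of what a steering document can look like. The filename, headings, and contents are my own illustration, not a template Kiro prescribes; the point is simply to write down, in plain language, the context you want every session to start from.

```markdown
<!-- Illustrative example of a steering file (e.g. kept with the project).
     Everything below is a sketch of the kind of context I give the tool,
     not a required format. -->

# About me
- Background: infrastructure engineer; stronger on architecture than on frontend syntax.
- I prefer small, reviewable changes over large generated diffs.

# How I want the assistant to work
- Explain non-obvious decisions in comments so I can learn from them.
- Ask before adding new dependencies.
- Favor boring, well-supported approaches over clever ones.

# What this project is trying to accomplish
- A small internal tool; clarity and supportability matter more than speed.
```

The specifics matter less than the act of writing them down. Once that context exists, every session starts from it instead of from a blank slate.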

This is worth calling out because it’s a key insight for getting value from any AI tool. Whether through memory or configuration, these tools perform radically better when they have context about you. The practitioners who dump prompts into a blank slate get mediocre results. The practitioners who invest in building context, however the tool supports it, get dramatically better outcomes.

One more thing about these specific tools: this is how I work today. Kiro, Claude, Copilot. Tomorrow it might look different. These tools are evolving rapidly, and new ones emerge constantly. What I’ve described isn’t a permanent setup. It’s a snapshot. The tools will change. My use of them will change. That’s not something to fear. That’s something to embrace. The skill isn’t mastering any single tool. It’s developing the ability to evaluate, adopt, and direct whatever tools serve you best as the landscape evolves.

The Digital Self

Here’s the concept I’ve landed on: AI tools, used deliberately, become extensions of your self.

What goes into these tools is me. My thoughts, my ideas, my direction, my judgment. What comes out is also me, processed and accelerated. The output reflects my thinking because I shaped the input and evaluated the output.

This is fundamentally different from automation that replaces human involvement. It’s collaboration that amplifies human capability.

My digital self isn’t a replacement for my actual self. It’s a projection of my actual self into tools that can operate faster than I can alone. When I use Claude to organize my thoughts, those are still my thoughts. When I use Kiro to build code, that’s still my architecture. The tools are the medium. I’m still the source.

This framing changes everything about how you approach AI adoption. You’re not handing over your work to a machine. You’re extending your reach through a machine. The responsibility stays with you. The judgment stays with you. The tools just help you execute faster.


This is Part 1 of “The Poetry of Code” series. Part 2: Addressing the Fear takes on the real objections - quality, supportability, and what happens when the craft you’ve spent years mastering becomes something anyone with vision can do.

For a practical walkthrough of setting up AI-assisted development, see my post on configuring Kiro.
