I Stopped Writing Code and Started Building Software: The Agentic Coding Revolution

Here’s what nobody tells you about AI coding assistants: you’re still coding the old way.
Quick roadmap:
- Why “copilot mode” is already obsolete
- Two real projects built with agentic coding (native iOS app + retro game)
- The three shifts that separate traditional coding from agentic development
- How AI collaborates on architecture and PRDs using plan mode
- What actually happens when AI owns the implementation layer
The Uncomfortable Truth About Copilot Mode
For the past two years, we’ve been using AI wrong. GitHub Copilot autocompletes your code. ChatGPT writes functions you copy-paste. Claude suggests implementations you review line by line.
You’re still the one writing code. The AI is just a fancy autocomplete.
Translation: you’re optimizing the wrong part of the workflow.
The bottleneck in software development was never typing speed. It was never even knowing what to type. The real bottleneck is holding the entire system architecture in your head while simultaneously thinking about:
- Implementation details (syntax, APIs, edge cases)
- Testing strategy (unit tests, integration tests, mocks)
- Error handling (validation, recovery, user feedback)
- Documentation (comments, READMEs, setup guides)
- Security concerns (XSS prevention, input validation, certificate pinning)
Traditional coding forces you to context-switch between architectural thinking and implementation details every 30 seconds. Your brain isn’t built for that. Which is why senior developers are constantly telling juniors to “think before you code” - but nobody can actually sustain that mental model while debugging a React render cycle.
Here’s the thing: AI doesn’t have that limitation.
Holy Shit Moment #1: The Architecture Layer Just Separated from Implementation
I built two complete production apps in the past month without writing a single line of implementation code:
Project 1: Hatcher News - A native iOS Hacker News reader
- SwiftUI frontend with Core Data + CloudKit sync
- Zero external dependencies (100% native frameworks)
- HTML sanitization, certificate pinning, URL validation
- Infinite scroll pagination with 1-second load times
- 25 Swift files, ~5,000 lines of production code with full unit test coverage
- Comprehensive test suite with mocks and test doubles
- Automated security hardening and Apple App Store readiness with remediations
Project 2: Robotron 2084 Web Remake - Browser-based arcade game
- TypeScript + PixiJS rendering engine after a complete monolith refactor
- Fixed timestep game loop (60 FPS physics)
- Progressive difficulty scaling with 5 levels
- Multi-input support (keyboard, gamepad, touch)
- Particle effects system, camera shake, screen flash
- Save/load with localStorage persistence
- 43 automated tests (88% pass rate)
My role in both projects: architect and reviewer.
I specified what I wanted. Claude Code built it. I reviewed the results. Claude Code fixed issues. I provided feedback on architecture. Claude Code refactored. We iterated until it worked.
Note: I plan to open source both projects soon after human-in-the-loop sanity checks.
Time to working MVP:
- Hatcher News: ~6 hours (including App Store security hardening)
- Robotron: ~8 hours (first playable version)
Lines of code I personally typed: maybe 200 total, mostly architectural comments and specifications.
What Is Agentic Coding?
Traditional AI coding assistance: you ask, AI answers, you implement.
Agentic coding: you specify, AI implements, you approve.
The difference is who owns the implementation layer.
With GitHub Copilot, you write:
func fetchStories() {
    // AI suggests next line
    let url = URL(string: "https://...")
    // You accept or reject
    // You write the next line
    // AI suggests again
}
With Claude Code (agentic mode), you write:
"Build a NetworkService that fetches Hacker News stories
using the Firebase API with async/await, implements
1-hour file-based caching, handles offline mode, respects device dark mode, and
includes proper error handling."
Claude writes the entire implementation. All 150 lines. With proper error handling. With cache validation. With offline support. With documentation comments.
You review it. You might say “use certificate pinning for security and Apple App Store readiness.” Claude refactors the entire networking layer. You approve. Done.
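To give you a feel for the output, here’s a stripped-down sketch of the shape of service that prompt produces. This is my illustration, not the generated code - the real version adds the file cache, offline fallback, and certificate pinning, and the type names are simply the ones from my spec:

import Foundation

enum NetworkError: Error {
    case invalidURL
    case requestFailed(underlying: Error)
    case decodingFailed
}

protocol StoryFetching {
    func fetchTopStoryIDs() async throws -> [Int]
}

final class NetworkService: StoryFetching {
    private let session: URLSession
    private let cacheTTL: TimeInterval = 3600  // the 1-hour cache window from the spec (cache layer omitted here)

    init(session: URLSession = .shared) {
        self.session = session
    }

    func fetchTopStoryIDs() async throws -> [Int] {
        guard let url = URL(string: "https://hacker-news.firebaseio.com/v0/topstories.json") else {
            throw NetworkError.invalidURL
        }
        let result: (Data, URLResponse)
        do {
            result = try await session.data(from: url)
        } catch {
            throw NetworkError.requestFailed(underlying: error)
        }
        guard let ids = try? JSONDecoder().decode([Int].self, from: result.0) else {
            throw NetworkError.decodingFailed
        }
        return ids
    }
}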
The Three Shifts That Change Everything
Shift 1: From Syntax to Semantics
Old way: “How do I decode HTML entities in Swift?” New way: “Sanitize user-generated HTML to prevent XSS attacks.”
You stopped thinking in code. You started thinking in requirements and constraints.
When I built the comment system for Hatcher News, I didn’t research Swift string manipulation APIs. I didn’t look up regex patterns for HTML entity decoding. I didn’t debug edge cases with nested tags.
From my requirements, Claude asked: “Should I build an HTML sanitizer that handles named entities, decimal entities, hex entities, strips dangerous tags using an allowlist approach, and prevents XSS attacks?”
Claude Code:
- Created HTMLSanitizer.swift with proper entity decoding
- Implemented tag stripping with allowlist validation
- Added double-encoding prevention
- Wrote unit tests with malicious input examples
- Generated documentation with usage examples
Total time: 3 minutes, including review.
The same task the old way: 45 minutes minimum (research + implement + debug + test).
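For flavor, here’s roughly what the core of that sanitizer looks like - a condensed sketch, not the shipped file, with a tiny entity table and a regex-based allowlist pass standing in for the full implementation:

import Foundation

enum HTMLSanitizer {
    private static let allowedTags: Set<String> = ["p", "i", "b", "pre", "code", "a"]

    // Named-entity decoding; the real version also handles decimal (&#38;) and hex (&#x26;) forms.
    static func decodeEntities(_ text: String) -> String {
        var result = text
        // "&amp;" is decoded last so "&amp;lt;" becomes the literal "&lt;"
        // instead of being double-decoded into "<".
        let entities = [("&lt;", "<"), ("&gt;", ">"), ("&quot;", "\""), ("&#39;", "'"), ("&amp;", "&")]
        for (entity, character) in entities {
            result = result.replacingOccurrences(of: entity, with: character)
        }
        return result
    }

    // Strip any tag whose name is not on the allowlist, keeping the inner text.
    static func stripDisallowedTags(_ html: String) -> String {
        let pattern = "</?([a-zA-Z0-9]+)[^>]*>"
        guard let regex = try? NSRegularExpression(pattern: pattern) else { return html }
        let mutable = NSMutableString(string: html)
        let fullRange = NSRange(location: 0, length: mutable.length)
        // Walk matches back to front so earlier ranges stay valid after deletions.
        for match in regex.matches(in: html, range: fullRange).reversed() {
            let tagName = (html as NSString).substring(with: match.range(at: 1)).lowercased()
            if !allowedTags.contains(tagName) {
                mutable.replaceCharacters(in: match.range, with: "")
            }
        }
        return mutable as String
    }
}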
Shift 2: From Files to Systems
Old way: Edit one file at a time, manually ensure consistency across related files. New way: Specify system-level changes, AI propagates across codebase.
When I added infinite scroll pagination to Hatcher News, the change required:
- Refactoring StoryListViewModel (pagination state management)
- Updating NetworkService (fetch 20 stories instead of 100)
- Modifying StoryListView (loading indicator, scroll detection)
- Fixing CacheService (cache invalidation strategy)
- Adjusting tests (mock pagination behavior)
I specified the user experience: “Implement infinite scroll pagination that loads 20 stories initially, fetches the next 20 when user scrolls to within 5 stories of the end, and shows a loading indicator during fetch.”
Claude Code modified 7 files simultaneously and maintained consistency across the entire system. It updated view models, networking layer, UI components, and test mocks - all in one pass.
The old way: you’d update one file, run the app, discover you broke something in another file, fix that, discover you broke tests, fix those, discover edge cases, fix those. Minimum 2 hours with multiple build-test-debug cycles.
Agentic way: 8 minutes including review and one round of refinement.
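The trigger logic itself is small. Here’s my sketch of the view-model side (illustrative, not the generated code - the actual fetch call and cache invalidation are elided):

import SwiftUI

struct Story: Identifiable {
    let id: Int
    let title: String
}

@MainActor
final class StoryListViewModel: ObservableObject {
    @Published var stories: [Story] = []
    @Published var isLoadingPage = false

    private let pageSize = 20
    private let prefetchThreshold = 5  // fetch when the user is within 5 stories of the end

    // Called from each row's .onAppear in StoryListView.
    func loadMoreIfNeeded(currentStory: Story) async {
        guard !isLoadingPage,
              let index = stories.firstIndex(where: { $0.id == currentStory.id }),
              index >= stories.count - prefetchThreshold else { return }

        isLoadingPage = true
        defer { isLoadingPage = false }
        // Ask the networking layer for the next pageSize stories and append them.
        // stories += await fetchNextPage(offset: stories.count, limit: pageSize)
    }
}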
Shift 3: From Implementation to Verification
Old way: You implement, then you verify. New way: AI implements, you verify before it’s even built.
Here’s the wild part: Claude Code doesn’t just write code. It explains its architectural decisions before implementing and asks you to review.
When building the Robotron difficulty system, Claude proposed:
- Five difficulty presets (Easy, Normal, Hard, Nightmare, Adaptive)
- Wave-based scaling (5% increase per wave, capped at 300%)
- Performance tracking (K/D ratio, accuracy, rescue rate)
- Rolling 5-wave window for recent performance metrics
I reviewed the proposal. I suggested: “Add per-difficulty score multipliers so harder modes feel more rewarding.”
Claude updated the design doc. I approved. Then it implemented across 4 files (DifficultyManager, GameEngine, config, tests).
The old way: you discover architectural problems after you’ve already invested hours in implementation. You’re emotionally attached to your code. Refactoring feels like waste.
The new way: you discover architectural problems before implementation. No sunk cost. No emotional attachment. Just pure design iteration.
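To show how little code the core of that proposal actually is, here’s the scaling math in a few lines. (The real DifficultyManager is TypeScript; I’m sketching it in Swift to keep the examples in one language, and the preset values are illustrative, not the shipped numbers.)

struct WaveStats {
    let kills: Int
    let deaths: Int
}

struct DifficultyManager {
    var preset: Double = 1.0            // Easy...Nightmare baseline multiplier (illustrative)
    var perWaveIncrease: Double = 0.05  // +5% per wave
    var cap: Double = 3.0               // capped at 300%
    var recentWaves: [WaveStats] = []   // rolling 5-wave window used by Adaptive mode

    func multiplier(forWave wave: Int) -> Double {
        let waveScaling = 1.0 + perWaveIncrease * Double(max(wave - 1, 0))
        return min(preset * waveScaling, cap)
    }

    mutating func record(_ stats: WaveStats) {
        recentWaves.append(stats)
        if recentWaves.count > 5 { recentWaves.removeFirst() }
    }
}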
Holy Shit Moment #2: The AI Writes Better Code Than I Do
I need to be honest about something uncomfortable.
The code Claude wrote for both projects is way better than code I would have written.
Not “pretty good for AI.” Not “acceptable with some cleanup.” Actually better.
Ok, yeah, I’ve been in tech and software development for over 30 years, about 15 of those mostly in leadership positions. So I haven’t been writing code every day. But I do like to stay close to my team’s code - especially over the last 3 years, using LLMs to help me do deep dives into codebases so I can teleport in and be an active contributor.
Example: Hatcher News networking layer
Claude implemented:
- Proper async/await with structured concurrency
- TaskGroup for parallel fetching (20 concurrent requests)
- Certificate pinning with custom URLSessionDelegate
- Exponential backoff for rate limiting (though not needed for HN API)
- Comprehensive error types (NetworkError enum with associated values)
- Protocol-based architecture for dependency injection
- Actor pattern for thread-safe cache access
Would I have implemented certificate pinning on my first pass? Hell no!
Would I have used TaskGroup instead of sequential fetches? Probably not.
Would I have created protocol abstractions for testing? Certainly not initially.
Claude implements best practices by default. Not because it’s trying to show off. Because it doesn’t have the cognitive shortcuts and lazy habits that humans (like me) develop.
When I write code, I’m thinking: “What’s the minimum viable implementation that solves this problem?”
When Claude writes code, it’s thinking: “What’s the correct implementation according to Swift best practices, security guidelines, performance optimization patterns, and testing requirements?”
The gap between those two questions is massive.
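To make the TaskGroup point concrete, here’s the gist of parallel page fetching with structured concurrency - a simplified sketch, not the project’s code; fetchStory(id:) here is a stand-in for the real item fetcher:

import Foundation

struct Story: Codable {
    let id: Int
    let title: String
}

// Stand-in for the real single-item fetcher wrapping the HN item endpoint.
func fetchStory(id: Int) async throws -> Story {
    let url = URL(string: "https://hacker-news.firebaseio.com/v0/item/\(id).json")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(Story.self, from: data)
}

// Fetch a page of stories concurrently instead of one at a time.
func fetchStories(ids: [Int]) async throws -> [Story] {
    try await withThrowingTaskGroup(of: Story.self) { group in
        for id in ids {
            group.addTask { try await fetchStory(id: id) }
        }
        var stories: [Story] = []
        for try await story in group {
            stories.append(story)
        }
        // Children finish in completion order, so restore the original ranking.
        let order = Dictionary(uniqueKeysWithValues: ids.enumerated().map { ($1, $0) })
        return stories.sorted { (order[$0.id] ?? 0) < (order[$1.id] ?? 0) }
    }
}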
What This Actually Looks Like: A Real Session
Let me show you exactly how this works with a concrete example from Robotron development.
Me: “The collision detection is too slow. It looks like we’re checking every entity against every other entity. What’s the best way to prevent this?”
Claude: “I suggest implementing a spatial grid system for collision optimization. Here’s the plan:
- Divide game area into fixed-size cells (40x40 pixels based on typical entity size)
- Entities register themselves to cells they occupy
- Collision checks only happen within same cell + adjacent cells
- Update entity cell registration on movement
- Add collision layers using bit flags for filtering (player vs enemies, projectiles vs enemies, etc.)
This reduces collision checks from O(n²) to O(n) with typical entity distribution.
Should I proceed?”
Me: “Hell yes! Also add debug visualization so I can see the grid.”
(4 minutes later)
Claude: “Implementation complete. Changes:
- Created CollisionSystem.ts with spatial grid
- Added CollisionLayer enum with bit flags
- Modified entity update loop to register/unregister from grid
- Added debug rendering (toggle with ‘G’ key)
- Updated 6 entity classes to use new collision system
- Added unit tests for grid partitioning logic
Performance improvement: ~60% reduction in collision check time (measured with 100+ entities).
Debug mode shows green grid overlay with entity counts per cell.”
Me: (Tests it, finds grid cells too small for fast-moving projectiles) “Projectiles are moving fast enough to skip cells. Increase cell size to 80x80 and add swept collision for projectiles.”
Claude: “Good catch. Updated:
- Cell size now 80x80
- Projectiles now check cells along their movement path (Bresenham’s algorithm)
- Added configuration in config.ts for tuning cell size
Tested with high-velocity projectiles moving at 8 pixels/frame - no more missed collisions.”
Total time for this entire interaction: 12 minutes including testing.
What I didn’t do:
- Research spatial partitioning algorithms
- Implement grid data structures
- Calculate optimal cell sizes
- Debug swept collision edge cases
- Write collision layer bit masking logic
- Update six different entity classes manually
- Write unit tests
What I did do:
- Identify the problem (slow collision detection)
- Ask for a way to mitigate it (spatial partitioning)
- Review the implementation plan
- Test the result
- Provide feedback on issues (projectiles skipping cells)
- Verify the fix
My job shifted from coder to architect-reviewer + QA.
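If you’ve never used spatial partitioning, the core of what Claude built is surprisingly small. The real CollisionSystem.ts is TypeScript; this is my Swift sketch of the same idea, just to show the mechanism:

struct Entity {
    let id: Int
    var x: Double
    var y: Double
}

struct SpatialGrid {
    private struct Cell: Hashable { let cx: Int; let cy: Int }

    let cellSize: Double
    private var cells: [Cell: [Entity]] = [:]

    init(cellSize: Double = 80) { self.cellSize = cellSize }

    private func cell(forX x: Double, y: Double) -> Cell {
        Cell(cx: Int((x / cellSize).rounded(.down)),
             cy: Int((y / cellSize).rounded(.down)))
    }

    // Rebuild each frame (or update incrementally as entities move).
    mutating func insert(_ entity: Entity) {
        cells[cell(forX: entity.x, y: entity.y), default: []].append(entity)
    }

    // Collision candidates: the entity's own cell plus its 8 neighbours,
    // instead of every other entity on screen.
    func candidates(near entity: Entity) -> [Entity] {
        let home = cell(forX: entity.x, y: entity.y)
        var result: [Entity] = []
        for dx in -1...1 {
            for dy in -1...1 {
                let neighbour = Cell(cx: home.cx + dx, cy: home.cy + dy)
                result.append(contentsOf: cells[neighbour] ?? [])
            }
        }
        return result.filter { $0.id != entity.id }
    }
}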
Holy Shit Moment #3: The AI Doesn’t Just Implement Your Architecture - It Helps Design It
Here’s where it gets really wild.
I’ve been describing agentic coding as “you architect, AI implements.” But that’s not quite right.
The AI collaborates on the architecture itself.
When I started the Hatcher News project, I didn’t have a complete system design. I had a vague idea: “Build a native iOS Hacker News reader with offline support and dark mode.”
I didn’t write a PRD. I didn’t sketch out the architecture. I didn’t plan the data model.
I just started a Claude Code session in /plan mode and said: “I want to build a native iOS Hacker News reader with a couple of vague requirements such as dark mode and offline support.”
Claude’s response wasn’t “Okay, what should I build?” It was:
“Let me enter plan mode to design the architecture before we start implementing.”
Plan Mode: Architecture as Conversation
Here’s what most people don’t understand about modern agentic tools: they have a planning phase.
Claude Code (and similar tools like Cursor, Windsurf) can enter “/plan mode” where they:
- Explore existing codebases and patterns
- Research best practices for the tech stack
- Propose system architectures with trade-offs
- Draft PRD-level specifications
- Get your approval before writing a single line of code
When I said “build a Hacker News reader,” Claude spent ~15 minutes in plan mode before implementation, automatically spinning up multiple agents to research different aspects of the design:
Phase 1: Deep Think (3 minutes)
- Analyzed the Hacker News Firebase API structure
- Researched SwiftUI best practices for iOS 16+
- Identified core features (story browsing, comments, offline support)
- Proposed tech stack (SwiftUI, Core Data, CloudKit, URLSession)
Phase 2: Architecture Design (7 minutes)
- Proposed MVVM architecture with protocol-based services
- Designed data models (Story, Comment, User)
- Sketched networking layer (API client, caching, offline handling)
- Identified security requirements (HTML sanitization, certificate pinning)
- Outlined testing strategy (mocks, protocol conformance, unit tests)
Phase 3: PRD Draft (5 minutes)
- Listed core features with acceptance criteria
- Prioritized MVP vs. nice-to-have features
- Proposed user flows (browse stories, read articles, view comments)
- Identified edge cases (network failures, empty states, rate limiting)
Then it asked: “Does this architecture make sense for your needs? Any concerns or modifications?”
I reviewed the 3-page architectural proposal. I suggested:
- “Add iCloud sync for read status across devices”
- “Use Safari Reader Mode integration instead of in-app web view”
- “Start with infinite scroll disabled, add it later for performance”
Claude updated the architecture document. Then it started implementing.
What Just Happened?
Traditional workflow:
- You write PRD (4 hours)
- You design architecture (6 hours)
- You implement (40 hours)
- You discover architectural problems (oops)
- You refactor (12 hours)
Agentic workflow with plan mode:
- You describe high-level requirements (10 minutes)
- AI drafts PRD and architecture (15 minutes)
- You review and refine the plan together (20 minutes)
- AI implements approved architecture (2 hours)
- Architectural problems caught before implementation (0 hours wasted)
The AI did the first-pass architecture. Not the final architecture. Not perfect architecture. But a well-researched, best-practice-informed starting point that would have taken me hours to produce.
Deep Think: The Hidden Superpower
Here’s the part that blew my mind.
When Claude enters plan mode, you can see it thinking through architectural trade-offs in real-time.
For the Robotron game, I said: “Build a difficulty system that adapts to player skill.”
Claude’s deep think process (visible in plan mode):
Analyzing difficulty scaling approaches:
Option 1: Fixed difficulty levels (Easy/Normal/Hard)
+ Simple to implement and understand
+ Predictable player experience
- Doesn't adapt to individual skill levels
- Players outgrow difficulty settings
Option 2: Pure adaptive difficulty (rubber-banding)
+ Always challenging for player's skill level
+ High engagement potential
- Can feel unfair ("game is cheating")
- Hard to tune correctly
- Players may not feel progression
Option 3: Hybrid approach (presets + adaptive)
+ Player chooses baseline difficulty
+ Game adapts within that range
+ Feels fair and responsive
- More complex to implement
- Requires performance tracking
Recommendation: Option 3 (Hybrid)
- Implement 5 preset difficulties (Easy to Nightmare)
- Add Adaptive mode that adjusts based on K/D ratio, accuracy, survival time
- Track performance over rolling 5-wave window
- Scale difficulty 5% per wave (capped at 300%)
- Apply difficulty multipliers to spawn rate, enemy speed, and score
Should I proceed with detailed architecture for Option 3?
I didn’t know hybrid difficulty was even an option. I was thinking simple Easy/Normal/Hard presets.
Claude researched the problem space, identified three viable approaches, analyzed trade-offs, and recommended the best solution with reasoning.
This is Principal Engineer or Staff Engineer-level architectural thinking.
And it happened in 90 seconds.
PRDs Written in Minutes, Not Days
When I worked at Amazon and other enterprise companies, writing a PRD took 2-5 days minimum (if not weeks, once you factor in iterations):
- Research existing solutions (4 hours)
- Draft requirements (6 hours)
- Design user flows (4 hours)
- Spec technical architecture (8 hours)
- Review with team (3 hours)
- Revise based on feedback (4 hours)
Total: ~30 hours spread across a week.
With Claude Code in plan mode, that same PRD quality takes 45 minutes:
- Describe the feature at high level (5 minutes)
- Claude researches and drafts PRD (15 minutes)
- Review and provide feedback (10 minutes)
- Claude revises (5 minutes)
- Final review and approval (10 minutes)
The AI does the research, first draft, and revisions. You do the strategic decisions and quality control.
The Architecture Becomes Living Documentation
Here’s the subtle brilliance: the architecture document Claude creates in plan mode becomes your project documentation.
For both projects, Claude generated:
- README.md - Feature overview and usage
- ARCHITECTURE.md - System design and patterns
- SETUP_GUIDE.md - Development environment setup
- TESTING.md - Testing strategy and instructions
- TROUBLESHOOTING.md - Common issues and solutions
Traditional development: you implement first, document later (maybe never).
Agentic development: you document first (via plan mode), then implement.
The architecture document drives implementation. Implementation updates document. They stay in sync because they’re part of the same workflow.
When Plan Mode Changes Everything
You don’t need to use plan mode for every change. But you absolutely should use it for:
1. New Projects - Design system architecture from scratch
2. Major Refactors - Evaluate migration strategies and trade-offs
3. Moderate to Complex Features - Think through edge cases and system interactions
4. Performance Optimization - Analyze bottlenecks and solutions
5. Security Hardening - Identify attack vectors and mitigations
Basically: any time you’d normally spend hours whiteboarding or writing docs.
Instead of whiteboarding alone, you’re whiteboarding with an AI that:
- Has read thousands of PRDs and architecture docs
- Knows current best practices for every major framework
- Can analyze your existing codebase for patterns
- Suggests approaches you didn’t consider
- Documents decisions as you make them
This is pair programming for architecture.
The New Workflow: Deep Think → Plan → Build → Iterate
Every complex feature in both projects followed this pattern:
1. Deep Think (AI explores the problem space)
- Researches existing solutions
- Identifies architectural patterns
- Proposes multiple approaches
- Analyzes trade-offs
2. Plan (collaborative architecture design)
- You choose between proposed approaches
- AI drafts detailed specifications
- You refine requirements
- AI updates architecture
3. Build (AI implements approved plan)
- Follows architecture document
- Implements across multiple files
- Maintains consistency
- Writes tests
4. Iterate (refinement cycle)
- You test and find issues
- AI proposes solutions
- You approve changes
- AI refactors
The breakthrough: Steps 1-2 used to happen entirely in your head (or in endless meetings).
Now they happen in collaboration with AI that brings research, pattern recognition, and documentation to the conversation.
Your brain does what it’s good at: strategic decisions, trade-off evaluation, creative problem-solving.
AI does what it’s good at: research, pattern matching, exhaustive analysis, documentation.
The Skills That Actually Matter Now
If AI handles implementation, what do humans do?
This is the uncomfortable question that everyone’s avoiding.
Here’s what I learned building these two projects as well as helping my teams at Amazon:
1. System Design Intuition
You need to know what good architecture looks like even if you’re not implementing it.
When Claude proposed using @Published properties in the SwiftUI view models, I knew to ask: “Should some of these updates be batched to avoid excessive view refreshes?”
When it implemented the game loop with delta time, I asked it to explain why, and then suggested: “Should we use fixed timestep instead for deterministic physics?”
You can’t outsource architectural judgment to AI. The AI knows best practices. You know which best practices matter for your specific use case.
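For anyone who hasn’t met the fixed-timestep pattern: the idea is that physics always advances in constant steps no matter how long the frame took, so the simulation stays deterministic. A minimal sketch (the real loop lives in the TypeScript game code):

var accumulator: Double = 0
let step: Double = 1.0 / 60.0  // 60 FPS physics step

func tick(frameTime: Double, update: (Double) -> Void, render: () -> Void) {
    accumulator += min(frameTime, 0.25)  // clamp huge frames to avoid a spiral of death
    while accumulator >= step {
        update(step)                     // always the same dt, whatever the display does
        accumulator -= step
    }
    render()
}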
2. Requirement Specification
The quality of output is 100% determined by quality of input.
Bad specification: “Add settings page.”
Good specification: “Add settings page with toggles for iCloud sync and auto Reader Mode, a button to clear read history with confirmation dialog, app version display, and links to Hacker News and API docs. Use SwiftUI Form with grouped style.”
The difference: Claude built exactly what I wanted vs. Claude built something I had to refactor three times.
Precision in requirements is the new core skill.
3. Verification & Testing
You need to know how to break your own software.
After Claude implemented infinite scroll, I tested:
- Scroll to bottom rapidly (race condition?)
- Toggle network off mid-fetch (offline handling?)
- Force kill app during cache write (data corruption?)
- Scroll to bottom, go back, scroll again (duplicate requests?)
Found bugs in all four scenarios. Claude fixed them. But I had to know which edge cases to test.
The AI doesn’t know what failure modes matter to you.
4. Contextual Trade-offs
Every implementation decision has trade-offs. AI can explain them. You decide which matters.
Claude proposed caching Hacker News stories for 1 hour. I asked: “What if users want fresher data?” We added pull-to-refresh. Then I asked: “What if users want older cache for longer offline usage?” We made cache expiry configurable.
The AI knows technical trade-offs (performance vs. memory, simplicity vs. flexibility). You know business trade-offs (freshness vs. offline support, features vs. launch date).
These are not the same thing.
Holy Shit Moment #4: This Is How Senior Developers Already Work
Here’s the realization that hit me halfway through the Robotron project:
Agentic coding is just Principal Engineer/Staff Engineer workflow, democratized.
When a senior developer delegates to junior developers:
- Senior architect specifies the system design
- Junior developers implement
- Senior reviews code in PR
- Junior refactors based on feedback
- Iterate until approved
When I use Claude Code:
- I collaborate with Claude on the system design
- Claude implements
- I review code in editor
- Claude refactors based on feedback
- Iterate until approved
The workflow is identical. I’m doing the job of a tech lead or Principal Engineer, except my “team” is an AI that:
- Works 24/7 without breaks
- Costs $20/month instead of $200k/year
- Never gets offended by feedback
- Never has knowledge gaps in unfamiliar frameworks
- Implements 10x faster than human developers
This isn’t replacing developers. This is giving every developer the leverage of a senior engineer with a team.
The bottleneck shifted from implementation capacity to architectural capacity.
How many good architectures can you design per week? How many system-level decisions can you evaluate? How many trade-offs can you reason through?
That’s your new output limit.
What This Means for Software Development
Let’s talk about what happens when implementation and architecture become collaborative.
Old economics:
- Feature idea: 1 hour
- PRD writing: 6+ hours (reviews + iterations)
- Architecture design: 8+ hours (reviews + iterations)
- Implementation: 40+ hours
- Testing: 8 hours (at least)
- Documentation: 3 hours
Total: 66+ hours. Most time spent on architecture and implementation.
New economics (with plan mode):
- Feature idea: 1 hour
- AI-assisted PRD (plan mode): 45 minutes
- Architecture review & refinement: 1 hour
- Implementation: 2 hours (AI implements from approved plan)
- Review & iteration: 3 hours
- Testing: 4 hours (AI writes tests, you verify)
- Documentation: 30 minutes (AI generated during plan mode)
Total: 12.25 hours. Most time spent on strategic decisions and review.
You’re 5.4x more productive - but in a completely different way.
You’re not writing code faster. You’re thinking about more systems simultaneously.
Before: I could actively develop one feature at a time. Now: I can architect three features in parallel while AI implements them.
The constraint shifted from execution time to cognitive load.
The Catch: This Requires Different Skills
Here’s the part nobody’s talking about: most junior engineers aren’t trained for this.
Traditional CS education teaches:
- Data structures and algorithms
- Object-oriented programming patterns
- Framework-specific APIs and syntax
- Debugging techniques
- Code optimization
Agentic coding requires:
- System architecture and design patterns
- Requirement specification and communication
- Testing strategy and edge case identification
- Code review and security assessment
- Trade-off analysis and decision-making
Notice what’s missing: implementation skills.
The ability to write a quicksort implementation in 10 minutes is now irrelevant. The ability to recognize when quicksort is the wrong algorithm for your use case is critical.
We’re training developers for the wrong job.
Junior developers graduate knowing how to implement bubble sort but not how to specify API requirements. They can write React components but can’t articulate performance trade-offs between different state management approaches.
The skill gap just inverted.
Previously valuable skills (syntax mastery, API memorization, implementation speed): Decreasing in value.
Previously less-emphasized skills (architecture, communication, judgment): Becoming critical.
What to Do Next
If you’re a developer, here’s the playbook:
1. Start Using Agentic Tools Immediately
Stop using AI as autocomplete. Start using it as a collaborative implementation engine.
Tools to try:
- Claude Code - Full agentic development environment with plan mode
- Cursor - IDE with agentic features
- Aider - Terminal-based agentic coding
Minimum viable experiment: Pick a side project. Start with “/plan mode” to design the architecture. Let AI draft the PRD and system design. Review and refine together. Then let AI implement 100% of the code.
You’ll be terrible at it initially. Your specifications will be vague. You’ll miss obvious bugs. That’s the point. You’re learning new skills.
Critical: Use plan mode for architecture, not just implementation. Say “Let’s plan this out first” before any complex feature. The AI will research approaches, propose solutions, and document decisions before writing code.
2. Build Specification Skills
Practice writing detailed, precise requirements.
Bad: “Add authentication”
Good: “Implement JWT-based authentication with refresh tokens, httpOnly cookies, CSRF protection, 15-minute access token expiry, 7-day refresh token expiry, automatic token refresh before expiration, and proper error handling for expired/invalid tokens.”
Read RFC documents. Study API design guides. Learn how senior engineers write technical specs.
Resource: How to Write a Good Technical Specification
3. Develop Verification Instincts
Learn to think like a QA engineer + security researcher + performance engineer simultaneously.
For every feature, ask:
- Security: What could go wrong? (XSS, injection, CSRF, etc.)
- Performance: Where are the bottlenecks? (N+1 queries, memory leaks, etc.)
- Edge cases: What breaks this? (Empty data, network failure, race conditions, etc.)
- Usability: Where will users get confused? (Error messages, loading states, etc.)
Practice: Take any open-source project. Spend 30 minutes trying to break it. Document every bug. This trains your verification instincts.
4. Study System Architecture
You need pattern recognition for good vs. bad architecture.
Read:
- A Philosophy of Software Design by John Ousterhout
- Designing Data-Intensive Applications by Martin Kleppmann
- Software Engineering at Google (free online)
Practice: Review open-source codebases. Ask: “Why was it designed this way? What trade-offs were made? What would I do differently?”
5. Build Something Ambitious
The best way to learn agentic coding: build something you couldn’t build before.
Pick a project that would have taken you 3 months. Give yourself 2 weeks with Claude Code.
My recommendations:
- Native mobile app (iOS/Android)
- Real-time multiplayer game
- Full-stack SaaS with payments
- Developer tool with CLI + web dashboard
Push your architectural skills, not your implementation skills.
The Bigger Picture: Software Development Is Splitting
Here’s my prediction for the next 3 years:
Software development splits into two distinct career paths:
Path 1: Implementation Engineers
- Focus: Writing code, fixing bugs, implementing features to spec.
- AI Impact: 80% automated by 2028.
- Career outlook: Declining demand, decreasing salaries.

Path 2: Software Architects
- Focus: System design, requirement specification, trade-off analysis, verification.
- AI Impact: AI is a force multiplier, not a replacement.
- Career outlook: Increasing demand, increasing salaries.
The uncomfortable truth: Most current developers are on Path 1.
The ones who thrive will be the ones who rapidly transition to Path 2.
This doesn’t mean “learn system design eventually.” It means start today. The transition period is 12-24 months. After that, the gap becomes very hard to close because the seniors who made the transition will have 2x the output of those who didn’t.
First-mover advantage is real here.
The Real Question Nobody’s Asking
Everyone’s asking: “Will AI replace developers?”
Wrong question.
The right question: “Will developers who use AI replace developers who don’t?”
And the answer to that is: Absolutely yes. It’s already happening.
I built a production-ready iOS app and a complete web game in less than 15 hours total.
A traditional developer would need 2-3 weeks minimum for the iOS app alone.
I’m not 10x faster at coding. I’m 10x faster at shipping.
That’s the difference. And that difference compounds.
In 6 months, I’ll have shipped 10 projects. The traditional developer will have shipped 2.
Who’s more valuable? The one with 10 real projects in production, battle-tested and refined through user feedback.
Who’s learning faster? The one iterating through 10 different architectural challenges.
The productivity gap is widening every month. And it’s not because AI is getting better (though it is). It’s because developers who adopt agentic workflows are accumulating experience faster.
Final Thoughts: We’re at the GeoCities Moment for Development Tools
In 1999, building a website required HTML knowledge. Then WordPress launched. Suddenly anyone could build a website.
Did web developers disappear? No. But the job completely changed.
Before WordPress: most web dev work was implementing basic CRUD apps and content sites. After WordPress: web developers focused on complex applications, custom functionality, and performance optimization.
The commodity work automated away. The complex work became more valuable.
We’re at that inflection point for software development.
Basic CRUD apps? API integrations? UI implementation? Commodity work that’s automating away.
System architecture? Performance optimization? Security hardening? More valuable than ever.
The developers who recognize this early and adapt their skills accordingly will dominate the next decade.
The ones who keep writing implementation code manually will wonder why they’re competing with juniors for fewer and fewer positions.
The tools changed. The job changed. The skills required changed.
The only question is: are you going to change with them?
Sources & Further Reading
Agentic Coding Tools:
- Claude Code - Full agentic development environment from Anthropic
- Cursor - AI-first code editor with agentic features
- Aider - Terminal-based pair programming with AI
Example Projects (Source Code):
- Hatcher News iOS App - Native SwiftUI Hacker News reader
- Insight: Clean example of modern Swift architecture with zero dependencies
- Robotron 2084 Web Remake - TypeScript + PixiJS arcade game
- Insight: Demonstrates complex game systems (physics, collision, difficulty scaling) built with agentic workflow
- Live demo: Play it here
Architecture & System Design:
- A Philosophy of Software Design by John Ousterhout
- Insight: Essential reading for understanding complexity management and architectural trade-offs
- Designing Data-Intensive Applications by Martin Kleppmann
- Insight: Deep dive into distributed systems patterns that matter when you’re specifying rather than implementing
- Software Engineering at Google (free online)
- Insight: How tech giants think about code review, testing, and technical decision-making
The Changing Skills Landscape:
- Stack Overflow Developer Survey 2025
- Insight: 73% of developers now use AI coding assistants, but most still use them as autocomplete
- GitHub Copilot Impact Study
- Insight: Showed 55% faster completion time, but measured the wrong metric - speed of implementation rather than quality of architecture
What’s Coming Next:
- Anthropic’s Model Context Protocol (MCP)
- Insight: Standardizing how AI agents interact with development tools and codebases
- Simon Willison’s Blog on AI-Assisted Development
- Insight: Thoughtful analysis of AI tools from a working developer’s perspective
Built with Claude Code. Both example projects (iOS app + web game) developed using agentic workflow in under 15 hours total.