How I Built a Battle Card Game in 30 Days Using Agentic AI
I built a fully functional battle card game in 30 days. AI wrote the code. I did everything else.
And by "everything else," I mean establishing architectural patterns, debugging extensively, manually testing and refining UI, and building a component library from scratch. The AI agents handled the code generation—100% of it. The actual work was in the orchestration.
This is agentic AI development in practice: not AI as a code completion tool, but as the primary code generator while you architect, direct, and refine.
The Setup
The idea was straightforward: build a battle card game featuring artwork from my son and his friends. Hand-drawn characters and designs that kids actually created. The charm was built right in from the start.
But this was really about exploring what's possible with agentic AI as a development tool. Thirty days later, we had Scribble Cards: a fully functional game with comprehensive unit testing, live on Roblox.
Learning from Past Failures
I'd tried building a Roblox game with AI last year. It failed. The AI kept implementing game logic client-side that should have been server-side. Security was compromised. The architecture was fundamentally broken. I had to scrap the entire thing.
This time, I started differently: before writing a single feature, I had Claude research current best practices for Roblox game architecture and security. That foundational work paid off throughout the entire project. The AI laid down the proper structure before building anything on top of it.
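To make the client-side versus server-side point concrete, the core of what that research surfaced is the standard server-authoritative pattern: the client only requests an action, and the server validates and resolves it. Here's a minimal Luau sketch of that idea; the RemoteEvent name, the hand table, and the card IDs are assumptions for illustration, not Scribble Cards' actual code.

```lua
-- ServerScriptService/PlayCardHandler.server.lua
-- Minimal sketch of server-authoritative card play (illustrative names, not the game's real code).
local ReplicatedStorage = game:GetService("ReplicatedStorage")

-- RemoteEvent the client fires when the player taps a card.
local playCardEvent = Instance.new("RemoteEvent")
playCardEvent.Name = "PlayCard"
playCardEvent.Parent = ReplicatedStorage

-- Authoritative state lives on the server; the client never owns the hand.
local hands = {} -- [player.UserId] = { ["card_id"] = true, ... }

playCardEvent.OnServerEvent:Connect(function(player, cardId)
    local hand = hands[player.UserId]
    -- Never trust the client: reject anything the server doesn't already know about.
    if type(cardId) ~= "string" or not hand or not hand[cardId] then
        return
    end
    hand[cardId] = nil
    -- Resolve the play here on the server, then replicate the outcome to clients.
end)
```

Last year's failed project put logic like this in LocalScripts, where any player could tamper with it. Keeping the decision on the server is what made the architecture salvageable this time.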
The Development Flow
My son and I would brainstorm new features together. Then I'd have Claude run as an async background agent to develop the implementation plan, creating ASCII art diagrams showing the mobile and desktop designs and writing them to a file for review. Once I selected the designs I wanted, I'd tell it to implement them.

The agents wrote the code. Game logic, features, everything. But making that code actually work required substantial effort on my end—manually testing, identifying what worked and what didn't, debugging UI issues. The AI wrote code quickly. I made it functional.
Building the Component Library
Early in development, the AI started hard-coding UI components everywhere. Every screen had its own slightly different buttons, container boxes, and layouts. There wasn't a decent UI component library for Roblox, so I started building one.
I created rules: "Don't duplicate effort on these components. Use shared components." The AI kept ignoring them and creating its own variations anyway.
This continued until after launch, when I did a massive architecture review and refactor. That's when the AI finally identified all the duplicate code and properly consolidated everything into shared components. Sometimes you have to show the system its own patterns before it recognizes them.
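For anyone picturing what that consolidation looks like, here's a hypothetical shared component in Luau: a single ModuleScript every screen requires instead of rolling its own buttons. The module path and the props it accepts are assumptions for illustration, not the actual library.

```lua
-- ReplicatedStorage/UI/Button.lua
-- Hypothetical shared button component (illustrative, not the actual library).
local Button = {}

function Button.new(props)
    local button = Instance.new("TextButton")
    button.Text = props.text or "Button"
    button.Size = props.size or UDim2.fromOffset(160, 44)
    button.BackgroundColor3 = props.color or Color3.fromRGB(60, 120, 220)
    button.TextColor3 = Color3.new(1, 1, 1)
    button.AutoButtonColor = true

    -- Shared styling lives in one place instead of being copied onto every screen.
    local corner = Instance.new("UICorner")
    corner.CornerRadius = UDim.new(0, 8)
    corner.Parent = button

    if props.onClick then
        button.Activated:Connect(props.onClick)
    end

    button.Parent = props.parent
    return button
end

return Button
```

Once every screen calls Button.new instead of constructing its own TextButton, a style change is one edit instead of a hunt through every file.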
The Post-Launch Transformation
After launch, I had the AI review its own code. The initial version worked, but it had fallen into the classic trap—massive files ranging from 1,000 to 5,000 lines. Not sustainable for long-term maintenance.
I had it analyze what it had built, identify duplication, spot architectural issues, and refactor everything. DRY up the code. Break things into cleaner, more manageable pieces.
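The shape of that refactor, in broad strokes: pull game rules out of the giant scripts into small, pure ModuleScripts that don't touch Instances or networking. Here's a hypothetical example of the kind of module that falls out of a split like that; the name and the formula are illustrative, not the game's real rules.

```lua
-- ReplicatedStorage/Game/Damage.lua
-- Hypothetical module extracted from a monolithic battle script during the refactor.
local Damage = {}

-- Pure function: no Instances, no RemoteEvents, so it's easy to share and to unit test.
function Damage.calculate(attack, defense, multiplier)
    local raw = (attack - defense) * (multiplier or 1)
    return math.max(1, math.floor(raw)) -- every hit lands for at least 1
end

return Damage
```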
The transformation was significant. In the first week after launch, I shipped multiple major releases—not minor bug fixes, but substantial feature additions. The cleaner architecture enabled rapid iteration without breaking everything.
Testing Infrastructure and Limitations
I got an MCP (Model Context Protocol) server working that allowed Claude to communicate directly with Roblox Studio to run tests. The agents wrote the entire unit test suite.
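The article doesn't name the test framework, but a generated unit test for a module like the hypothetical Damage one sketched above would look roughly like this under a TestEZ-style spec runner, where describe, it, and expect are globals the runner injects. Treat the framework choice and the file layout as assumptions.

```lua
-- ReplicatedStorage/Game/Damage.spec.lua
-- Sketch of a generated unit test, assuming a TestEZ-style runner (not confirmed by the article).
return function()
    local Damage = require(game.ReplicatedStorage.Game.Damage)

    describe("Damage.calculate", function()
        it("subtracts defense from attack", function()
            expect(Damage.calculate(10, 4)).to.equal(6)
        end)

        it("never deals less than 1 damage", function()
            expect(Damage.calculate(3, 9)).to.equal(1)
        end)
    end)
end
```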
I attempted end-to-end testing but ran into limitations. The MCP server has only restricted access to client-side UI components. I started down a path of injecting code that the game would recognize and use to drive the UI, but it wasn't reliable. Too sloppy, so I abandoned it and focused on unit tests instead.
If Roblox Studio eventually lets the MCP server interact with client-side UI components, it would be transformative. I could test entire workflows: run through a complete battle from start to finish, validate matchmaking, verify every interaction.
The Real QA Process
My son and his friends serve as the primary QA team. They test every move, every card, different dynamics and interactions. They surface issues that don't show up as crashes in analytics—unexpected behavior, matchmaking problems, edge cases in game mechanics.
When bugs come in, I feed them to Claude. It fixes them. I review, test, and push new releases. Then the cycle continues.
What This Actually Means
This isn't using AI as a helpful assistant while you write code. This is a fundamentally different development paradigm—AI agents generate 100% of the code while you handle everything else that makes software actually work.
The game went from concept to launch in 30 days. It has proper architecture because I insisted on researching and implementing best practices first. It has comprehensive unit testing because I set up the MCP infrastructure and directed the agents to build the test suite. It's live on Roblox with real users.
I'm shipping major updates multiple times per week now. I have code editors open to review implementation, modify JSON configuration files, debug issues, and ensure everything integrates properly. The AI generates code. I architect, direct, debug, and refine.
This is agentic AI development in 2026—not theoretical, not five years away. It's what's possible right now when you understand that the AI generates code, but you still make everything work. This process only gets more streamlined from here.