How I Built My First iOS App with AI
TL;DR
- I shipped Nami, a native SwiftUI habit tracker, to the App Store. I had never written an iOS app before.
- The stack: Claude Code (agent teams + custom skills), Codex CLI, XcodeBuildMCP, and ASC CLI.
- I wrote two custom skills, `ios-frontend-design` and `ios-swift-dev` (995 lines), shared by both Claude Code and Codex.
- XcodeBuildMCP closed the agentic loop: agents could build, test, screenshot, and validate their own work.
- Three weeks of development in January. February for testing and App Store Connect. Zero external Swift package dependencies.
The Problem
I wanted to build an iOS app. I had never written one.
I knew Swift existed. I’d seen SwiftUI demos. But I had zero hands-on experience with Xcode projects, state management, StoreKit, or any of the frameworks you need to ship something real.
The usual advice: take a course, build a tutorial app, then build your real app. That’s months. I wanted to find out whether AI tooling had gotten good enough to compress that timeline. Not by cutting corners, but by handling the parts where I’d otherwise be stuck reading docs for hours.
The App
Nami (Daily Flow) is a habit tracker with a deliberate constraint: you can only track three habits. That’s the product decision. No habit hoarding. Pick three, commit, review.
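To make the constraint concrete, here is a minimal sketch of what enforcing a hard three-habit cap in the model layer might look like. The type and method names are illustrative, not Nami's actual API:

```swift
// Hypothetical sketch: a tracker that refuses a fourth habit.
struct HabitTracker {
    static let maxHabits = 3
    private(set) var habits: [String] = []

    /// Adds a habit; returns false once the three-habit cap is reached.
    @discardableResult
    mutating func add(_ habit: String) -> Bool {
        guard habits.count < Self.maxHabits else { return false }
        habits.append(habit)
        return true
    }
}
```

Enforcing the cap in the model rather than the UI means every surface (app, widget, onboarding) inherits the same invariant for free.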
The technical scope:
- SwiftUI-only: no UIKit, no external Swift package dependencies
- SwiftData for persistence
- WidgetKit + ActivityKit (Dynamic Island support)
- StoreKit 2 for in-app purchases
- UserNotifications for reminders
- Dark mode only, OLED-optimized
- Localized in English and German
- 117 Swift files, thoroughly tested
I intentionally scoped Nami to live entirely on the iPhone. That decision shaped everything.
No backend server. No CloudKit sync. No external authentication. No subscription billing infrastructure. No real-time collaboration. No multi-device sync. No push notification server. Everything runs on-device, persisted locally with SwiftData.
Each of those is a rabbit hole. CloudKit alone has a notoriously steep learning curve: conflict resolution, CKSyncEngine, subscription-based notifications, offline caching strategies. Sign in with Apple is mandatory if you offer any third-party login, which pulls in Keychain management, token handling, and session persistence. Auto-renewable subscriptions mean server-side receipt validation, grace periods, billing retry logic, and App Store Server Notifications. And multi-device sync? That’s an entire project on its own.
I skipped all of it. Not because these features don’t matter, but because none of them were necessary for a habit tracker that tracks three things on one device. The goal was a fully functional, feature-complete app that I could actually ship. Not a prototype, but also not the most complex app in the world. An MVP in the truest sense: polished, complete, and something I actually use every day.
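This is the kind of logic that scope bought me: pure on-device computation with no server in the loop. As an illustration, here is a hypothetical streak calculation over locally stored completion dates. Nami's actual implementation is not shown here; this is a sketch of the shape of the problem, using only Foundation:

```swift
import Foundation

// Hypothetical sketch: current streak of consecutive days, counting a streak
// as alive if today or yesterday was completed. Purely on-device; no backend.
func currentStreak(completions: [Date],
                   today: Date = Date(),
                   calendar: Calendar = .current) -> Int {
    let days = Set(completions.map { calendar.startOfDay(for: $0) })
    var day = calendar.startOfDay(for: today)
    // Today may not be completed yet; fall back to yesterday before giving up.
    if !days.contains(day) {
        guard let yesterday = calendar.date(byAdding: .day, value: -1, to: day),
              days.contains(yesterday) else { return 0 }
        day = yesterday
    }
    var streak = 0
    while days.contains(day) {
        streak += 1
        guard let previous = calendar.date(byAdding: .day, value: -1, to: day) else { break }
        day = previous
    }
    return streak
}
```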
The Toolchain
Here’s what I used and what each tool did.
Claude Code: Spec + Design
I used Claude Code for specification and design. Early on, I used Agent Teams to work through the app specification. Multiple Claude instances reviewing requirements, spotting gaps, and stress-testing the design before I wrote any Swift.
For UI work, I leaned on Claude Code with a custom ios-frontend-design skill that encoded SwiftUI patterns, layout rules, and the app’s visual language. Claude Code was where every idea got shaped into something buildable.
Codex CLI: Implementation
Codex CLI handled the actual implementation. Once the spec and design were solid, Codex wrote the Swift code, built out the features, and wrote the tests. It was the workhorse that turned specifications into working code.
Shared Infrastructure: XcodeBuildMCP + ASC CLI
Both Claude Code and Codex worked with the same set of custom skills and the same tooling underneath.
XcodeBuildMCP was the most critical piece of the entire setup. It’s an MCP server that gives AI agents direct control over Xcode: builds, simulator runs, screenshots, UI interaction, and running tests.
Why critical? Without it, there’s a gap. AI agents can write code, but they can’t see what it does. They can’t verify their own work. XcodeBuildMCP closes that loop. The agent writes code, builds it, runs it on the simulator, takes a screenshot, reads the UI, and decides whether to adjust. That’s not just automation. That’s the agent validating its own output. End-to-end tests become something the agent can run and react to, not just something you check after the fact.
My setup from AGENTS.md:
```yaml
session-set-defaults:
  projectPath: dots/dots.xcodeproj
  scheme: nami
  simulatorName: "iPhone 17 Pro"
  useLatestOS: true
```
With 32+ XcodeBuildMCP methods enabled, both environments could build, run unit tests, launch the simulator, take screenshots, describe the UI, and tap elements. The feedback loop ran dozens of times per session. Write code, build, test, screenshot, adjust. That’s the agentic loop, closed.
ASC CLI handled App Store Connect automation: listing apps, managing in-app purchases, uploading screenshots, and tracking submission status. Without it, I would have spent days clicking through the ASC web interface.
Writing Custom Skills
For me, this was the highest-leverage move of the entire project.
I wrote two custom skills that both Claude Code and Codex used throughout the project:
- `ios-frontend-design`: iOS design patterns, SwiftUI component conventions, and the app’s visual language. Dark mode, OLED-safe colors, layout rules.
- `ios-swift-dev`: 995 lines covering Swift 6.2 concurrency, `Observable` state management, SwiftData patterns, the XcodeBuildMCP workflow, and testing conventions. Everything I learned about iOS development, encoded so the AI wouldn’t forget it between sessions.
The skills acted as persistent memory. Instead of re-explaining the project’s architecture every session, both Claude Code and Codex loaded the skills and started with full context. This mattered most for consistency. The same naming conventions, the same error handling patterns, the same architecture across 117 files.
Every time I figured out a Swift pattern, I added it to ios-swift-dev. The AI made fewer repeat mistakes, and each new session started smarter than the last.
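For illustration, here is a hypothetical excerpt of the kind of entry that accumulated in ios-swift-dev. The wording is mine, not the actual skill file; it only restates the workflow described above:

```markdown
## Build-verify loop (XcodeBuildMCP)
1. After any UI change: build, run on the "iPhone 17 Pro" simulator, take a screenshot.
2. Check the screenshot against the dark-mode and OLED rules in ios-frontend-design.
3. Run the Swift Testing suite before marking a task done.
```

Entries like this turn one session's hard-won lesson into every future session's starting assumption.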
The Timeline
I started on January 11, 2026. Three weeks of building the app: architecture, SwiftData models, MVVM structure, the full UI in SwiftUI, notifications, onboarding, localization, widgets. All while learning Swift from scratch and tuning my Claude Code skills to produce code that actually compiled.
By early February the app was feature-complete. I spent the rest of the month on thorough testing (unit tests with Swift Testing, end-to-end validation through XcodeBuildMCP) and the App Store Connect submission process.
That second phase deserves its own section.
The App Store Connect Gauntlet
This nearly derailed the project.
Building the app was the creative part. Getting it into the App Store was a completely different kind of work: code signing, provisioning profiles, export options, screenshot specifications per device size, IAP product configuration, privacy declarations, age ratings. None of this is coding. It’s bureaucracy with a build system.
ASC CLI saved me here. Instead of clicking through Apple’s web interface for every operation, I could list apps, configure in-app purchases, and upload screenshots straight from the terminal. I wrote upload scripts that handled both EN and DE localizations. And finally: submission on February 19.
Tools That Made This Possible
| Tool | Role | Link |
|---|---|---|
| Claude Code | Spec refinement and iOS design | claude.com/product/claude-code |
| Claude Code Agent Teams | Multi-agent spec review | code.claude.com/docs/en/agent-teams |
| Claude Code Skills | Persistent iOS knowledge (shared by both) | code.claude.com/docs/en/skills |
| Codex CLI | Implementation and test writing | github.com/openai/codex |
| XcodeBuildMCP | Build, run, test, screenshot via MCP | github.com/getsentry/XcodeBuildMCP |
| ASC CLI | App Store Connect automation | asccli.sh |
Three Lessons for First-Time iOS Builders
1. Write skills, not just code.
The biggest unlock wasn’t any single tool. It was encoding what I learned into Claude Code skills. If you’re using AI for iOS development, invest in your skills files early. They compound.
2. Scope it so you can actually ship.
Every feature you add to v1 is a feature that can block your launch. CloudKit sync, server-side receipt validation, multi-device handoff, real-time collaboration: each of these is weeks of work and a new category of bugs. For my first app, I cut all of it. Everything runs on one device, persisted locally. That decision alone probably saved me a month. Ship first. The App Store doesn’t reward ambition, it rewards apps that exist.
3. Budget real time for App Store Connect.
I underestimated ASC. Code signing, screenshots, IAP configuration, privacy declarations. It took a full week even with CLI automation. Don’t treat it as a half-day task at the end. Plan for it.
Conclusion
I shipped a native SwiftUI app to the App Store having never written one before. I couldn’t have done this in a month without the AI toolchain, but the real multiplier was writing custom skills that encoded what I was learning. The tools handled the syntax. The skills handled the consistency.
Check out Nami on the App Store or read more about the project on the Nami project page.