While AI coding startups like Cursor and Replit have reached billion-dollar valuations, mobile vibe coding apps are failing to capture user interest. New data from Appfigures reveals that even the most popular mobile coding apps have generated minimal downloads and virtually no revenue, highlighting a stark disconnect between desktop AI coding success and mobile adoption.
The numbers deliver a sobering reality check for mobile vibe coding. While Cursor's Anysphere commands a $9.9 billion valuation and Replit hit $3 billion on $150 million in annualized revenue, their mobile counterparts are barely registering on app store charts.
According to Appfigures analysis shared with TechCrunch, the leading mobile vibe coding app, Instance: AI App Builder, has managed just 16,000 downloads and a meager $1,000 in consumer spending. The second-place app, Vibe Studio, pulled in 4,000 downloads but generated zero revenue, a far cry from the billions flowing into desktop AI coding tools.
This mobile drought persists even as new players enter the space. Vibecode, which launched this year with $9.4 million in seed funding from Reddit co-founder Alexis Ohanian's Seven Seven Six, promises to let users build mobile apps with AI directly on their iPhones. But the startup is so new that Appfigures doesn't yet track its performance data.
The mobile coding disconnect becomes clearer when you look at where AI-generated apps actually make money. RevenueCat, the subscription platform used by over 50,000 apps, revealed to TechCrunch that it now powers in-app purchases for more than 50% of all AI-built iOS apps on the market. The company's head of partnerships noted that AI-referred signups surged to over 35% of all new customers in Q2 2025, up dramatically from below 5% in Q2 2024.
But here's the twist: these AI-built apps making money through RevenueCat aren't being created on mobile devices. They're being coded on desktop platforms like Cursor and Claude Code, then deployed to mobile app stores. RevenueCat's MCP server integration allows desktop AI coders to quickly configure subscriptions and test monetization features, but the actual coding happens on traditional computers.
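To make that concrete, here is a minimal sketch of the kind of subscription wiring RevenueCat's public Swift SDK handles for these desktop-built iOS apps. The API key and the "pro" entitlement name below are placeholders for illustration, not values from any app mentioned in this story.

```swift
import RevenueCat

/// One-time SDK setup, typically done at app launch.
/// The API key is a placeholder; real keys come from the RevenueCat dashboard.
func configurePurchases() {
    Purchases.configure(withAPIKey: "appl_placeholderAPIKey")
}

/// Fetches the current offering configured in the RevenueCat dashboard
/// and purchases its first available package (e.g., a monthly subscription).
func buyFirstAvailablePackage() {
    Purchases.shared.getOfferings { offerings, error in
        guard let package = offerings?.current?.availablePackages.first else {
            print("No offerings available:", error?.localizedDescription ?? "none configured")
            return
        }
        Purchases.shared.purchase(package: package) { _, customerInfo, _, userCancelled in
            // "pro" is a hypothetical entitlement identifier defined in the dashboard.
            if !userCancelled, customerInfo?.entitlements["pro"]?.isActive == true {
                print("Subscription unlocked")
            }
        }
    }
}
```

The point of abstracting monetization this way is that a developer (or an AI coding agent) never touches StoreKit receipt validation directly, which is part of why so many AI-built apps reach for a hosted subscription layer.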
The technical reality explains much of this mobile gap. Conversations with developers revealed that AI-generated code still requires significant human oversight and debugging. A survey of nearly 800 developers found that 95% needed extra time to fix AI-generated code, a process that's particularly challenging on mobile screens and touch interfaces.