Google AI Studio has received a major coding update, with a new interface, buttons, tips, and community features that let anyone with an idea for an app (even beginners, laymen, or non-developers like yours truly) bring it to life and deploy it live on the web, where anyone can use it, within minutes.
The updated Build tab is now available at ai.studio/build, and getting started is free.
Users can experiment with building apps without needing to enter payment information upfront, although certain advanced features like Veo 3.1 and Cloud Run deployment require a paid API key.
It seems to me that the new features make Google's AI models and offerings even more competitive, perhaps even preferable for many general users, against rivals from dedicated AI startups, namely Anthropic's Claude Code and OpenAI's Codex, two "vibe coding" focused products that are loved by developers but seem to have a higher barrier to entry and require more technical knowledge.
A New Beginning: Redesigned Build Mode
The updated Build tab serves as an entry point to vibe coding. It introduces a new design and workflow in which users select from Google's suite of AI models and capabilities to power their applications. The default is Gemini 2.5 Pro, a strong fit for most use cases.
Once selections are made, users simply describe what they want to build and the system automatically assembles the necessary components using Gemini APIs.
This mode supports mixing capabilities such as Nano Banana (Google's popular image editing model), Veo (for video generation), Imagen (for image generation), Flash-Lite (for performance-optimized inference), and Google Search.
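Under the hood, these capabilities are exposed through the Gemini API. As a rough sketch of what the generated glue code might look like (the capability-to-model mapping and helper names here are my own assumptions, not AI Studio's actual output), a TypeScript app could route a prompt to a chosen model like this:

```typescript
// Sketch only: routing an app "capability" to a Gemini model and calling
// the generateContent REST endpoint. The mapping and names are assumed.
const CAPABILITY_MODELS: Record<string, string> = {
  chat: "gemini-2.5-pro",        // default conversation/reasoning model
  fast: "gemini-2.5-flash-lite", // performance-optimized inference
};

// Pick a model ID for a capability, falling back to the default.
export function modelFor(capability: string): string {
  return CAPABILITY_MODELS[capability] ?? CAPABILITY_MODELS.chat;
}

// Build the JSON body for a generateContent request.
export function buildGenerateRequest(prompt: string) {
  return { contents: [{ parts: [{ text: prompt }] }] };
}

// Send the request (requires a GEMINI_API_KEY in the environment).
export async function generate(capability: string, prompt: string): Promise<string> {
  const url = `https://generativelanguage.googleapis.com/v1beta/models/${modelFor(capability)}:generateContent`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-goog-api-key": process.env.GEMINI_API_KEY ?? "",
    },
    body: JSON.stringify(buildGenerateRequest(prompt)),
  });
  const data = await res.json();
  // The first candidate's first text part carries the model's reply.
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```

The point of the Build tab is that users never have to write this plumbing themselves; AI Studio assembles it from the capabilities they tick.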
Patrick Löber, Developer Relations at Google DeepMind, highlighted that the experience is intended to help users "power their applications with AI" through a simple app creation process.
In a video demo he posted on X and LinkedIn, he showed how just a few clicks automatically generated a garden planning assistant app, complete with layouts, images, and a conversational interface.
From prompt to production: creation and editing in real time
Once an application is built, users land in a fully interactive editor. On the left is a chat interface where developers can converse with the AI model for help or suggestions. On the right, a code editor displays the application's complete source code.
Each component, such as React entry points, API calls, or style files, can be edited directly. Tooltips help users understand what each file does, which is especially useful for those less familiar with TypeScript or frontend frameworks.
Apps can be saved to GitHub, downloaded locally, or shared directly. Deployment is possible within the Studio environment or through Cloud Run if advanced scaling or hosting is needed.
Inspiration on demand: the “I’m feeling lucky” button
A notable feature of this update is the “I’m feeling lucky” button. Designed for users who need a creative boost, it generates random app concepts and configures the app accordingly. Each press generates a different idea, complete with suggested AI features and components.
Examples produced during the demonstrations include:
- An interactive map-based chatbot powered by Google Search and conversational AI.
- A dream garden designer using advanced imaging and planning tools.
- A trivia game app with an AI host whose personality users can define, integrating Imagen and Flash-Lite with Gemini 2.5 Pro for conversation and reasoning.
Logan Kilpatrick, product lead for Google AI Studio and the Gemini API, noted in his own demo video that this feature encourages discovery and experimentation.
“You get really interesting and different experiences,” he said, emphasizing its role in helping users find novel ideas quickly.
Practical test: from prompt to app in 65 seconds
To test the new workflow, I asked Gemini:
"A random dice rolling web app where the user can select from common dice sizes (6 sided, 10 sided, etc.) and then watch an animated dice roll, and also choose the color of their dice."
In 65 seconds (just over a minute), AI Studio returned a fully functional web application featuring:

- A dice size selector (d4, d6, d8, d10, d12, d20)
- Color customization options for the die
- An animated rolling effect with random results
- A clean, modern UI built with React, TypeScript, and Tailwind CSS
The platform also generated a complete set of structured files, including App.tsx, constants.ts, and separate components for controls and dice logic.
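To give a flavor of those files, here is a minimal reconstruction of what the dice constants and rolling logic might look like (my own sketch, assuming names like DICE_TYPES and rollDie; not AI Studio's actual output):

```typescript
// Hypothetical reconstruction of a generated constants.ts / dice helper --
// illustrative only, not the code AI Studio actually produced.
export const DICE_TYPES = [4, 6, 8, 10, 12, 20] as const;
export type DiceType = (typeof DICE_TYPES)[number];

// Roll a single die: a uniform integer in [1, sides].
export function rollDie(sides: DiceType): number {
  return Math.floor(Math.random() * sides) + 1;
}

// Roll several dice of the same size, e.g. "3d6".
export function rollMany(count: number, sides: DiceType): number[] {
  return Array.from({ length: count }, () => rollDie(sides));
}
```

A separate React component would then animate the roll and render the result, which is roughly how AI Studio split the generated project into controls and dice logic.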
After generation, it was easy to iterate: adding sound effects for each interaction (rolling, choosing a die, changing its color) required only one follow-up message to the built-in assistant. Incidentally, this addition was suggested by Gemini itself.
From there, the app can be previewed live or exported using built-in controls to:
- Save to GitHub
- Download the full code base
- Copy the project to remix
- Deploy through integrated tools
My brief hands-on test showed how quickly even small utility applications can go from an idea to an interactive prototype, without leaving the browser or writing repetitive code manually.
AI-suggested feature improvements and refinements
In addition to code generation, Google AI Studio now offers contextual feature suggestions. These recommendations, generated with Gemini's Flash-Lite model, analyze the current application and propose relevant improvements.
In one example, the system suggested a feature that displays the history of previously generated images in an image studio tab. These iterative improvements let developers expand an application's functionality over time without starting from scratch.
Kilpatrick emphasized that users can continue to refine their projects as they go, combining automatic generation and manual adjustments. “You can go in and continue editing and refining the experience you want iteratively,” he said.
Free to start, flexible to grow
The new experience is available at no cost to users who want to experiment, prototype, or create lightweight applications. You do not need to enter credit card information to start vibe coding.
However, more powerful capabilities, such as using models like Veo 3.1 or deploying through Cloud Run, require switching to a paid API key.
This pricing structure aims to lower the barrier to entry for experimentation while providing a clear path to scale when necessary.
Designed for all skill levels
One of the central goals of launching vibe coding is to make AI application development accessible to more people. The system supports both high-level visual creation and low-level code editing, creating a workflow that works for developers of all experience levels.
Kilpatrick mentioned that while he is more familiar with Python than TypeScript, he still finds the editor useful due to the helpful file descriptions and intuitive layout.
This focus on usability could make AI Studio an attractive option for developers exploring AI for the first time.
More to come: a week of releases
The release of vibe coding is the first in a series of announcements expected throughout the week. While no specific future features have been revealed yet, both Kilpatrick and Löber hinted that additional updates are on the way.
With this update, Google AI Studio is positioned as a flexible, easy-to-use environment for creating AI-powered applications, whether for fun, prototyping, or production deployment. The goal is clear: make the power of Gemini APIs accessible without unnecessary complexity.
