How to create powerful apps with Google AI Studio and Gemini

  • Google AI Studio allows you to create full-stack apps with Gemini from natural language instructions, combining frontend, backend, and npm packages.
  • The Antigravity agent manages context, multiple files, and code verification so you can iterate on the app via chat, annotations, and direct editing.
  • Apps can be exported to ZIP or GitHub, deployed on Cloud Run, Vercel or other hosting services, always ensuring the security of API keys.
  • The generated web projects can be packaged as Android applications with Capacitor, generating APKs ready to install and update easily.

Create apps with Google AI Studio

If you have an idea for an app and the code has held you back until now, with Google AI Studio and the Gemini models that barrier is much lower. You no longer need to be a React or Node.js guru to build a decent prototype: simply describe what you want and let the AI roll up its sleeves.

In this article you will see, step by step, how to create apps with Google AI Studio: what "vibe coding" means, how to take advantage of the full stack it offers (frontend, backend, npm, secrets…), how to connect it to GitHub, deploy it to Cloud Run or Vercel, and even how to package your web project as a native Android app using Capacitor. It's a long journey, but if you take it step by step, you can go from idea to functional app in no time.

What is Google AI Studio and why is it so useful for creating apps?

Google AI Studio is Google's web environment for working with the Gemini API and SDK. Its motto is something like "build your ideas with Gemini," and it lives up to it: it lets you start with a simple natural language prompt and end up with functional applications or complete prototypes ready to test, export, or integrate into your own projects.

Unlike the "for everyone" Gemini app, AI Studio gives you fine control over the behavior of the models. You can choose specific versions (Gemini Flash, Pro, Flash Lite, etc.) and access other models in the family such as Imagen, Veo, Gemini TTS, Gemini Native Audio, or Nano Banana. This combination of models makes it possible to build multimodal apps that work with text, images, audio, or video without breaking a sweat.

Another great advantage is that Google AI Studio works as a kind of AI-assisted IDE in the browser. You can write prompts, see responses, review the generated code, interact with it manually, iterate with the AI, and ultimately download the code in formats such as Python, JavaScript, Go, or cURL, or connect it to your own backend using the Gemini API.

All of this runs in the cloud, so you just need a browser and a Google account. There's nothing to install, your device doesn't need to be powerful, and the free tier provides plenty of room to learn, experiment, and build prototypes. When you start using Gemini API keys seriously, you will switch to a pay-per-use model, similar to any cloud platform.

Google AI Studio interface

“Vibe coding”: programming by talking to AI

One of the key concepts in Google AI Studio is "vibe coding". Basically, it's about building applications by clearly describing what you want your app to do and how it should behave, instead of writing all the files and functions from scratch.

In practice, "vibe coding" lets you start projects without yet mastering the entire technical ecosystem. You can begin building a website, an internal tool, or a sophisticated prototype simply by chatting with Gemini and refining the result through iterations.

However, as the project grows, technical knowledge becomes important again: if you want a secure, efficient, and scalable application, understanding some programming, architecture, and best practices is still essential. Google AI Studio gives you a comfortably low entry point, but it's no substitute for the experience of a professional environment.

The great thing is that it works very well as a space to experiment, validate ideas, and learn. You can present your idea, see how the AI translates it into code, review the logic, ask why it has done something a specific way, and refine it until the result fits what you are looking for.

In fact, a good approach is to separate your projects into two phases: a first one to define the context, interface, and basic logic, and a second one to integrate databases, authentication, and multi-user capabilities once you have a clear understanding of the flow and the experience.

How to start creating apps in Google AI Studio's Build mode

When you enter Google AI Studio, you will see several sections: Playground (to chat and try out models), Build (to create apps), and Dashboard (for managing API keys and usage). To build applications, the key section is Build, which is where vibe coding lives.

Within Build mode, you have several ways to start your project. You can begin with a natural language instruction, use the "I'm Feeling Lucky" button so the AI suggests an app idea, or remix a gallery project by duplicating and adapting it.

If you choose to start with your own description, you write the idea for the application you want to build in the input box. You can supplement that prompt with the AI chips, quick options to indicate that you want, for example, image generation, integration with Google Maps, or certain data capabilities. You can even dictate the prompt using the voice-to-text button if you find that more convenient.

Another option, very useful when you're short on inspiration, is the "I'm Feeling Lucky" button. You click it, the platform generates a project suggestion with an initial prompt, and from there you can adapt what it proposes to your real needs.

And if you prefer to see finished examples first, you can dive into the App Gallery. There you have a visual collection of projects created with Gemini: you open them, test how they work, and if an idea suits you, you click "Copy app" to use it as a template and modify it to your liking.

What does Google AI Studio generate when you run your prompt?

When you launch your instruction in Build mode, AI Studio automatically generates a working application. By default, it creates a full-stack environment that includes a client (frontend) and a server (backend) based on current web ecosystem technologies.

On the client side, the tool usually uses a React frontend as the default configuration. It builds the interface components and manages basic state, views, and user interaction, all based on what you've described in your prompt.

On the server side, it sets up a Node.js runtime prepared to make secure calls to the Gemini API, connect to databases, use npm libraries, and handle business logic without exposing keys on the client.

All the code is organized into multiple files, and from the right panel you can switch between the Preview of the running app and the Code tab, where you can view and edit each file. This code view is very useful for understanding what the AI has done and for making manual adjustments on the fly.

In the background, what holds all of this together is the so-called Antigravity agent, an AI agent designed to coordinate multiple files, maintain project context, and ensure that changes propagate correctly throughout the stack.

The Antigravity agent: the “brain” that coordinates your app

The Antigravity agent is the AI component that brings coherence to full-stack projects in Google AI Studio. It doesn't just spit out code: it maintains the global context of the project, understanding what you've asked for before and how your app is structured.

One of its key functions is context understanding: it remembers the previous instructions you gave it, the state of the files, the runtime configuration, and the existing business logic. This allows it to apply your new requests while respecting what was already there.

It is also responsible for managing multiple files, controlling dependencies and the relationships between modules. When you request a change that affects several parts (for example, adding a new endpoint in the backend and a button that calls it in the frontend), the agent knows which files to touch and how to connect them.

Finally, it incorporates a system of "verified execution": it reviews the proposed code modifications and corrects inconsistencies typical of automatic generation, minimizing "hallucinations" or silly errors that would break the app.

All of this lets you maintain an ongoing dialogue with Gemini about your project. You describe a problem, attach an error message, explain the behavior you expect, and the agent updates the code to move closer to that goal, keeping the application architecture under control.

Full stack capabilities: server, npm, secrets, and real-time

One of the great strengths of Google AI Studio is that it doesn't stop at pretty client-side demos: it lets you create full-stack applications with a real backend, npm packages, and serious data and security management.

On the server side you have a fully functional Node.js environment with access to npm's vast library of packages. The agent itself can identify and install the dependencies needed for your case: visualization libraries, API clients, validation tools, and so on.

If you want something specific, you can ask in your prompt that it use a particular npm library, for example to connect to a database, manage dates, or work with CSV files. The runtime will install those packages and integrate them into the server code.

The platform includes a secure secrets management system. In the settings menu you can save API keys and other sensitive data that will only be accessible from the server code, so they are never injected into the client's JavaScript or made visible in the browser.

In addition, AI Studio lets you create multiplayer or real-time collaboration experiences, where multiple users interact simultaneously. The backend takes care of maintaining shared state, connections, and synchronization between clients.

Working and iterating within Google AI Studio

Once AI Studio has generated the initial version of your app, you have several ways to keep improving it without leaving the environment. You can combine AI-assisted editing with manual code editing, as suits you at any given moment.

On one side there is the chat panel in Build mode. There you can ask Gemini for global changes ("make the design more minimalist", "add an additional filter system", "integrate a bar chart with this data library") and watch it adjust the app automatically.

On the other hand, the Code tab lets you edit the application files directly. You make a change, save it, and instantly see the result in the Preview. If something breaks, you can always ask the agent to "review and correct compilation errors" with the current code.

An interesting bonus is the so-called annotation mode. It lets you visually highlight an interface element in the preview, type what you want to change right there, and let the AI translate that annotation into the appropriate code changes.

Once you have something you like, you can share your app from AI Studio so that others can test it and collaborate, or move directly to the deployment phase on services like Google Cloud Run.

Export code and continue developing outside of AI Studio

There comes a time when you might want to integrate your app into a more traditional workflow, or continue developing it in your favorite editor. For that, Google AI Studio offers several export and repository-integration options.

One simple way is to download your app as a ZIP file. You get all the files generated by the platform (HTML, CSS, JS, components, build configuration, etc.), unzip them on your machine, and open them in VS Code, WebStorm, or your preferred editor.

Another very convenient alternative is the GitHub integration. From the AI Studio interface itself, you can click "Save to GitHub", name the repository, choose whether it will be private or public, authorize the permissions, and let the platform make the first commit for you.

Once the repository is on GitHub, it's very easy to connect it to your CI/CD system or to platforms like Vercel, Netlify, or GitHub Pages. Each push to the main branch (or whichever one you've configured) can trigger a new build and automatic deployment.

And if you prefer something fast without a sophisticated pipeline, you can always download only the core files (for example index.html, script.js, styles.css), put them all in the same folder and open the HTML in your browser to test the app locally outside of AI Studio.

Sharing, deployment, and security limits when creating apps

Once your application is ready, or at least in a showable state, Google AI Studio offers several options to share and deploy it so that it is accessible from a public URL.

Within the platform itself you can generate a shared link to your app; anyone with that URL will be able to open and use it. If the end user sees an error like "403 Access Restricted", it is usually due to browser extensions that block scripts (Privacy Badger and similar) or to compilation problems in the code.

If it's an extension problem, simply disable the ad blocker temporarily. If it's the code itself, you can ask the agent to "correct any compilation problems with the current code" and share the link again once it's fixed.

For more serious deployments, you have two main options: Google Cloud Run, which lets you deploy the app as a scalable service on Google's infrastructure, or GitHub, from where you can publish to your trusted hosting provider or to services like Vercel, Netlify, or GitHub Pages.

However, you have to be careful with Gemini API keys and other secrets. Never put real keys in the client code; they should always live on the server, whether in the AI Studio runtime with its secrets manager, in Cloud Run, or in your own secure backend.

Key security, external deployments, and current limitations

One of the tricky points when creating apps with Google AI Studio is how to manage API keys and other sensitive tokens. The golden rule is clear: never on the client, always on the server or in a controlled environment.

On the client side, placeholders should never be replaced with actual keys in the JavaScript running in the browser. Any user could open the developer tools and copy the key, risking unauthorized access or uncontrolled API costs.

On the server side, whether in the AI Studio runtime, Cloud Run, or your own backend, use secret managers and environment variables. From AI Studio you can save your keys in the configuration panel and access them only from server-side code.

When you export the app and deploy it externally (for example, to a pure JavaScript hosting service that runs everything on the client), you need to move the logic that uses the key to a server component. That part can go in a serverless function, a Node API, a Python backend, or wherever you prefer, but never embedded in the frontend bundle.

If you deploy to Cloud Run directly from AI Studio, the platform already keeps the key protected in the server environment, so you don't need to restructure the logic as much. But if you switch to another provider, it's a good idea to double-check that no sensitive credentials have been left in public files.

Write good prompts and structure your apps with Gemini

With all this power, the bottleneck is usually the prompt. The clearer and more structured your request, the better the app Google AI Studio will generate. Simply saying "make me a working app" isn't enough; you need to provide context and details.

A useful approach is to start with an informal conversation with an AI assistant (Gemini, ChatGPT, etc.) to organize your ideas, and then turn them into a robust prompt that you paste into AI Studio's Build mode.

That prompt should make clear the general objective of the app, who will use it, and what main actions the user will perform. It is also advisable to specify what kind of inputs it will have (forms, files, links) and what outputs you expect (reports, summaries, charts, thumbnails, etc.).

It also helps to include specific rules and best practices, especially if your app generates content. For example, for a tool that produces YouTube titles and descriptions, you can set limits on characters, tone, language, moderate use of hashtags, and clear calls to action.

Once the first version is generated, the pattern is always the same: test, observe, adjust. If something isn't behaving as you want, you don't need to dig through all the code: describe the problem clearly (including error messages, if any) and ask the AI for a specific change instead of something generic.

From Google AI Studio to the web: GitHub, Vercel, and continuous deployment

To bring your app to the real world, a very convenient combination is Google AI Studio + GitHub + Vercel. With that, you can go from an internal prototype to a public application in a few hours and get automatic deployments every time you update the code.

The typical workflow would be: first, create the app in AI Studio using Build mode, fine-tuning the logic and UI until it does what you intend. Then, use the option to “Save to GitHub” to create a repository with all the code.

Once the code is on GitHub, you go to Vercel, connect your GitHub account, choose the repository you just generated, and let the platform detect the framework (usually Vite/React or another) and configure the build automatically.

In Vercel you will need to add an environment variable with the Gemini API key (for example, VITE_API_KEY or similar), which you will have obtained beforehand from the Google AI Studio panel in the "Get API Key" section. This keeps the key out of the repository, but bear in mind that Vite inlines any VITE_-prefixed variable into the client bundle, so if the key must stay truly secret, route the Gemini calls through a server component instead.

Once everything is set up, you perform an initial deployment. Vercel will install the dependencies, compile the app, and give you a public URL. From there, each push to the main GitHub branch will trigger a new build and deployment without you having to do anything else.

From web to Android: Capacitor, APK and mobile testing

If you want to go a step further and bring your Google AI Studio project to mobile, you can package your web app as a native Android application using Capacitor. This is great if you have an AI-based tool that you want to use or distribute as a traditional app.

The starting point is your web project generated by AI Studio, usually with Vite as the bundler. On your computer you need Node.js and npm, plus Android Studio with the corresponding SDK to compile and generate the APK.

The basic steps are: open your web project in your editor or a terminal, run npm install to install the dependencies, and then npm run build to generate the production folder (dist, build, or www, depending on the configuration).

Then you install @capacitor/android, initialize Capacitor with npx cap init (specifying the app name and package identifier), and add the Android platform with npx cap add android. With npx cap sync android you synchronize the web build with the native project that was just created.

From there you can open your project's android folder in Android Studio as if it were a native app. After Gradle syncs, go to Build > Generate Signed Bundle / APK and, creating or loading your keystore, build a debug APK for testing or a release APK for distribution.

Maintenance, updates, and troubleshooting common problems

When you want to publish a new version of your Android app derived from Google AI Studio, the process is much faster: simply update your web code, rebuild and resynchronize with Capacitor, and keep a backup of the previous version.

In practice, you make the changes to your project (either from AI Studio, exporting a new version, or manually), run npm run build again at the root, then npx cap sync android so the new build is copied into the native project, and finally generate another APK from Android Studio.

Along this path, some recurring problems tend to appear: Gradle errors caused by permissions or antivirus software on Windows, generated images that do not respect the 16:9 aspect ratio, or content generated in English when you need it in Spanish.

Gradle errors are usually fixed by clearing caches (deleting the .gradle folder in your user directory), adding antivirus exclusions for the SDK and Android project folders, and letting Android Studio re-sync everything from scratch.

Regarding image generation for thumbnails or other assets, it is advisable to test several image models (for example, specific versions of Imagen, or Nano Banana) and be very explicit in the prompts, indicating "16:9 aspect ratio" and "text in Spanish" if you want the visible text to appear in Spanish.

Together, the ecosystem of Google AI Studio, Gemini, GitHub, Vercel, and Capacitor gives you a complete chain: ideate, prototype, deploy to the web, and package for mobile without having to build the entire infrastructure from scratch, something that a few years ago was much slower and more complicated for developers and, above all, for those just starting out.

Conclusion

Everything we've seen makes Google AI Studio a kind of digital workshop where you can test ideas, shape them in a matter of hours, and bring them to both the web and Android. By leveraging Gemini to generate code, integrate AI models, fine-tune the user experience, protect your keys, and automate deployments, and by adding good prompt management, a bit of technical judgment, and the extra tools from GitHub, Cloud Run, Vercel, and Capacitor, the leap from having an idea in your head to having a working app in your browser or on your phone becomes much more accessible, even if you don't come from the world of traditional development.
