1. Greater flexibility in using LLMs
Android Studio significantly expands options for working with language models:
• Support for local models via providers like LM Studio or Ollama (see the sketch after this list), ideal for:
– Limited connectivity environments
– Strict privacy requirements
– Experimenting with open-source models
• Gemini remains the recommended option, optimized specifically for Android development and compatible with all IDE capabilities.
• A new model selector makes it easy to switch between the available models.
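On the local-model side, providers such as Ollama expose an HTTP endpoint on your machine that the IDE connects to. Below is a minimal sketch, assuming Ollama's default port (11434), that checks the server is running and lists the models you have pulled before selecting one in Android Studio:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Minimal sketch: verify a local Ollama server is reachable and see which
// models it can serve. Assumes Ollama's default endpoint, http://localhost:11434;
// the /api/tags route returns the locally pulled models.
fun main() {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:11434/api/tags"))
        .GET()
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println("Status: ${response.statusCode()}")
    println("Available local models: ${response.body()}")
}
```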
You can now also use your own Gemini API key to access more advanced models (such as Gemini 3 Pro or Gemini 3 Flash), with:
• Larger context windows
• Higher quotas
• Better performance in long sessions, especially in Agent Mode
2. Major evolution of Agent Mode
Agent Mode takes a leap toward being a truly agentic assistant:
• It can install the app on devices, inspect the UI in real time, take screenshots, read Logcat, and interact with the running app.
• Enables more autonomous build → run → verify → fix loops.
• A new changes drawer:
– Lists all files modified by the agent
– Allows reviewing diffs, accepting, or reverting changes individually
– Improves control and traceability of agent actions
• Support for multiple conversation threads, ideal for:
– Task separation
– Reducing context noise
– Improving response quality
3. Journeys: E2E tests in natural language
Introducing Journeys for Android Studio, a new way to define end-to-end tests:
• Tests are written in natural language
• Gemini performs real interactions on the app (vision + reasoning)
• Allows complex assertions based on visual output
• More robust tests that are less brittle when layouts change
• Local or remote execution
• Detailed results:
– Step-by-step screenshots
– The action performed at each step
– The agent's reasoning
• Executed as Gradle tasks, making them easy to integrate into CI
4. Integration with remote MCP servers
Android Studio can now connect to remote Model Context Protocol (MCP) servers such as Figma, Notion, Canva, or Linear (see the protocol sketch at the end of this section):
• Drastically reduces context switching
• The agent can use external product information directly
• Key example: generate code from real Figma designs
• A step toward an IDE connected to the full product stack
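To make the integration less abstract: MCP is built on JSON-RPC 2.0, and one of the core requests a client sends is tools/list, which asks a server what capabilities it exposes. The sketch below shows the shape of that request against a purely illustrative remote endpoint; it omits the initialize handshake a real MCP session requires.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Protocol-level sketch only: MCP messages are JSON-RPC 2.0, and "tools/list"
// asks a server which tools it exposes. The endpoint URL is a placeholder and
// the initialize handshake of a real MCP session is omitted for brevity.
fun main() {
    val toolsListRequest = """{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}"""
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://mcp.example.com/mcp")) // placeholder endpoint
        .header("Content-Type", "application/json")
        .header("Accept", "application/json, text/event-stream")
        .POST(HttpRequest.BodyPublishers.ofString(toolsListRequest))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body()) // the tools the agent could call on this server
}
```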
5. Deep integration of AI in the UI flow (Compose)
AI becomes a core part of UI development in Jetpack Compose:
• Generate UI from an image or mockup
• Iterative adjustments for pixel-perfect UI against a target image
• Modify UI using natural language
• Automatic audit and correction of:
– Accessibility issues
– Visual quality issues
• All integrated into Compose Preview, without leaving context
Also improved:
• Auto-generation of valid @Preview annotations (see the example below)
• Diagnosing and fixing rendering errors in previews
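For reference, this is the kind of preview boilerplate the IDE can now generate for you; the composable and its parameters here are invented purely for illustration:

```kotlin
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

// Hypothetical composable, used only to show what a generated preview looks like.
@Composable
fun GreetingCard(name: String) {
    Text(text = "Hello, $name!")
}

// The kind of @Preview annotation Gemini can generate automatically,
// with valid parameters for the target composable.
@Preview(showBackground = true, name = "GreetingCard - default")
@Composable
fun GreetingCardPreview() {
    GreetingCard(name = "Android")
}
```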
6. App Links automation with AI
The App Links Assistant now uses Agent Mode to:
• Automatically generate deep link handling logic (see the sketch below)
• Create associated code and tests
• Display changes as diffs for easy review
This eliminates one of the most tedious and error-prone tasks in Android development.
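As a rough idea of what "deep link logic" means in practice, here is a hand-written sketch of the kind of handling code the assistant produces; the activity name, URL pattern, and navigation call are all hypothetical:

```kotlin
import android.content.Intent
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Sketch of typical App Link handling: read the incoming https URI delivered
// via ACTION_VIEW and route to the right screen. All names are hypothetical.
class ProductActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        handleAppLink(intent)
    }

    override fun onNewIntent(intent: Intent) {
        super.onNewIntent(intent)
        handleAppLink(intent)
    }

    private fun handleAppLink(intent: Intent?) {
        val uri: Uri? = intent?.takeIf { it.action == Intent.ACTION_VIEW }?.data
        val productId = uri?.lastPathSegment ?: return
        // e.g. https://example.com/products/42 -> show product 42
        showProduct(productId)
    }

    private fun showProduct(productId: String) {
        // App-specific navigation would go here.
    }
}
```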
7. Indirect but critical impact on debugging
While not “pure generative AI,” one improvement stands out:
• Automatic retracing of R8-obfuscated stack traces directly in Logcat, with no manual steps
• A major improvement to the debugging flow, especially for obfuscated release builds

