
Tech • AI • Crypto
Google unveiled new agentic AI tools, an open-source multi-agent system, the Gemma 4 model family, and details for the upcoming Google I/O livestream.
At its annual Google Cloud Next event in Las Vegas, the company centered its announcements on agentic AI, modern infrastructure, and practical deployment. The strategy emphasizes autonomous systems that can plan, coordinate, and execute complex tasks with minimal human intervention. These efforts reflect a broader push to operationalize AI beyond chat interfaces into real-world workflows.
Google introduced Race Condition, an open-source multi-agent simulation designed to model the planning of a city-scale marathon. Built on the Gemini Enterprise Agent platform, the system demonstrates how multiple AI agents can coordinate logistics such as road closures, medical placement, and compliance with regulatory standards. The project is positioned as a reference architecture for orchestrating, scaling, and securing autonomous agents.
Race Condition is designed as a deployable framework that developers can inspect and adapt. It includes a full systems architecture and codebase, allowing teams to experiment with multi-agent coordination in complex environments. By simulating a high-stakes event like a marathon, the project highlights how small errors in variables can cascade into systemic failures, underscoring the need for robust agent orchestration.
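Race Condition's actual architecture lives in its GitHub repo; as a rough, hypothetical sketch of the coordination problem it models (not the project's real API), consider agents claiming shared resources through a coordinator, where a single conflicting claim is caught before it can cascade:

```python
from dataclasses import dataclass

# Hypothetical illustration of multi-agent resource coordination.
# Race Condition's real design on GitHub will differ.

@dataclass(frozen=True)
class Proposal:
    agent: str     # e.g. "road_agent", "medical_agent"
    resource: str  # e.g. a street segment or staffing slot
    action: str

class Coordinator:
    """Serializes proposals and rejects conflicting claims on a resource."""

    def __init__(self):
        self.claims = {}  # resource -> (agent, action)

    def submit(self, p: Proposal) -> bool:
        held = self.claims.get(p.resource)
        if held is not None and held != (p.agent, p.action):
            return False  # another agent already claimed this resource
        self.claims[p.resource] = (p.agent, p.action)
        return True

coord = Coordinator()
ok_road = coord.submit(Proposal("road_agent", "main_st", "close_0600_1200"))
ok_tent = coord.submit(Proposal("medical_agent", "main_st", "tent_at_mile_3"))
print(ok_road, ok_tent)  # the second, conflicting claim is rejected
```

Without a coordinator like this, both agents would act on `main_st` independently, which is exactly the kind of single-variable error the simulation shows cascading into systemic failure.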
Google also introduced Agent CLI, a command-line tool that enables developers to build and deploy AI agents from initialization to production within a single workflow. The tool integrates with existing coding agents and platforms, including Gemini, and packages Google Cloud services into a unified interface. It also includes guided “skills” to help automate development steps.
Google DeepMind announced Gemma 4, a new family of open models derived from the same research foundations as Gemini 3. For the first time, the models are released under an Apache 2.0 open-source license, marking a significant shift toward broader accessibility and community collaboration. The release has been welcomed by platforms like Hugging Face, which is supporting the models from launch.
Gemma 4 is optimized to run across a wide range of hardware, from smartphones and laptops to cloud accelerators. The lineup includes multiple sizes, such as 4B, 26B mixture-of-experts, and 31B dense models, enabling flexibility depending on compute resources. Larger variants reportedly achieve performance comparable to models up to ten times their size.
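As a back-of-envelope illustration of why those sizes map to different hardware tiers, weight memory scales with parameter count times bytes per parameter (the sizes below come from the announcement; actual footprints depend on quantization, KV cache, and runtime overhead):

```python
# Rough weight-memory estimates for the announced Gemma 4 sizes.
# Real footprints vary with quantization and runtime overhead.

PARAMS = {"4B": 4e9, "26B-MoE": 26e9, "31B-dense": 31e9}

def weight_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

for name, n in PARAMS.items():
    print(f"{name}: ~{weight_gb(n, 2):.1f} GB in bf16, "
          f"~{weight_gb(n, 0.5):.1f} GB at 4-bit")
```

At bf16 (2 bytes/parameter) the 4B model needs roughly 8 GB for weights alone, which explains why laptops and phones typically rely on 4-bit quantization while the 26B and 31B variants target workstations and cloud accelerators.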
The Gemma 4 family is built to handle complex reasoning and agentic workflows rather than simple conversational tasks. This aligns with Google’s broader emphasis on AI systems capable of executing multi-step processes, integrating with tools, and supporting autonomous decision-making in production environments.
Google confirmed that Google I/O will take place on May 19–20, with global livestream access. The event will feature keynote presentations and two days of technical sessions covering Android, Chrome, and Cloud. Developers can register online to access the full schedule and participate remotely.
Google’s latest announcements signal a shift toward practical, open, and scalable AI systems, combining agent-based architectures with accessible model releases and developer tools.
Today we have announcements fresh from Google Cloud Next, the brand new Gemma 4, and how you can tune in to this year's Google I/O livestream.

Every year, Google Cloud Next is the largest annual gathering of developers building on Google Cloud. And just last week in Las Vegas, the focus turned squarely to the building blocks of the future: agentic AI, modern infrastructure, and hands-on problem solving. There were so many exciting announcements from this massive event.

Speaking of massive events, planning one like a city-wide marathon is a logistical nightmare. But what if a team of AI agents could orchestrate the whole thing? Google Cloud just unveiled Race Condition, a fully open-source multi-agent simulation powered by the new Gemini Enterprise Agent platform. It is a massive undertaking, but I imagine actually planning a certified marathon is even harder. You have governing bodies with strict course standards, city logistics, road closures, and medical tent placements. Exactly, and if you get one variable wrong, the whole thing can fall apart. And this complexity is exactly why we built Race Condition. Race Condition is a deployable reference architecture for orchestrating, scaling, and securing autonomous AI agents using the Gemini Enterprise Agent platform, and it is fully open source. It's a multi-agent simulation that models a marathon through Las Vegas, where we hosted Google Cloud Next this year. We created a developer solution alongside Race Condition to help you explore the systems architecture and demo experiences and inspect the code directly. To get started with Race Condition in your own Google Cloud project, check out the GitHub repo linked in the description below.

Next up from Next is something for those building with AI agents: a brand new CLI for building AI agents from scratch. You can now go from init to production all in one CLI. Today, we are super excited to announce Agent CLI.
It's a single tool that makes building and deploying agents super simple, not just for you, but also for your agent tools like Gemini CLI, Antigravity, or any other coding agent you already use. That's right. Think of Agent CLI as a bridge: it packages all the Google Cloud tools you need into a simple command-line interface. But the best part? It comes with a set of skills to guide the coding agent throughout the process. Check out the full announcement video, or if you want to get started with Agent CLI, take a look at the link in the description.

Race Condition and Agent CLI are just two of the many exciting announcements from Google Cloud Next. If you want to catch up on the full list of announcements, visit the link in the description.

Google DeepMind released their latest open model, Gemma 4, today. We are thrilled to announce Gemma 4, built from the same world-class research and technology behind Gemini 3. Gemma 4 is a family of open models designed to run directly on the hardware you own: phones, laptops, and desktops. For the first time ever, we are releasing Gemma under an open-source Apache 2.0 license. What does this mean for the open-source community? The co-founder and CEO of Hugging Face had this to say about Gemma 4: "The release of Gemma 4 under an Apache 2.0 license is a huge milestone. We are incredibly excited to support the Gemma 4 family on Hugging Face on day one." Gemma 4 is sized to run on hardware from Android devices and laptop GPUs up to developer workstations and cloud accelerators, with new model sizes including 4B, a 26B mixture-of-experts, and a 31B dense model. The entire family moves beyond simple chat to handle complex logic and agentic workflows, and the larger models deliver state-of-the-art performance for their sizes, competing with models ten times their size. To learn more and get started with Gemma 4, check out the announcement blog linked below.
Google I/O, Google's annual developer event, is right around the corner. For those who won't be in Mountain View on May 19th and 20th, we have good news: you can tune in to the livestream from anywhere in the world. I/O will kick off with the Google and developer keynotes, followed by two full days of livestreamed sessions. Tune in for the latest updates across AI, Android, Chrome, and Cloud. You can explore the Google I/O livestream schedule and register today at the link we'll include in the description below. You can catch all of the I/O livestreams and technical sessions right here on the Google for Developers YouTube channel. We hope to see you there.

That concludes all the updates we have for you in this episode. Did any of today's announcements inspire you to build something? Will you be joining us at Google I/O? What are you looking forward to the most? Drop your answers in the comments. Thanks for watching, and we'll see you next time.