Google has introduced Gemini, its newest and most advanced AI model, designed to be multimodal: it can understand and combine different types of information, including text, images, audio, video, and code. The model comes in three sizes, with the flexibility to run on everything from data centers to mobile devices. Gemini was trained at scale on Google’s Tensor Processing Units (TPUs) v4 and v5e and is now available in some of Google’s core products, such as Bard, Pixel 8 Pro, and Search.

Android developers can now sign up for an early preview of Gemini Nano, and Gemini Pro will be accessible via the Gemini API in Vertex AI or Google AI Studio starting December 13. Google plans to refine Gemini Ultra and make it available to select groups before opening it up broadly to developers and enterprise customers early next year. This announcement marks the start of the Gemini era for Google’s AI capabilities.