49. We introduced PaLM 2, our next generation language model. It’s faster and more efficient than previous models — and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases.
50. PaLM 2 powers more than 25 products announced at I/O, and dozens of product teams across Google are using it.
51. And PaLM 2 powers our new PaLM API.
52. Our health research teams used PaLM 2 to create Med-PaLM 2, which is fine-tuned for medical knowledge to help answer questions and summarize insights from a variety of dense medical texts. We’re now exploring multimodal capabilities, so it can synthesize patient information from images, like a chest x-ray or mammogram, to help improve patient care.
53. We’re opening Med-PaLM 2 up to a small group of Cloud customers for feedback later this summer to identify safe, helpful use cases.
54. We’re already at work on Gemini, our first model created from the ground up to be multimodal, highly capable at different sizes and efficient at integrating with other tools and APIs. Gemini is still in training, but it’s already exhibiting multimodal capabilities not seen in prior models.
55. We announced a handful of updates and improvements to Bard, our experiment that lets you collaborate with generative AI, including a new Dark theme!
56. You’ll soon be able to use images in your Bard prompts, allowing you to boost your creativity in completely new ways.
57. Access to Bard in English is also expanding to over 180 countries. And starting today, you can use Bard in Japanese and Korean.
58. We’re on track to make Bard available in the 40 most spoken languages by the end of the year, so more people can collaborate with it in their native languages.
59. We also removed the waitlist so more people can interact directly with Bard.
60. Starting next week, we’re making code citations even more precise. If Bard brings in a block of code, just click the annotation and Bard will underline the block and link to the source.
61. Coming soon, Bard will become more visual by including images in its responses, giving you a much better sense of what you’re exploring.
62. In the future, you’ll see Bard integrated not only with Google services, but with popular apps you use — like Adobe, Instacart and Khan Academy.