Google launches Gemma 2, its next generation of open models


Built for developers and researchers

Gemma 2 is not only more powerful, it’s designed to more easily integrate into your workflows:

  • Open and accessible: Just like the original Gemma models, Gemma 2 is available under our commercially friendly Gemma license, giving developers and researchers the ability to share and commercialize their innovations.
  • Broad framework compatibility: Easily use Gemma 2 with your preferred tools and workflows thanks to its compatibility with major AI frameworks and runtimes: Hugging Face Transformers; JAX, PyTorch, and TensorFlow via native Keras 3.0; vLLM; Gemma.cpp; Llama.cpp; and Ollama. In addition, Gemma 2 is optimized with NVIDIA TensorRT-LLM to run on NVIDIA-accelerated infrastructure, or as an NVIDIA NIM inference microservice. You can fine-tune today with Keras and Hugging Face (see the sketch after this list), and we are actively working to enable additional parameter-efficient fine-tuning options.
  • Effortless deployment: Starting next month, Google Cloud customers will be able to easily deploy and manage Gemma 2 on Vertex AI.
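
As a concrete illustration of the Transformers path, here is a minimal sketch that loads a Gemma 2 checkpoint and generates a response. It is not official setup code: the hub ID "google/gemma-2-9b-it" and the dtype/device settings are assumptions to verify against the model card, and fine-tuning would proceed from the same loaded model via the usual Trainer or PEFT workflow.

```python
# Minimal sketch: run a Gemma 2 checkpoint with Hugging Face Transformers.
# The hub ID and dtype below are assumptions -- check the model card for
# the exact identifier and any license gating before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed instruction-tuned checkpoint ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the model's memory footprint manageable
    device_map="auto",
)

# The chat template wraps the prompt in the turn markers the model expects.
messages = [{"role": "user", "content": "Explain retrieval-augmented generation in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```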

Explore the new Gemma Cookbook, a collection of practical examples and recipes to guide you through building your own applications and fine-tuning Gemma 2 models for specific tasks. Discover how to easily use Gemma with your tooling of choice, including for common tasks like retrieval-augmented generation.
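The cookbook covers retrieval-augmented generation in depth; as a schematic of the idea, the sketch below retrieves the documents most relevant to a question and packs them into a grounded prompt. The bag-of-words scoring is a stand-in to keep the example self-contained; a real pipeline would use an embedding model and a vector store.

```python
# Schematic RAG sketch: rank documents against the question, then build
# a grounded prompt. The word-overlap scoring here is only illustrative.
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer from context only."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

docs = [
    "Gemma 2 is available under the commercially friendly Gemma license.",
    "Keras 3.0 runs on JAX, PyTorch, and TensorFlow.",
    "Gemma 2 is optimized with NVIDIA TensorRT-LLM.",
]
prompt = build_prompt("What license does Gemma 2 use?", docs)
# `prompt` can then be passed to any Gemma runtime (Transformers, Ollama, vLLM, ...).
print(prompt)
```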

Responsible AI development

We’re committed to providing developers and researchers with the resources they need to build and deploy AI responsibly, including through our Responsible Generative AI Toolkit. The recently open-sourced LLM Comparator helps developers and researchers with in-depth evaluation of language models. Starting today, you can use the companion Python library to run comparative evaluations with your model and data, and visualize the results in the app. Additionally, we’re actively working on open-sourcing our text watermarking technology, SynthID, for Gemma models.
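
For a sense of what a comparative evaluation input looks like, the sketch below assembles paired model outputs into a JSON file for the LLM Comparator app. The field names follow the side-by-side format the app consumes, but both the exact schema and the companion library's own helpers should be verified against the project repository; this is an illustration, not the library's API.

```python
# Illustrative sketch: write a side-by-side comparison file that the
# LLM Comparator app can load. Field names and the score convention are
# assumptions -- verify the schema against the project's documentation.
import json

comparison = {
    "models": [{"name": "gemma-2-9b-it"}, {"name": "my-finetuned-gemma"}],
    "examples": [
        {
            "input_text": "Summarize the Gemma license in one sentence.",
            "output_text_a": "...",  # response from model A
            "output_text_b": "...",  # response from model B
            # The score's sign encodes which model the judge preferred;
            # check the repository for the exact convention.
            "score": 0.5,
        },
    ],
}

with open("comparison.json", "w") as f:
    json.dump(comparison, f, indent=2)
# Load comparison.json in the LLM Comparator web app to browse the results.
```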

When training Gemma 2, we followed our robust internal safety processes, filtering pre-training data and performing rigorous testing and evaluation against a comprehensive set of metrics to identify and mitigate potential biases and risks. We publish our results on a large set of public benchmarks related to safety and representational harms.


