Mistral AI, an AI company based in France, is on a mission to elevate publicly available models to state-of-the-art performance. They specialize in creating fast and secure large language models (LLMs) that can be used for various tasks, from chatbots to code generation.
We’re pleased to announce that two high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B, will be available soon on Amazon Bedrock. AWS is bringing Mistral AI to Amazon Bedrock as our 7th foundation model provider, joining other leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. With these two Mistral AI models, you will have the flexibility to choose the optimal, high-performing LLM for your use case to build and scale generative AI applications using Amazon Bedrock.
Overview of Mistral AI Models
Here’s a quick overview of these two highly anticipated Mistral AI models:
- Mistral 7B is the first foundation model from Mistral AI, supporting English text generation tasks with natural coding capabilities. It is optimized for low latency, with a low memory requirement and high throughput for its size. The model supports a wide range of use cases, from text summarization and classification to text completion and code completion.
- Mixtral 8x7B is a popular, high-quality sparse Mixture-of-Experts (MoE) model that is ideal for text summarization, question answering, text classification, text completion, and code generation. A sketch of what invoking one of these models through Amazon Bedrock might look like follows this list.
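To make that concrete, here is a minimal sketch of what a request to one of these models might look like through the Amazon Bedrock runtime API using the AWS SDK for Python (boto3). Because the models are not yet available, the model identifier and request body format below are assumptions for illustration only; check the Amazon Bedrock documentation for the actual model IDs and request schema at launch.

```python
import json

import boto3

# The Bedrock runtime client handles model invocation; model access and
# management live in the separate "bedrock" control-plane client.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# NOTE: hypothetical model ID and request body, used only for illustration.
# The announcement does not specify them, so verify both against the Amazon
# Bedrock documentation once the Mistral AI models are available.
MODEL_ID = "mistral.mistral-7b-instruct-v0:2"

body = json.dumps({
    "prompt": "<s>[INST] Summarize the benefits of sparse MoE models. [/INST]",
    "max_tokens": 256,
    "temperature": 0.5,
})

response = bedrock_runtime.invoke_model(
    modelId=MODEL_ID,
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream; read and decode it to get the completion.
result = json.loads(response["body"].read())
print(result)
```

The same invoke_model call works across Bedrock model providers; only the model ID and the provider-specific request body change, which is what makes it straightforward to swap models as you evaluate which one fits your use case.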
Choosing the right foundation model is key to building successful applications. Let’s have a look at a few highlights that demonstrate why Mistral AI models could be a good fit for your use case:
- Balance of cost and performance — A prominent highlight of Mistral AI's models is the balance they strike between cost and performance. The use of a sparse MoE architecture makes these models efficient and scalable while keeping costs under control (see the routing sketch after this list).
- Fast inference speed — Mistral AI models have an impressive inference speed and are optimized for low latency. The models also have a low memory requirement and high throughput for their size. This feature matters most when you want to scale your production use cases.
- Transparency and trust — Mistral AI models are transparent and customizable. This enables organizations to meet stringent regulatory requirements.
- Accessible to a wide range of users — Mistral AI models are accessible to everyone. This helps organizations of any size integrate generative AI features into their applications.
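The sparse MoE point is worth unpacking: in a Mixtral-style model, a router selects a small number of "expert" feed-forward networks for each token, so only a fraction of the model's total parameters are active on any given token. The sketch below is a generic illustration of top-k expert routing in NumPy, not Mistral AI's implementation; the function and variable names are purely illustrative.

```python
import numpy as np

def top_k_moe_layer(x, gate_w, experts, k=2):
    """Route each token to its top-k experts and combine their outputs.

    x:        (tokens, d_model) token representations
    gate_w:   (d_model, n_experts) router weights
    experts:  list of callables, each mapping (d_model,) -> (d_model,)
    k:        number of experts active per token (e.g., 2 of 8 in Mixtral 8x7B)
    """
    logits = x @ gate_w                            # (tokens, n_experts) router scores
    top_idx = np.argsort(logits, axis=-1)[:, -k:]  # indices of the k best experts per token

    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over only the selected experts' scores -> mixing weights.
        scores = logits[t, top_idx[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        # Only k experts run per token; the remaining parameters stay idle,
        # which is what keeps inference cost low relative to total model size.
        for w, e in zip(weights, top_idx[t]):
            out[t] += w * experts[e](x[t])
    return out

# Toy usage: 8 experts, 2 active per token.
rng = np.random.default_rng(0)
d, n_experts, tokens = 16, 8, 4
expert_weights = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_experts)]
experts = [lambda v, W=W: np.tanh(v @ W) for W in expert_weights]
x = rng.standard_normal((tokens, d))
gate_w = rng.standard_normal((d, n_experts)) / np.sqrt(d)
print(top_k_moe_layer(x, gate_w, experts).shape)  # (4, 16)
```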
Available Soon
Mistral AI's publicly available models are coming soon to Amazon Bedrock. As usual, subscribe to this blog so that you will be among the first to know when these models become available.
Stay tuned,
— Donnie