Mistral AI models coming soon to Amazon Bedrock

Mistral AI, an AI company based in France, is on a mission to elevate publicly available models to state-of-the-art performance. They specialize in creating fast and secure large language models (LLMs) that can be used for various tasks, from chatbots to code generation. We’re pleased to announce that two high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B, will be available soon on Amazon Bedrock. AWS is bringing Mistral AI to Amazon Bedrock as our 7th foundation model provider, joining other leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. With these two Mistral AI models, you will have the flexibility to choose the optimal, high-performing LLM for your use case to build and scale generative…
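Once the models are live, you will invoke them through the Amazon Bedrock runtime the same way as other providers' models. As a minimal sketch, the helper below builds a JSON request body using Mistral's instruction-prompt convention; the model identifier and body fields shown are assumptions for illustration, so check the Amazon Bedrock documentation for the exact schema at launch.

```python
import json

# Hypothetical Bedrock model identifier for Mistral 7B Instruct (assumption).
MODEL_ID = "mistral.mistral-7b-instruct-v0:2"

def build_request(prompt: str, max_tokens: int = 256, temperature: float = 0.5) -> str:
    """Wrap a prompt in Mistral's [INST] instruction format and
    serialize the (assumed) request body for the Bedrock runtime."""
    body = {
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return json.dumps(body)

# With the AWS SDK for Python, the invocation would then look roughly like:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(modelId=MODEL_ID, body=build_request("..."))

request_body = build_request("Summarize Retrieval Augmented Generation in one sentence.")
print(request_body)
```

Because the body is plain JSON, the same helper works for batch scripts or tests without any AWS credentials; only the final `invoke_model` call needs a configured client.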

AWS Weekly Roundup — AWS Control Tower new API, TLS 1.3 with API Gateway, Private Marketplace Catalogs, and more — February 19, 2024

Over the past week, our service teams have continued to innovate on your behalf, and a lot has happened in the Amazon Web Services (AWS) universe that I want to tell you about. I’ll also share news about the AWS Community events and initiatives happening around the world. Let’s dive in! Last week’s launches Here are some launches that got my attention during the previous week. AWS Control Tower introduces APIs to register organizational units – With these new APIs, you can extend governance to organizational units (OUs) and automate your OU provisioning workflow. They can also be used to re-register OUs that are already under AWS Control Tower governance after landing zone…

Knowledge Bases for Amazon Bedrock now supports Amazon Aurora PostgreSQL and Cohere embedding models

During AWS re:Invent 2023, we announced the general availability of Knowledge Bases for Amazon Bedrock. With a knowledge base, you can securely connect foundation models (FMs) in Amazon Bedrock to your company data for Retrieval Augmented Generation (RAG). In my previous post, I described how Knowledge Bases for Amazon Bedrock manages the end-to-end RAG workflow for you. You specify the location of your data, select an embedding model to convert the data into vector embeddings, and have Amazon Bedrock create a vector store in your AWS account to store the vector data, as shown in the following figure. You can also customize the RAG workflow, for example, by specifying your own custom vector store. Since my previous post in November,…
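The retrieval step at the heart of that workflow, embedding document chunks, storing the vectors, and finding the chunks nearest to a query, can be sketched in a few lines. The hash-based `embed` function below is a toy stand-in for illustration only; a real knowledge base would call an embedding model such as Cohere's through Amazon Bedrock and store vectors in a managed store such as Amazon Aurora PostgreSQL with pgvector.

```python
import hashlib
import math

def embed(text: str, dim: int = 16) -> list[float]:
    """Toy embedding: hash character trigrams into a fixed-size,
    L2-normalized vector. Stand-in for a real embedding model."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-length, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# "Vector store": embed each document chunk up front.
chunks = [
    "Our refund policy allows returns within 30 days.",
    "Support is available by email around the clock.",
    "Shipping takes three to five business days.",
]
store = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# The retrieved context is then prepended to the prompt sent to the FM.
context = retrieve("How long do refunds take?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: How long do refunds take?"
```

In a knowledge base, all of this, chunking, embedding, vector storage, and similarity search, is managed for you; the sketch only shows the shape of what happens behind the RetrieveAndGenerate flow.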