Luma AI’s Ray2 video model is now available in Amazon Bedrock

As we preannounced at AWS re:Invent 2024, you can now use the Luma AI Ray2 video model in Amazon Bedrock to generate high-quality video clips from text, creating captivating motion graphics from static concepts. AWS is the first and only cloud provider to offer fully managed models from Luma AI. On January 16, 2025, Luma AI introduced Luma Ray2, a large-scale generative video model capable of creating realistic visuals with natural, coherent motion and a strong understanding of text instructions. Luma Ray2 exhibits advanced capabilities as a result of being trained on Luma’s new multi-modal architecture. It scales to ten times the compute of Ray1, enabling it to produce 5- or 9-second video clips that show fast coherent motion, ultra-realistic details,…
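Because video generation takes longer than a single request, Bedrock exposes it through the asynchronous `StartAsyncInvoke` Runtime API, with output written to Amazon S3. The Python sketch below shows the general shape with boto3; note that the model ID (`luma.ray-v2:0`) and the input fields (`prompt`, `duration`, `resolution`) are assumptions for illustration, so check the Luma Ray2 page in the Bedrock model reference for the exact schema.

```python
import json

# Assumed model ID -- verify against the Bedrock model reference.
RAY2_MODEL_ID = "luma.ray-v2:0"


def build_ray2_request(prompt: str, duration: str = "5s", resolution: str = "720p") -> dict:
    """Build the model input for a text-to-video request (assumed field names)."""
    return {
        "prompt": prompt,
        "duration": duration,
        "resolution": resolution,
    }


def start_video_job(prompt: str, output_s3_uri: str) -> str:
    """Start an asynchronous video generation job and return its invocation ARN."""
    import boto3  # deferred import so the request builder has no dependencies

    client = boto3.client("bedrock-runtime")
    response = client.start_async_invoke(
        modelId=RAY2_MODEL_ID,
        modelInput=build_ray2_request(prompt),
        outputDataConfig={"s3OutputDataConfig": {"s3Uri": output_s3_uri}},
    )
    return response["invocationArn"]


request = build_ray2_request("A paper boat drifting down a rain-soaked street")
print(json.dumps(request))
```

You would then poll the job with `GetAsyncInvoke` (or list jobs with `ListAsyncInvokes`) until the clip lands in the S3 location you configured.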

Stable Diffusion 3.5 Large is now available in Amazon Bedrock

As we preannounced at AWS re:Invent 2024, you can now use Stable Diffusion 3.5 Large in Amazon Bedrock to generate high-quality images from text descriptions in a wide range of styles to accelerate the creation of concept art, visual effects, and detailed product imagery for customers in media, gaming, advertising, and retail. In October 2024, Stability AI introduced Stable Diffusion 3.5 Large, the most powerful model in the Stable Diffusion family at 8.1 billion parameters, trained on Amazon SageMaker HyperPod, with superior quality and prompt adherence. Stable Diffusion 3.5 Large can accelerate storyboarding, concept art creation, and rapid prototyping of visual effects. You can quickly generate high-quality 1-megapixel images for campaigns, social media posts, and advertisements, saving time and resources…
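Image models in Bedrock are typically called synchronously through the `InvokeModel` Runtime API. The Python sketch below builds a text-to-image request with boto3; the model ID (`stability.sd3-5-large-v1:0`) and the request and response field names are assumptions for illustration, so verify them against the Stability AI pages in the Bedrock documentation.

```python
import base64
import json

# Assumed model ID -- verify against the Bedrock documentation.
SD35_MODEL_ID = "stability.sd3-5-large-v1:0"


def build_sd35_request(prompt: str, aspect_ratio: str = "1:1", output_format: str = "png") -> dict:
    """Build the text-to-image request body (assumed field names)."""
    return {
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "output_format": output_format,
    }


def generate_image(prompt: str, out_path: str = "image.png") -> None:
    """Invoke the model and write the first returned image to disk."""
    import boto3  # deferred import so the request builder has no dependencies

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=SD35_MODEL_ID,
        body=json.dumps(build_sd35_request(prompt)),
    )
    payload = json.loads(response["body"].read())
    # Generated images come back base64-encoded (response field name assumed).
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(payload["images"][0]))


request = build_sd35_request("Concept art of a floating market at dusk", aspect_ratio="16:9")
print(json.dumps(request))
```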

New Amazon EC2 High Memory U7inh instance on HPE Server for large in-memory databases

Today we’re announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) U7inh instances, a new addition to the EC2 High Memory family, built in collaboration with Hewlett Packard Enterprise (HPE). Amazon EC2 U7inh instances run on the 16-socket HPE Compute Scale-up Server 3200 and are built on the AWS Nitro System to deliver a fully integrated and managed experience consistent with other EC2 instances. Powered by fourth-generation Intel® Xeon® Scalable processors (Sapphire Rapids), U7inh instances support 32 TB of memory and 1,920 vCPUs. This instance offers the highest compute performance and the largest compute and memory capacity in the Amazon Web Services (AWS) Cloud for running large, mission-critical database workloads, like SAP HANA. In May 2024, we launched U7i…
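Because U7inh is a standard Nitro-based instance, launching one follows the same workflow as any other EC2 instance. The AWS CLI sketch below is illustrative only: the instance type name, AMI ID, and placement group are placeholders I've assumed for this example, so confirm the exact instance type and its Region availability in the EC2 documentation before launching.

```shell
# Hypothetical launch of a U7inh High Memory instance with the AWS CLI.
# Instance type name, AMI ID, and placement group are placeholders.
aws ec2 run-instances \
  --instance-type u7inh-32tb-480xl \
  --image-id ami-0123456789abcdef0 \
  --count 1 \
  --placement GroupName=sap-hana-cluster \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=workload,Value=sap-hana}]'
```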

Accelerate foundation model training and fine-tuning with new Amazon SageMaker HyperPod recipes

Today, we’re announcing the general availability of Amazon SageMaker HyperPod recipes to help data scientists and developers of all skill levels get started training and fine-tuning foundation models (FMs) in minutes with state-of-the-art performance. They can now access optimized recipes for training and fine-tuning popular publicly available FMs such as Llama 3.1 405B, Llama 3.2 90B, or Mixtral 8x22B. At AWS re:Invent 2023, we introduced SageMaker HyperPod to reduce time to train FMs by up to 40 percent and scale across more than a thousand compute resources in parallel with preconfigured distributed training libraries. With SageMaker HyperPod, you can find the required accelerated compute resources for training, create optimal training plans, and run training workloads across different…
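One way to consume a recipe is through the SageMaker Python SDK, where the estimator can be pointed at a named recipe instead of a hand-written training script. The sketch below first collects the job configuration as a plain dict; the `training_recipe` parameter and the recipe path are assumptions on my part, so check the SageMaker HyperPod recipes repository for the recipe identifiers that actually ship.

```python
def recipe_job_config(recipe: str, instance_type: str, instance_count: int) -> dict:
    """Collect estimator arguments for a recipe-driven training job.

    The `training_recipe` key mirrors the (assumed) SageMaker Python SDK
    parameter of the same name; the recipe path is a hypothetical example.
    """
    return {
        "training_recipe": recipe,
        "instance_type": instance_type,
        "instance_count": instance_count,
        # Derive a readable job name from the last path segment of the recipe.
        "base_job_name": recipe.split("/")[-1],
    }


config = recipe_job_config(
    recipe="fine-tuning/llama/hf_llama3_405b_seq8k_gpu_qlora",  # hypothetical recipe path
    instance_type="ml.p5.48xlarge",
    instance_count=2,
)
print(config)

# With the SageMaker Python SDK installed and an execution role configured,
# the job would then be launched roughly like this:
#
#   from sagemaker.pytorch import PyTorch
#   estimator = PyTorch(role=execution_role, **config)
#   estimator.fit()
```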