Accelerate foundation model training and fine-tuning with new Amazon SageMaker HyperPod recipes

Today, we’re announcing the general availability of Amazon SageMaker HyperPod recipes to help data scientists and developers of all skill levels get started training and fine-tuning foundation models (FMs) in minutes with state-of-the-art performance. They can now access optimized recipes for training and fine-tuning popular publicly available FMs such as Llama 3.1 405B, Llama 3.2 90B, or Mixtral 8x22B. At AWS re:Invent 2023, we introduced SageMaker HyperPod to reduce the time to train FMs by up to 40 percent and scale across more than a thousand compute resources in parallel with preconfigured distributed training libraries. With SageMaker HyperPod, you can find the required accelerated compute resources for training, create optimal training plans, and run training workloads across different…
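Conceptually, a recipe is a preconfigured set of training parameters tuned for a given model and technique, on top of which you apply only the overrides you need. The sketch below illustrates that idea only; the field names and default values are hypothetical and are not the actual SageMaker HyperPod recipe schema.

```python
# Illustrative sketch of the recipe concept: preconfigured defaults plus
# selective user overrides. All keys and values here are hypothetical,
# not the real SageMaker HyperPod recipe schema.

BASE_RECIPE = {
    "model": "llama-3.1-405b",      # target foundation model
    "technique": "fine-tuning",     # training vs. fine-tuning
    "sequence_length": 8192,        # tokens per training sequence
    "learning_rate": 1e-4,
    "instance_type": "ml.p5.48xlarge",
}

def apply_overrides(recipe: dict, overrides: dict) -> dict:
    """Return a new config with user overrides layered on the recipe defaults."""
    unknown = set(overrides) - set(recipe)
    if unknown:
        raise KeyError(f"Unknown recipe keys: {sorted(unknown)}")
    return {**recipe, **overrides}

# Keep the tuned defaults, change only the learning rate.
config = apply_overrides(BASE_RECIPE, {"learning_rate": 2e-4})
```

The point of the pattern is that the tuned defaults stay intact unless explicitly overridden, so users at any skill level start from a known-good configuration.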

Meet your training timelines and budgets with new Amazon SageMaker HyperPod flexible training plans

Today, we’re announcing the general availability of Amazon SageMaker HyperPod flexible training plans to help data scientists train large foundation models (FMs) within their timelines and budgets, saving them weeks of effort otherwise spent managing the training process around compute availability. At AWS re:Invent 2023, we introduced SageMaker HyperPod to reduce the time to train FMs by up to 40 percent and scale across thousands of compute resources in parallel with preconfigured distributed training libraries and built-in resiliency. Most generative AI model development tasks need accelerated compute resources in parallel, and customers struggle to find timely access to those resources to complete their training within their timeline and budget constraints. With today’s announcement, you can find the required accelerated…

Maximize accelerator utilization for model development with new Amazon SageMaker HyperPod task governance

Today, we’re announcing the general availability of Amazon SageMaker HyperPod task governance, a new capability to centrally manage and maximize GPU and AWS Trainium utilization across generative AI model development tasks, such as training, fine-tuning, and inference. Customers tell us that they’re rapidly increasing investment in generative AI projects, but they face challenges in efficiently allocating limited compute resources. The lack of dynamic, centralized governance for resource allocation leads to inefficiencies, with some projects underutilizing resources while others stall. This situation burdens administrators with constant replanning, causes delays for data scientists and developers, and results in delayed delivery of AI innovations and cost overruns due to inefficient use of resources. With SageMaker HyperPod task governance, you can accelerate time…
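The core idea behind centralized task governance is priority-based allocation of a shared accelerator pool: high-priority tasks are served first, and lower-priority work receives whatever capacity remains. The following is a minimal conceptual sketch of that scheduling idea, not the SageMaker HyperPod task governance API; the task names, priorities, and pool size are invented for illustration.

```python
# Conceptual sketch: priority-based allocation of a fixed accelerator pool
# across model-development tasks. This is NOT the SageMaker API; all names
# and numbers are hypothetical.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                       # lower number = higher priority
    name: str = field(compare=False)
    gpus: int = field(compare=False)    # accelerators requested

def allocate(tasks: list[Task], total_gpus: int) -> dict[str, int]:
    """Grant accelerators to tasks in priority order until the pool runs out."""
    heap = list(tasks)
    heapq.heapify(heap)                 # orders by priority field only
    grants: dict[str, int] = {}
    free = total_gpus
    while heap and free > 0:
        task = heapq.heappop(heap)
        granted = min(task.gpus, free)  # partial grant when pool is short
        grants[task.name] = granted
        free -= granted
    return grants

# A 16-GPU pool shared across training, fine-tuning, and inference tasks.
grants = allocate(
    [
        Task(2, "exploratory-ft", 8),
        Task(1, "prod-training", 12),
        Task(3, "inference", 4),
    ],
    total_gpus=16,
)
```

Here the production training task is served in full, the fine-tuning task gets a partial grant, and the lowest-priority task waits, which is the behavior centralized governance automates so administrators no longer replan allocations by hand.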

New Amazon Q Developer agent capabilities include generating documentation, code reviews, and unit tests

Last year at AWS re:Invent, we previewed Amazon Q Developer, a generative AI–powered assistant for designing, building, testing, deploying, and maintaining software across integrated development environments (IDEs) such as Visual Studio, Visual Studio Code, JetBrains IDEs, Eclipse (preview), JupyterLab, Amazon EMR Studio, and AWS Glue Studio. You can also use Amazon Q Developer in the AWS Management Console, AWS Console Mobile Application, Amazon CodeCatalyst, AWS Support, the AWS website, or through Slack and Microsoft Teams with AWS Chatbot. Continuing this rapid pace of innovation, we announced the general availability of Amazon Q Developer in April and added more capabilities, such as support for the AWS Command Line Interface (AWS CLI), Amazon SageMaker Studio, and AWS CloudShell, as well as inline chat for seamless…