Join us for the AWS Developer Day on February 20! This virtual event is designed to help developers and teams incorporate cutting-edge yet responsible generative AI across their development lifecycle to accelerate innovation. In his keynote, Jeff Barr, Vice President of AWS Evangelism, shares his thoughts on the next generation of software development built on generative AI, the skills needed to thrive in this changing environment, and how he sees it evolving in the future. Get a first look at technical deep dives and product updates for Amazon Q Developer, AWS Amplify, and GitLab Duo with Amazon Q. You get the chance to explore real-world use cases, live coding demos, interactive sessions, and community spotlight sessions with Christian Bonzelet (AWS…
Tag: Channy Yun (윤석찬)
DeepSeek-R1 models now available on AWS
During this past AWS re:Invent, Amazon CEO Andy Jassy shared valuable lessons learned from Amazon’s own experience developing nearly 1,000 generative AI applications across the company. Drawing from this extensive scale of AI deployment, Jassy offered three key observations that have shaped Amazon’s approach to enterprise AI implementation. First, as generative AI applications reach scale, the cost of compute really matters, and people are hungry for better price performance. Second, it is actually quite difficult to build a really good generative AI application. Third, there is great diversity in the models being used when builders are given the freedom to pick what they want. It doesn’t surprise us, because we keep learning the…
Luma AI’s Ray2 video model is now available in Amazon Bedrock
As we preannounced at AWS re:Invent 2024, you can now use the Luma AI Ray2 video model in Amazon Bedrock to generate high-quality video clips from text, creating captivating motion graphics from static concepts. AWS is the first and only cloud provider to offer fully managed models from Luma AI. On January 16, 2025, Luma AI introduced Luma Ray2, a large-scale generative video model capable of creating realistic visuals with natural, coherent motion and a strong understanding of text instructions. Luma Ray2 exhibits advanced capabilities as a result of being trained on Luma’s new multi-modal architecture. It scales to ten times the compute of Ray1, enabling it to produce 5- or 9-second video clips that show fast, coherent motion, ultra-realistic details,…
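Video generation in Amazon Bedrock runs as an asynchronous job that writes its output to Amazon S3. The sketch below shows the general shape of that flow using the Bedrock Runtime `start_async_invoke` API; the model ID `luma.ray-v2:0` and the payload fields are assumptions for illustration — confirm the exact identifier and request schema in the Bedrock model catalog before relying on them.

```python
# Minimal sketch of submitting an async video-generation job to Amazon Bedrock.
# The model ID and request fields below are assumed, not confirmed -- check the
# Bedrock console / documentation for the real values for Luma Ray2.

MODEL_ID = "luma.ray-v2:0"  # hypothetical identifier


def build_ray2_request(prompt: str) -> dict:
    """Build the (assumed) model input for a text-to-video request."""
    return {"prompt": prompt}


def start_video_job(prompt: str, s3_output_uri: str, region: str = "us-west-2") -> str:
    """Submit the job via the Bedrock Runtime async API; returns the invocation ARN.

    Requires AWS credentials with Bedrock model access and write access to the
    given S3 location.
    """
    import boto3  # imported here so the pure request builder stays dependency-free

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.start_async_invoke(
        modelId=MODEL_ID,
        modelInput=build_ray2_request(prompt),
        outputDataConfig={"s3OutputDataConfig": {"s3Uri": s3_output_uri}},
    )
    return response["invocationArn"]
```

You would then poll `get_async_invoke` with the returned ARN until the job completes and the clip lands in the S3 bucket.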
Stable Diffusion 3.5 Large is now available in Amazon Bedrock
As we preannounced at AWS re:Invent 2024, you can now use Stable Diffusion 3.5 Large in Amazon Bedrock to generate high-quality images from text descriptions in a wide range of styles, accelerating the creation of concept art, visual effects, and detailed product imagery for customers in media, gaming, advertising, and retail. In October 2024, Stability AI introduced Stable Diffusion 3.5 Large, the most powerful model in the Stable Diffusion family at 8.1 billion parameters, trained on Amazon SageMaker HyperPod, with superior quality and prompt adherence. Stable Diffusion 3.5 Large can accelerate storyboarding, concept art creation, and rapid prototyping of visual effects. You can quickly generate high-quality 1-megapixel images for campaigns, social media posts, and advertisements, saving time and resources…
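Image models in Amazon Bedrock are called synchronously through the Bedrock Runtime `invoke_model` API with a JSON request body. Below is a minimal sketch of that pattern; the model ID `stability.sd3-5-large-v1:0` and the request/response fields are assumptions for illustration — verify them against the Bedrock model catalog and Stability AI's request schema.

```python
import base64
import json

# Hypothetical model ID and payload shape -- confirm in the Bedrock console
# before using; Stability models have versioned identifiers.
MODEL_ID = "stability.sd3-5-large-v1:0"


def build_sd35_request(prompt: str, aspect_ratio: str = "1:1") -> str:
    """Serialize an (assumed) text-to-image request body as JSON."""
    return json.dumps({
        "prompt": prompt,
        "mode": "text-to-image",
        "aspect_ratio": aspect_ratio,
        "output_format": "png",
    })


def generate_image(prompt: str, region: str = "us-west-2") -> bytes:
    """Invoke the model and return decoded PNG bytes.

    Requires AWS credentials with access granted to the model in Bedrock.
    """
    import boto3  # imported here so the request builder itself has no AWS dependency

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(modelId=MODEL_ID, body=build_sd35_request(prompt))
    payload = json.loads(response["body"].read())
    # The response is assumed to carry base64-encoded images in an "images" list.
    return base64.b64decode(payload["images"][0])
```

A typical call would be `generate_image("concept art of a desert outpost at dusk")`, writing the returned bytes to a `.png` file.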