New – Amazon SageMaker Ground Truth Now Supports Synthetic Data Generation

Today, I am happy to announce that you can now use Amazon SageMaker Ground Truth to generate labeled synthetic image data. Building machine learning (ML) models is an iterative process that, at a high level, starts with data collection and preparation, followed by model training and model deployment. The first step in particular, collecting large, diverse, and accurately labeled datasets for your model training, is often challenging and time-consuming. Let’s take computer vision (CV) applications as an example. CV applications have come to play a key role in the industrial landscape. They help improve manufacturing quality or automate warehouses. Yet, collecting the data to train these CV models often takes a long time or can be impossible. As a data…

AWS Week in Review – June 6, 2022

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS! I’ve just come back from a long (extended) holiday weekend here in the US and I’m still catching up on all the AWS launches that happened this past week. I’m particularly excited about some of the data, machine learning, and quantum computing news. Let’s have a look! Last Week’s Launches The launches that caught my attention last week are the following: Amazon EMR Serverless is now generally available – Amazon EMR Serverless allows you to run big data applications using open-source frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters.…
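To give a sense of what "without configuring, managing, and scaling clusters" looks like in practice, here is a minimal sketch that creates an EMR Serverless Spark application and submits a job run with boto3. The application name, release label, IAM role ARN, and S3 script path are illustrative placeholders, not values from the announcement.

```python
# Minimal sketch: create an EMR Serverless Spark application and submit a job.
# All names, ARNs, and S3 paths below are placeholders.
import boto3

emr_serverless = boto3.client("emr-serverless")

# EMR Serverless provisions and scales the underlying capacity automatically;
# you only describe the application and the job you want to run.
app = emr_serverless.create_application(
    name="my-spark-app",            # placeholder application name
    releaseLabel="emr-6.6.0",       # assumed release label
    type="SPARK",
)

job = emr_serverless.start_job_run(
    applicationId=app["applicationId"],
    executionRoleArn="arn:aws:iam::123456789012:role/EMRServerlessJobRole",  # placeholder
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/scripts/word_count.py",  # placeholder script
            "sparkSubmitParameters": "--conf spark.executor.memory=4g",
        }
    },
)
print("Started job run:", job["jobRunId"])
```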

Amazon SageMaker Serverless Inference – Machine Learning Inference without Worrying about Servers

In December 2021, we introduced Amazon SageMaker Serverless Inference (in preview) as a new option in Amazon SageMaker to deploy machine learning (ML) models for inference without having to configure or manage the underlying infrastructure. Today, I’m happy to announce that Amazon SageMaker Serverless Inference is now generally available (GA). Different ML inference use cases place different requirements on your model hosting infrastructure. If you work on use cases such as ad serving, fraud detection, or personalized product recommendations, you are most likely looking for API-based, online inference with response times as low as a few milliseconds. If you work with large ML models, such as in computer vision (CV) applications, you might require infrastructure that is optimized to run…
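As a minimal sketch of the serverless option using the SageMaker Python SDK: you attach a ServerlessInferenceConfig to a model and deploy it without choosing instance types. The container image URI, model artifact location, IAM role, endpoint name, and the memory and concurrency values below are illustrative assumptions.

```python
# Minimal sketch: deploy a model to a SageMaker serverless endpoint.
# Image URI, model data, role ARN, and endpoint name are placeholders.
import sagemaker
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

session = sagemaker.Session()

model = Model(
    image_uri="<inference-container-image-uri>",   # placeholder container image
    model_data="s3://<bucket>/model.tar.gz",       # placeholder model artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
    sagemaker_session=session,
)

# Instead of instance types and counts, you pick a memory size and a
# maximum number of concurrent invocations; capacity scales on demand.
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,  # illustrative value
    max_concurrency=5,       # illustrative value
)

model.deploy(
    serverless_inference_config=serverless_config,
    endpoint_name="my-serverless-endpoint",  # placeholder name
)
# The endpoint can then be invoked like any other SageMaker endpoint,
# for example via the sagemaker-runtime InvokeEndpoint API.
```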

AWS Week in Review – April 18, 2022

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS! Here we are with another roundup of the most significant AWS launches from the previous week. Among the news are a new deployment option for Amazon FSx for NetApp ONTAP, performance and scaling improvements in AWS Fargate, and an update on the AWS AI & ML Scholarship program. Last Week’s Launches Here are some launches that caught my attention last week: Amazon FSx for NetApp ONTAP introduces a single Availability Zone (AZ) deployment option – Amazon FSx for NetApp ONTAP allows you to launch and run fully managed ONTAP file systems in the…
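As a small illustrative sketch (not taken from the announcement itself), the single-AZ option corresponds to the SINGLE_AZ_1 deployment type in the FSx API. The boto3 call below uses a placeholder subnet ID along with assumed storage and throughput values.

```python
# Minimal sketch: create a single-AZ Amazon FSx for NetApp ONTAP file system.
# Subnet ID, storage capacity, and throughput capacity are placeholders.
import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                     # GiB, illustrative size
    SubnetIds=["subnet-0123456789abcdef0"],   # one subnet for a single-AZ deployment
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_1",      # the new single-AZ option
        "ThroughputCapacity": 128,            # MB/s, illustrative value
    },
)
print("Creating file system:", response["FileSystem"]["FileSystemId"])
```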