With Agents for Amazon Bedrock, generative artificial intelligence (AI) applications can run multistep tasks across different systems and data sources. A couple of months back, we simplified the creation and configuration of agents. Today, we are introducing in preview two new fully managed capabilities: Retain memory across multiple interactions – Agents can now retain a summary of their conversations with each user to provide a smooth, adaptive experience, especially for complex, multistep tasks such as user-facing interactions and enterprise automation solutions like booking flights or processing insurance claims. Support for code interpretation – Agents can now dynamically generate and run code snippets within a secure, sandboxed environment to address complex use cases such as…
Category: AWS
Reposts from Amazon Web Services (AWS).
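As a rough illustration of the memory retention capability described above, here is a minimal sketch using the boto3 invoke_agent call. The agent, alias, session, and memory identifiers are placeholders, and the memoryId parameter reflects my assumption about how a cross-session memory handle would be passed; treat this as a sketch rather than the announced API.

```python
import boto3

# Runtime client for invoking agents; region and IDs below are placeholders.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Assumption: reusing the same memoryId across sessions lets the agent
# recall the summarized history of earlier conversations with this user.
response = client.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId="session-001",
    memoryId="user-42-memory",      # assumed parameter for memory retention
    inputText="Book me a flight to Seattle next Tuesday.",
)

# The response streams completion events; collect the text chunks.
completion = ""
for event in response.get("completion", []):
    chunk = event.get("chunk", {})
    completion += chunk.get("bytes", b"").decode("utf-8")

print(completion)
```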
Guardrails for Amazon Bedrock can now detect hallucinations and safeguard apps built using custom or third-party FMs
Guardrails for Amazon Bedrock enables you to implement safeguards based on your application requirements and your company’s responsible artificial intelligence (AI) policies. It can help prevent undesirable content, block prompt attacks (prompt injection and jailbreaks), and remove sensitive information for privacy. You can combine multiple policy types to configure these safeguards for different scenarios and apply them across foundation models (FMs) on Amazon Bedrock, as well as custom and third-party FMs outside of Amazon Bedrock. Guardrails can also be integrated with Agents for Amazon Bedrock and Knowledge Bases for Amazon Bedrock. Guardrails for Amazon Bedrock provides additional customizable safeguards on top of the native protections offered by FMs, delivering safety features that are among the best in the industry: Blocks as…
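For apps built on custom or third-party FMs outside of Amazon Bedrock, the standalone ApplyGuardrail API can evaluate content independently of any model invocation. Below is a minimal, hedged sketch with boto3; the guardrail identifier and version are placeholders, and the exact shape of the response fields is an assumption.

```python
import boto3

# Guardrails are applied through the Bedrock runtime; IDs below are placeholders.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def check_with_guardrail(text, source="INPUT"):
    """Evaluate text against a guardrail before (or after) calling any FM,
    including custom or third-party models hosted outside Amazon Bedrock."""
    response = bedrock_runtime.apply_guardrail(
        guardrailIdentifier="GUARDRAIL_ID",  # placeholder
        guardrailVersion="1",                # placeholder
        source=source,  # "INPUT" for user prompts, "OUTPUT" for model responses
        content=[{"text": {"text": text}}],
    )
    # Assumption: the response reports whether the guardrail intervened
    # and, if so, the masked or blocked output text.
    return response

result = check_with_guardrail("My card number is 4111 1111 1111 1111")
print(result.get("action"), result.get("outputs"))
```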
Knowledge Bases for Amazon Bedrock now supports additional data connectors (in preview)
Using Knowledge Bases for Amazon Bedrock, foundation models (FMs) and agents can retrieve contextual information from your company’s private data sources for Retrieval Augmented Generation (RAG). RAG helps FMs deliver more relevant, accurate, and customized responses. Over the past months, we’ve continuously added choices of embedding models, vector stores, and FMs to Knowledge Bases. Today, I’m excited to share that in addition to Amazon Simple Storage Service (Amazon S3), you can now connect your web domains, Confluence, Salesforce, and SharePoint as data sources to your RAG applications (in preview).

New data source connectors for web domains, Confluence, Salesforce, and SharePoint
By including your web domains, you can give your RAG applications access to your public data, such as your company’s…
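As a hedged sketch of how the new web domain connector might be attached to an existing knowledge base with boto3: the dataSourceConfiguration of type "WEB" with seed URLs, the knowledge base ID, and the URL shown here are illustrative assumptions, not taken from the announcement.

```python
import boto3

# Control-plane client for managing knowledge bases; IDs and URLs are placeholders.
bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Assumption: the web crawler connector is registered through a
# dataSourceConfiguration of type "WEB" with the seed URLs to crawl.
response = bedrock_agent.create_data_source(
    knowledgeBaseId="KB_ID",       # placeholder for an existing knowledge base
    name="company-public-site",
    dataSourceConfiguration={
        "type": "WEB",
        "webConfiguration": {
            "sourceConfiguration": {
                "urlConfiguration": {
                    "seedUrls": [{"url": "https://example.com"}]
                }
            }
        },
    },
)

print(response["dataSource"]["dataSourceId"])
```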
Introducing Amazon Q Developer in SageMaker Studio to streamline ML workflows
Today, we are announcing a new capability in Amazon SageMaker Studio that simplifies and accelerates the machine learning (ML) development lifecycle. Amazon Q Developer in SageMaker Studio is a generative AI-powered assistant built natively into the SageMaker JupyterLab experience. The assistant takes your natural language inputs and crafts a tailored execution plan for your ML development lifecycle by recommending the best tools for each task, providing step-by-step guidance, generating code to get started, and offering troubleshooting assistance when you encounter errors. It also helps with challenges such as breaking complex ML problems into smaller tasks and finding relevant information in the documentation. You may be a first-time user evaluating Amazon SageMaker for generative artificial intelligence (generative AI)…