<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <link href="http://pubsubhubbub.appspot.com/" rel="hub"/>
  <link href="https://f43.me/aws-blogs.xml" rel="self"/>
  <title>AWS Blogs</title>
  <subtitle>AWS Blog</subtitle>
  <link href="http://aws.amazon.com"/>
  <updated>2026-03-13T05:22:00+01:00</updated>
  <id>http://aws.amazon.com/</id>
  <author>
    <name>AWS Blogs</name>
  </author>
  <generator>f43.me</generator>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-account-regional-namespaces-for-amazon-s3-general-purpose-buckets/</id>
    <title><![CDATA[Introducing account regional namespaces for Amazon S3 general purpose buckets]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing a new feature of <a href="https://aws.amazon.com/s3/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a> you can use to create general purpose buckets in your own account regional namespace, simplifying bucket creation and management as your data storage needs grow in size and scope. You can create general purpose bucket names across multiple AWS Regions with assurance that your desired bucket names will always be available for you to use.</p><p>With this feature, you can predictably name and create general purpose buckets in your own account regional namespace by appending your account’s unique suffix to your requested bucket name. For example, I can create the bucket <code>mybucket-123456789012-us-east-1-an</code> in my account regional namespace. <code>mybucket</code> is the bucket name prefix that I specified; then I add my account regional suffix to the requested bucket name: <code>-123456789012-us-east-1-an</code>. 
If another account tries to create buckets using my account’s suffix, their requests will be automatically rejected.</p><p>Your security teams can use <a href="https://aws.amazon.com/iam/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Identity and Access Management (AWS IAM)</a> policies and <a href="https://aws.amazon.com/organizations/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Organizations</a> service control policies to enforce that your employees only create buckets in their account regional namespace using the new <code>s3:x-amz-bucket-namespace</code> condition key, helping teams adopt the account regional namespace across your organization.</p><p><strong class="c6">Create your S3 bucket with account regional namespace in action</strong><br />To get started, choose <strong>Create bucket</strong> in the <a href="https://console.aws.amazon.com/s3?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon S3 console</a>. To create your bucket in your account regional namespace, choose <strong>Account regional namespace</strong>. If you choose this option, you can create your bucket with any name that is unique to your account and region.</p><p>This configuration supports all of the same features as general purpose buckets in the global namespace. The only difference is that only your account can use bucket names with your account’s suffix. 
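</p><p>For example, your security teams could require the account regional namespace with a policy along these lines (a sketch: the <code>s3:x-amz-bucket-namespace</code> condition key comes from this announcement, but the exact policy shape shown here is an assumption):</p><pre class="lang-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireAccountRegionalNamespace",
      "Effect": "Deny",
      "Action": "s3:CreateBucket",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-bucket-namespace": "account-regional"
        }
      }
    }
  ]
}</pre><p>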
The bucket name prefix and the account regional suffix combined must be between 3 and 63 characters long.</p><p><img class="aligncenter size-full wp-image-102981 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/12/2026-s3-bucket-account-regional-namespace.png" alt="" width="2098" height="2381" /></p><p>Using the <a href="https://aws.amazon.com/cli/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a>, you can create a bucket with account regional namespace by specifying the <code>x-amz-bucket-namespace:account-regional</code> request header and providing a compatible bucket name.</p><pre class="lang-bash">$ aws s3api create-bucket --bucket mybucket-123456789012-us-east-1-an \
   --bucket-namespace account-regional \
   --region us-east-1</pre><p>You can use the <a href="https://aws.amazon.com/sdk-for-python/">AWS SDK for Python (Boto3)</a> to create a bucket in the account regional namespace using the <code>CreateBucket</code> API.</p><pre class="lang-python">import boto3
class AccountRegionalBucketCreator:
    """Creates S3 buckets using account-regional namespace feature."""
    ACCOUNT_REGIONAL_SUFFIX = "-an"
    def __init__(self, s3_client, sts_client):
        self.s3_client = s3_client
        self.sts_client = sts_client
    def create_account_regional_bucket(self, prefix):
        """
        Creates an account-regional S3 bucket with the specified prefix.
        Resolves caller AWS account ID using the STS GetCallerIdentity API.
        Format: {prefix}-{account_id}-{region}-an
        """
        account_id = self.sts_client.get_caller_identity()['Account']
        region = self.s3_client.meta.region_name
        bucket_name = self._generate_account_regional_bucket_name(
            prefix, account_id, region
        )
        params = {
            "Bucket": bucket_name,
            "BucketNamespace": "account-regional"
        }
        if region != "us-east-1":
            params["CreateBucketConfiguration"] = {
                "LocationConstraint": region
            }
        return self.s3_client.create_bucket(**params)
    def _generate_account_regional_bucket_name(self, prefix, account_id, region):
        return f"{prefix}-{account_id}-{region}{self.ACCOUNT_REGIONAL_SUFFIX}"
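# Note (an addition to the original sample): the combined bucket name prefix and
# account regional suffix must be between 3 and 63 characters in total, per the
# naming rule described above; a quick pre-flight check:
def is_valid_account_regional_name(bucket_name):
    return 3 <= len(bucket_name) <= 63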
if __name__ == '__main__':
    s3_client = boto3.client('s3')
    sts_client = boto3.client('sts')
    creator = AccountRegionalBucketCreator(s3_client, sts_client)
    response = creator.create_account_regional_bucket('test-python-sdk')
    print(f"Bucket created: {response}")</pre><p>You can update your infrastructure as code (IaC) tools, such as <a href="https://aws.amazon.com/cloudformation/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS CloudFormation</a>, to simplify creating buckets in your account regional namespace. AWS CloudFormation offers the pseudo parameters, <code>AWS::AccountId</code> and <code>AWS::Region</code>, making it easy to build <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-s3-bucket.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">CloudFormation templates</a> that create account regional namespace buckets.</p><p>The following example demonstrates how you can update your existing CloudFormation templates to start creating buckets in your account regional namespace:</p><pre class="lang-json">BucketName: !Sub "amzn-s3-demo-bucket-${AWS::AccountId}-${AWS::Region}-an"
BucketNamespace: "account-regional"</pre><p>Alternatively, you can use the <code>BucketNamePrefix</code> property in your CloudFormation template. With <code>BucketNamePrefix</code>, you provide only the customer-defined portion of the bucket name, and S3 automatically adds the account regional namespace suffix based on the requesting AWS account and the specified Region.</p><pre class="lang-json">BucketNamePrefix: 'amzn-s3-demo-bucket'
BucketNamespace: "account-regional"
</pre><p>Using these options, you can build a custom CloudFormation template to easily create general purpose buckets in your account regional namespace.</p><p><strong>Things to know</strong><br />You can’t rename your existing global buckets to bucket names with account regional namespace, but you can create new general purpose buckets in your account regional namespace. Also, the account regional namespace is only supported for general purpose buckets. S3 table buckets and vector buckets already exist in an account-level namespace and S3 directory buckets exist in a zonal namespace.</p><p>To learn more, visit <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/gpbucketnamespaces.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Namespaces for general purpose buckets</a> in the Amazon S3 User Guide.</p><p><strong class="c6">Now available</strong><br />Creating general purpose buckets in your account regional namespace in Amazon S3 is now available in 37 AWS Regions including the AWS China and AWS GovCloud (US) Regions. 
You can create general purpose buckets in your account regional namespace at no additional cost.</p><p>Give it a try in the <a href="https://console.aws.amazon.com/s3?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon S3 console</a> today and send feedback to <a href="https://repost.aws/tags/TADSTjraA0Q4-a1dxk6eUYaw/amazon-simple-storage-service?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS re:Post for Amazon S3</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="6eb1e990-d30d-4de9-ac14-de5c634b2689" data-title="Introducing account regional namespaces for Amazon S3 general purpose buckets" data-url="https://aws.amazon.com/blogs/aws/introducing-account-regional-namespaces-for-amazon-s3-general-purpose-buckets/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-account-regional-namespaces-for-amazon-s3-general-purpose-buckets/"/>
    <updated>2026-03-12T22:18:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-connect-health-bedrock-agentcore-policy-gameday-europe-and-more-march-9-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: Amazon Connect Health, Bedrock AgentCore Policy, GameDay Europe, and more (March 9, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Fiti AWS Student Community Kenya!</p><p>Last week was an incredible whirlwind: a round of meetups, hands-on workshops, and career discussions across Kenya that culminated with the AWS Student Community Day at <a href="https://www.linkedin.com/school/meru-university-of-science-and-technology-must/">Meru University of Science and Technology</a>, with keynotes from my colleagues <a href="https://www.linkedin.com/in/veliswa-boya/">Veliswa</a> and <a href="https://www.linkedin.com/in/tiffanysouterre/">Tiffany</a>, and sessions on everything from GitOps to cloud-native engineering, and a whole lot of AI agent building.</p><table><tbody><tr><td><img class="aligncenter wp-image-103322 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/09/1772482486843-1-1024x500.jpg" alt="" width="1024" height="500" /></td>
<td><img class="aligncenter wp-image-103323 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/09/2026-jaws-days-1-1024x500.jpg" alt="" width="1024" height="500" /></td>
</tr></tbody></table><p><a href="https://jawsdays2026.jaws-ug.jp/floormap/">JAWS Days 2026</a> is the largest AWS Community Day in the world, with over 1,500 attendees on March 7th. This event started with a keynote speech by <a href="https://www.linkedin.com/in/jeffbarr/">Jeff Barr</a> on building an AI-driven development team, and included over 100 technical and community experience sessions, lightning talks, and workshops, as well as Game Days, Builders Card Challenges, and networking parties.</p><p>Now, let’s get into this week’s AWS news…</p><p><strong>Last week’s launches</strong><br />Here are some launches and updates from this past week that caught my attention:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-connect-health-agentic-ai-healthcare/">Introducing Amazon Connect Health, Agentic AI Built for Healthcare</a> — Amazon Connect Health is now generally available with five purpose-built AI agents for healthcare: patient verification, appointment management, patient insights, ambient documentation, and medical coding. All features are HIPAA-eligible and deployable within existing clinical workflows in days.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/policy-amazon-bedrock-agentcore-generally-available/">Policy in Amazon Bedrock AgentCore is now generally available</a> — You can now use centralized, fine-grained controls for agent-tool interactions that operate outside your agent code. Security and compliance teams can define tool access and input validation rules using natural language that automatically converts to Cedar, the AWS open-source policy language.</li>
<li><a href="https://aws.amazon.com/blogs/aws/introducing-openclaw-on-amazon-lightsail-to-run-your-autonomous-private-ai-agents/">Introducing OpenClaw on Amazon Lightsail to run your autonomous private AI agents</a> — You can deploy a private AI assistant on your own cloud infrastructure with built-in security controls, sandboxed agent sessions, one-click HTTPS, and device pairing authentication. Amazon Bedrock serves as the default model provider, and you can connect to Slack, Telegram, WhatsApp, and Discord.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/vpc-encryption-controls-pricing/">AWS announces pricing for VPC Encryption Controls</a> — Starting March 1, 2026, VPC Encryption Controls transitions from free preview to a paid feature. You can audit and enforce encryption-in-transit of all traffic flows within and across VPCs in a region, with monitor mode to detect unencrypted traffic and enforce mode to prevent it.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/dbsp-opensearch-service-neptune-analytics/">Database Savings Plans now supports Amazon OpenSearch Service and Amazon Neptune Analytics</a> — You can save up to 35% on eligible serverless and provisioned instance usage with a one-year commitment. Savings Plans automatically apply regardless of engine, instance family, size, or AWS Region.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/elastic-beanstalk-ai-analysis/">AWS Elastic Beanstalk now offers AI-powered environment analysis</a> — When your environment health is degraded, Elastic Beanstalk can now collect recent events, instance health, and logs and send them to Amazon Bedrock for analysis, providing step-by-step troubleshooting recommendations tailored to your environment’s current state.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/aws-simplifies-iam-role-creation-and-setup/">AWS simplifies IAM role creation and setup in service workflows</a> — You can now create and configure IAM roles directly within service workflows through a new in-console panel, without switching to the IAM console. The feature supports Amazon EC2, Lambda, EKS, ECS, Glue, CloudFormation, and more.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/lambda-durable-kiro-power/">Accelerate Lambda durable functions development with new Kiro power</a> — You can now build resilient, long-running multi-step applications and AI workflows faster with AI agent-assisted development in Kiro. The power dynamically loads guidance on replay models, step and wait operations, concurrent execution patterns, error handling, and deployment best practices.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-gamelift-servers-ddos-protection/">Amazon GameLift Servers launches DDoS Protection</a> — You can now protect session-based multiplayer games against DDoS attacks with a co-located relay network that authenticates client traffic using access tokens and enforces per-player traffic limits, at no additional cost to GameLift Servers customers.</li>
</ul><p>For a full list of AWS announcements, be sure to keep an eye on the <a href="https://aws.amazon.com/new/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">What’s New with AWS</a> page.</p><p><strong>From AWS community</strong><br />Here are my personal favorite posts from AWS community and my colleagues:</p><ul><li><a href="https://builder.aws.com/content/3AhVKdfIvhOgaTT8Eu1PXzzLxRm/i-built-a-portable-ai-memory-layer-with-mcp-aws-bedrock-and-a-chrome-extension">I Built a Portable AI Memory Layer with MCP, AWS Bedrock, and a Chrome Extension</a> — Learn how to build a persistent memory layer for AI agents using MCP and Amazon Bedrock, packaged as a Chrome extension that carries context across sessions and applications.</li>
<li><a href="https://dev.to/aws/when-the-model-is-the-machine-25g4">When the Model Is the Machine</a> — Mike Chambers built an experimental app where an AI agent generates a complete, interactive web application at runtime from a single prompt — no codebase, no framework, no persistent state. A thought-provoking exploration of what happens when the model becomes the runtime.</li>
</ul><p><strong>Upcoming AWS events</strong><br />Check your calendar and sign up for upcoming AWS events:</p><ul><li><a href="https://builder.aws.com/content/39zVQT5ykq9bhnngp3kPeQNqjOc/aws-community-gameday-europe-on-the-1703-think-you-know-aws-come-prove-it">AWS Community GameDay Europe</a> — Think you know AWS? Prove it at the AWS Community GameDay Europe on March 17, a gamified learning event where teams compete to solve real-world technical challenges using AWS services.</li>
<li><a href="https://aws.amazon.com/events/aws-at-nvidia-gtc26/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS at NVIDIA GTC 2026</a> — Join us at our AWS sessions, booths, demos, and ancillary events at NVIDIA GTC 2026, March 16 – 19, 2026, in San Jose. You can receive 20% off event passes through AWS and request a 1:1 meeting at GTC.</li>
<li><a href="https://aws.amazon.com/events/summits/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Summits</a> — Join AWS Summits in 2026: free in-person events where you can explore emerging cloud and AI technologies, learn best practices, and network with industry peers and experts. Upcoming Summits include <a href="https://aws.amazon.com/events/summits/paris/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Paris</a> (April 1), <a href="https://aws.amazon.com/events/summits/london/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">London</a> (April 22), and <a href="https://aws.amazon.com/events/summits/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Bengaluru</a> (April 23–24).</li>
<li><a href="https://aws.amazon.com/events/community-day/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Community Days</a> — Community-led conferences where content is planned, sourced, and delivered by community leaders. Upcoming events include <a href="https://www.awscommunityday.sk/">Slovakia</a> (March 11), <a href="https://www.awsugpune.in/">Pune</a> (March 21), and the AWSome Women Summit LATAM in <a href="https://www.awswomensummitlatam.com/home.html">Mexico City</a> (March 28).</li>
</ul><p>Browse here for upcoming <a href="https://aws.amazon.com/events/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS led in-person and virtual events</a>, <a href="https://aws.amazon.com/startups/events?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">startup events</a>, and <a href="https://builder.aws.com/connect/events?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">developer-focused events</a>.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Weekly Roundup</a>!</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="e26a14bc-f45f-416c-b36b-f4c4072d4cca" data-title="AWS Weekly Roundup: Amazon Connect Health, Bedrock AgentCore Policy, GameDay Europe, and more (March 9, 2026)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-connect-health-bedrock-agentcore-policy-gameday-europe-and-more-march-9-2026/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-connect-health-bedrock-agentcore-policy-gameday-europe-and-more-march-9-2026/"/>
    <updated>2026-03-09T17:15:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-openclaw-on-amazon-lightsail-to-run-your-autonomous-private-ai-agents/</id>
    <title><![CDATA[Introducing OpenClaw on Amazon Lightsail to run your autonomous private AI agents]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the general availability of <a href="https://openclaw.ai">OpenClaw</a> on <a href="https://aws.amazon.com/lightsail/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Lightsail</a>. You can launch an OpenClaw instance, pair your browser, enable AI capabilities, and optionally connect messaging channels. Your Lightsail OpenClaw instance is pre-configured with <a href="https://aws.amazon.com/bedrock">Amazon Bedrock</a> as the default AI model provider. Once you complete setup, you can start chatting with your AI assistant immediately — no additional configuration required.</p><p>OpenClaw is an open-source, self-hosted, autonomous private AI agent that acts as a personal digital assistant by running directly on your computer. You can run AI agents on OpenClaw through your browser and connect them to messaging apps like WhatsApp, Discord, or Telegram to perform tasks such as managing emails, browsing the web, and organizing files, rather than just answering questions.</p><p>AWS customers have asked if they can run OpenClaw on AWS. Some of them blogged about running OpenClaw on <a href="https://aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2</a> instances. Having installed OpenClaw directly on my home device, I learned that this is not easy and that there are many security considerations.</p><p>So, let me introduce how to launch a pre-configured OpenClaw instance on Amazon Lightsail more easily and run it securely.</p><p><strong class="c6">OpenClaw on Amazon Lightsail in action</strong><br />To get started, go to the <a href="https://lightsail.aws.amazon.com/ls/webapp/home?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Lightsail console</a> and choose <strong>Create instance</strong> in the <strong>Instances</strong> section. 
After choosing your preferred AWS Region and Availability Zone and the Linux/Unix platform for your instance, choose OpenClaw under <strong>Select a blueprint</strong>.</p><p><img class="aligncenter wp-image-103267 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/04/2026-openclaw-lightsail-1.png" alt="" width="2138" height="1366" /></p><p>You can choose your instance plan (the 4 GB memory plan is recommended for optimal performance) and enter a name for your instance. Finally, choose <strong>Create instance</strong>. Your instance will be in a <strong>Running</strong> state in a few minutes.</p><p><img class="aligncenter wp-image-103269 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/04/2026-openclaw-lightsail-2-1.png" alt="" width="2148" height="1673" /></p><p>Before you can use the OpenClaw dashboard, you should pair your browser with OpenClaw. This creates a secure connection between your browser session and OpenClaw. To pair your browser with OpenClaw, choose <strong>Connect using SSH</strong> in the <strong>Getting started</strong> tab.</p><p>When a browser-based SSH terminal opens, you can see the dashboard URL and security credentials displayed in the welcome message. Copy them and open the dashboard in a new browser tab. In the OpenClaw dashboard, paste the copied access token into the Gateway Token field.</p><p><img class="aligncenter wp-image-103287 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/04/2026-openclaw-lightsail-4.png" alt="" width="2046" height="1740" /></p><p>When prompted, press <code>y</code> to continue and <code>a</code> to approve device pairing in the SSH terminal. 
When pairing is complete, you can see the <strong>OK</strong> status in the OpenClaw dashboard and your browser is now connected to your OpenClaw instance.</p><p><img class="aligncenter wp-image-103271 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/04/2026-openclaw-lightsail-5.png" alt="" width="2148" height="1526" /></p><p>Your OpenClaw instance on Lightsail is configured to use Amazon Bedrock to power its AI assistant. To enable Bedrock API access, copy the script in the <strong>Getting started</strong> tab and run it in the <a href="https://aws.amazon.com/cloudshell/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS CloudShell</a> terminal.</p><p><img class="aligncenter wp-image-103272 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/04/2026-openclaw-lightsail-3.png" alt="" width="2005" height="676" /></p><p>Once the script completes, go to <strong>Chat</strong> in the OpenClaw dashboard to start using your AI assistant!</p><p>You can set up OpenClaw to work with messaging apps like Telegram and WhatsApp so you can interact with your AI assistant directly from your phone or messaging client. 
To learn more, visit <a href="https://docs.aws.amazon.com/lightsail/latest/userguide/amazon-lightsail-quick-start-guide-openclaw.html#amazon-lightsail-openclaw-connect-messaging?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Get started with OpenClaw on Lightsail</a> in the Amazon Lightsail User Guide.</p><p><img class="aligncenter size-large wp-image-103279" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/04/2026-openclaw-lightsail-7-1024x413.png" alt="" width="1024" height="413" /></p><p><strong>Things to know</strong><br />Here are key considerations for this feature:</p><ul><li><strong>Permissions</strong> — You can customize the AWS IAM permissions granted to your OpenClaw instance. The setup script creates an IAM role with a policy that grants access to Amazon Bedrock. You can customize this policy at any time. However, be careful when modifying permissions, because overly restrictive policies may prevent OpenClaw from generating AI responses. To learn more, visit <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS IAM policies</a> in the AWS documentation.</li>
<li><strong>Cost</strong> — You pay an on-demand hourly rate for the instance plan you selected, billed only for what you use. Every message sent to and received from the OpenClaw assistant is processed through Amazon Bedrock using a token-based pricing model. If you select a third-party model distributed through <a href="https://aws.amazon.com/marketplace/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Marketplace</a>, such as Anthropic Claude or Cohere, there may be additional software fees on top of the per-token cost.</li>
<li><strong>Security</strong> — Running a personal AI agent on OpenClaw is powerful, but it can create security risks if you are careless. I recommend keeping your OpenClaw gateway private and never exposing it to the open internet. The gateway auth token is effectively your password, so rotate it often and store it in your environment file rather than hardcoding it in a config file. To learn more about security tips, visit <a href="https://docs.openclaw.ai/gateway/security">Security on OpenClaw gateway</a>.</li>
</ul><p><strong class="c6">Now available</strong><br />OpenClaw on Amazon Lightsail is now available in all AWS commercial Regions where <a href="https://docs.aws.amazon.com/lightsail/latest/userguide/understanding-regions-and-availability-zones-in-amazon-lightsail.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Lightsail is available</a>. For Regional availability and a future roadmap, visit the <a class="c-link" href="https://builder.aws.com/build/capabilities/explore?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el" target="_blank" rel="noopener noreferrer" data-stringify-link="https://builder.aws.com/capabilities/" data-sk="tooltip_parent">AWS Capabilities by Region</a>.</p><p>Give it a try in the <a href="https://lightsail.aws.amazon.com/ls/webapp/home?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Lightsail console</a> and send feedback to <a href="https://repost.aws/tags/TAG40l8mpESXKixja2uhSvgQ/amazon-lightsail?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS re:Post for Amazon Lightsail</a> or through your usual AWS support contacts.</p><p>– <a href="https://linkedin.com/in/channy">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="c2ea70b9-d528-4143-9a6a-f2ade01b7254" data-title="Introducing OpenClaw on Amazon Lightsail to run your autonomous private AI agents" data-url="https://aws.amazon.com/blogs/aws/introducing-openclaw-on-amazon-lightsail-to-run-your-autonomous-private-ai-agents/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-openclaw-on-amazon-lightsail-to-run-your-autonomous-private-ai-agents/"/>
    <updated>2026-03-04T21:04:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-openai-partnership-aws-elemental-inference-strands-labs-and-more-march-2-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: OpenAI partnership, AWS Elemental Inference, Strands Labs, and more (March 2, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>This past week, I’ve been deep in the trenches helping customers transform their businesses through AI-DLC (AI-Driven Lifecycle) workshops. Throughout 2026, I’ve had the privilege of facilitating these sessions for numerous customers, guiding them through a structured framework that helps organizations identify, prioritize, and implement AI use cases that deliver measurable business value.</p><p><img class="alignnone size-large wp-image-103232" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/01/Screenshot-2026-03-01-at-11.34.10%E2%80%AFAM-1024x622.png" alt="Screenshot of GenAI Developer Hour" width="1024" height="622" /></p><p>AI-DLC is a methodology that takes companies from AI experimentation to production-ready solutions by aligning technical capabilities with business outcomes. If you’re interested in learning more, check out <a href="https://aws.amazon.com/blogs/devops/ai-driven-development-life-cycle/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">this blog post</a> that dives deeper into the framework, or watch as <a href="https://www.linkedin.com/in/riyadani/">Riya Dani</a> teaches me all about AI-DLC on our recent <a href="https://www.youtube.com/watch?v=5kUb_IZdlB8">GenAI Developer Hour livestream</a>!</p><p>Now, let’s get into this week’s AWS news…</p><p><img class="size-full wp-image-103235 alignright" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/03/02/2026-amazon-openai.png" alt="" width="150" height="130" /><a href="https://www.aboutamazon.com/news/aws/amazon-open-ai-strategic-partnership-investment?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">OpenAI and Amazon announced a multi-year strategic partnership</a> to accelerate AI innovation for enterprises, startups, and end consumers around the world. 
Amazon will invest $50 billion in OpenAI, starting with an initial $15 billion investment and followed by another $35 billion in the coming months when certain conditions are met. AWS and OpenAI are co-creating a Stateful Runtime Environment powered by OpenAI models, available through <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a>, which allows developers to keep context, remember prior work, work across software tools and data sources, and access compute.</p><p>AWS will serve as the exclusive third-party cloud distribution provider for <a href="https://openai.com/index/introducing-openai-frontier/">OpenAI Frontier</a>, enabling organizations to build, deploy, and manage teams of AI agents. OpenAI and AWS are expanding their existing $38 billion multi-year agreement by $100 billion over 8 years, with OpenAI committing to consume approximately 2 gigawatts of Trainium capacity, spanning both Trainium3 and <a href="https://aws.amazon.com/ai/machine-learning/trainium/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">next-generation Trainium4 chips</a>.</p><p><strong>Last week’s launches</strong><br />Here are some launches and updates from this past week that caught my attention:</p><ul><li><a href="https://aws.amazon.com/blogs/aws/aws-security-hub-extended-offers-full-stack-enterprise-security-with-curated-partner-solutions/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Security Hub Extended offers full-stack enterprise security with curated partner solutions</a> — AWS launched Security Hub Extended, a plan that simplifies procurement, deployment, and integration of full-stack enterprise security solutions including 7AI, Britive, CrowdStrike, Cyera, Island, Noma, Okta, Oligo, Opti, Proofpoint, SailPoint, Splunk, Upwind, and Zscaler. 
With AWS as the seller of record, customers benefit from pre-negotiated pay-as-you-go pricing, a single bill, no long-term commitments, unified security operations within Security Hub, and unified Level 1 support for AWS Enterprise Support customers.</li>
<li><a href="https://aws.amazon.com/blogs/aws/transform-live-video-for-mobile-audiences-with-aws-elemental-inference/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Transform live video for mobile audiences with AWS Elemental Inference</a> — AWS launched Elemental Inference, a fully managed AI service that automatically transforms live and on-demand video for mobile and social platforms in real time. The service uses AI-powered cropping to create vertical formats optimized for TikTok, Instagram Reels, and YouTube Shorts, and automatically extracts highlight clips with 6-10 second latency. Beta testing showed large media companies achieved 34% or more savings on AI-powered live video workflows. Deep dive into the <a href="https://aws.amazon.com/blogs/media/how-aws-built-a-live-ai-powered-vertical-video-capability-for-fox-sports-with-aws-elemental-inference/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Fox Sports implementation</a>.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/aws-mediaconvert-introduces-video-probe/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">MediaConvert introduces new video probe API</a> — AWS Elemental MediaConvert introduced a free Probe API for quick metadata analysis of media files, reading header metadata to return codec specifications, pixel formats, and color space details without processing video content.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-bedrock-projects-api-mantle-inference-engine/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">OpenAI-compatible Projects API in Amazon Bedrock</a> — Projects API provides application-level isolation for your generative AI workloads using OpenAI-compatible APIs in the Mantle inference engine in Amazon Bedrock. You can organize and manage your AI applications with improved access control, cost tracking, and observability across your organization.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-location-service-introduces-kiro-power-claude-skill-llm-context/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Location Service introduces LLM Context</a> — Amazon Location launched curated AI Agent context as a Kiro power, Claude Code plugin, and agent skill in the open Agent Skills format, improving code accuracy and accelerating feature implementation for location-based capabilities.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-eks-node-monitoring-agent-open-source/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EKS Node Monitoring Agent is now open source</a> — The Amazon EKS Node Monitoring Agent is now open source on GitHub, allowing visibility into implementation, customization, and community contributions.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/aws-appconfig-new-relic-for-automated-rollback/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS AppConfig integrates with New Relic</a> — AWS AppConfig launched integration with New Relic Workflow Automation for automated, intelligent rollbacks during feature flag deployments, reducing detection-to-remediation time from minutes to seconds.</li>
</ul><p>For a full list of AWS announcements, be sure to keep an eye on the <a href="https://aws.amazon.com/new/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">What’s New with AWS</a> page.</p><p><strong>Other AWS news</strong><br />Here are some additional posts and resources that you might find interesting:</p><ul><li><a href="https://aws.amazon.com/blogs/opensource/introducing-strands-labs-get-hands-on-today-with-state-of-the-art-experimental-approaches-to-agentic-development/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Introducing Strands Labs</a> — We created Strands Labs as a separate GitHub organization to support experimental agentic AI projects and push the frontier of agentic development. At launch, Strands Labs includes three projects: <a href="https://github.com/strands-labs/robots">Robots</a>, <a href="https://github.com/strands-labs/robots-sim">Robots Sim</a>, and <a href="https://github.com/strands-labs/ai-functions">AI Functions</a>.</li>
<li><a href="https://aws.amazon.com/blogs/architecture/6000-aws-accounts-three-people-one-platform-lessons-learned/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">6,000 AWS accounts, three people, one platform: Lessons learned</a> — Architecture blog post on managing massive multi-account environments. Learn how ProGlove implemented a large-scale account-per-tenant model on AWS and how that model shifts complexity from service code to platform operations.</li>
<li><a href="https://aws.amazon.com/blogs/machine-learning/building-intelligent-event-agents-using-amazon-bedrock-agentcore-and-amazon-bedrock-knowledge-bases/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Building intelligent event agents using Amazon Bedrock AgentCore and Amazon Bedrock Knowledge Bases</a> — Practical guide to building event-driven agents. Check out how you can use Amazon Bedrock AgentCore components to rapidly productionize an event assistant—taking it from prototype to enterprise-ready deployment at scale.</li>
</ul><p><strong>From the AWS community</strong><br />Here are my personal favorite posts from the AWS community:</p><ul><li><a href="https://builder.aws.com/content/3AFEHrVf0iugHBclfZGPGDGAfm0/how-to-run-a-kiro-ai-coding-workshop-that-actually-works?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">How to Run a Kiro AI Coding Workshop That Actually Works</a> — Running a Kiro workshop at your company or user group? Here is the full step-by-step facilitator guide, with resources and references.</li>
<li><a href="https://builder.aws.com/content/3AAxqZdFuNaiEWCZIiVYlpbt3ml/rag-vs-graphrag-when-agents-hallucinate-answers?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">RAG vs GraphRAG: When Agents Hallucinate Answers</a> — This demo builds a travel booking agent with Strands Agents and compares RAG (FAISS) vs GraphRAG (Neo4j) to measure which approach reduces hallucinations when answering queries.</li>
<li><a href="https://aws.amazon.com/blogs/developer/announcing-new-output-formats-in-aws-cli-v2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">New output formats in AWS CLI v2</a> — You can now use two new features for the AWS Command Line Interface (AWS CLI) v2: structured error output and the “off” output format.</li>
</ul><p><strong>Upcoming AWS events</strong><br />Check your calendar and sign up for upcoming AWS events:</p><ul><li><a href="https://aws.amazon.com/events/aws-at-nvidia-gtc26/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS at NVIDIA GTC 2026</a> — Join us for AWS sessions, booths, demos, and ancillary events at NVIDIA GTC 2026, March 16 – 19, 2026, in San Jose. You can receive 20% off event passes through AWS and request a 1:1 meeting at GTC.</li>
<li><a href="https://aws.amazon.com/events/summits/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Summits</a> — Join AWS Summits in 2026, free in-person events where you can explore emerging cloud and AI technologies, learn best practices, and network with industry peers and experts. Upcoming Summits include <a href="https://aws.amazon.com/events/summits/paris/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Paris</a> (April 1), <a href="https://aws.amazon.com/events/summits/london/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">London</a> (April 22), and <a href="https://aws.amazon.com/events/summits/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Bengaluru</a> (April 23–24).</li>
<li><a href="https://aws.amazon.com/events/community-day/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Community Days</a> — Community-led conferences where content is planned, sourced, and delivered by community leaders. Upcoming events include <a href="https://jawsdays2026.jaws-ug.jp/">JAWS Days in Tokyo</a> (March 7), <a href="https://www.acdchennai.com/">Chennai</a> (March 7), <a href="https://www.awscommunityday.sk/">Slovakia</a> (March 11), and <a href="https://www.awsugpune.in/">Pune</a> (March 21).</li>
</ul><p>Browse here for upcoming <a href="https://aws.amazon.com/events/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS led in-person and virtual events</a>, <a href="https://aws.amazon.com/startups/events?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">startup events</a>, and <a href="https://builder.aws.com/connect/events?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">developer-focused events</a>.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Weekly Roundup</a>!</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="0b7e0f18-6c19-4a65-be2d-a6152907127f" data-title="AWS Weekly Roundup: OpenAI partnership, AWS Elemental Inference, Strands Labs, and more (March 2, 2026)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-openai-partnership-aws-elemental-inference-strands-labs-and-more-march-2-2026/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-openai-partnership-aws-elemental-inference-strands-labs-and-more-march-2-2026/"/>
    <updated>2026-03-02T20:05:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-security-hub-extended-offers-full-stack-enterprise-security-with-curated-partner-solutions/</id>
    <title><![CDATA[AWS Security Hub Extended offers full-stack enterprise security with curated partner solutions]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>At re:Invent 2025, we <a href="https://aws.amazon.com/blogs/aws/aws-security-hub-now-generally-available-with-near-real-time-analytics-and-risk-prioritization/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">introduced</a> a completely re-imagined <a href="https://aws.amazon.com/security-hub/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Security Hub</a> that unifies AWS security services, including <a href="https://aws.amazon.com/guardduty/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon GuardDuty</a> and <a href="https://aws.amazon.com/inspector/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Inspector</a>, into a single experience. This unified experience automatically and continuously analyzes security findings in combination to help you prioritize and respond to your critical security risks.</p><p>Today, we’re announcing <a href="https://aws.amazon.com/security-hub/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Security Hub Extended</a>, a Security Hub plan that simplifies how you procure, deploy, and integrate a full-stack enterprise security solution across endpoint, identity, email, network, data, browser, cloud, AI, and security operations. With the Extended plan, you can expand your security portfolio beyond AWS to help protect your enterprise estate through a curated selection of AWS Partner solutions, including 7AI, Britive, CrowdStrike, Cyera, Island, Noma, Okta, Oligo, Opti, Proofpoint, SailPoint, Splunk, a Cisco company, Upwind, and Zscaler.</p><p>With AWS as the seller of record, you benefit from pre-negotiated pay-as-you-go pricing, a single bill, and no long-term commitments. 
You can also get a unified security operations experience within Security Hub and unified Level 1 support for <a href="https://aws.amazon.com/premiumsupport/plans/enterprise/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Enterprise Support customers</a>. You told us that managing multiple procurement cycles and vendor negotiations was creating unnecessary complexity, costing you time and resources. In response, we’ve curated these partner offerings for you to establish more comprehensive protection across your entire technology stack through a single, simplified experience.</p><p>Security findings from all participating solutions are emitted in the <a href="https://github.com/ocsf">Open Cybersecurity Schema Framework (OCSF)</a> schema and automatically aggregated in AWS Security Hub. With the Extended plan, you can combine AWS and partner security solutions to quickly identify and respond to risks that span boundaries.</p><p><strong class="c6">The Security Hub Extended plan in action</strong><br />You can access the partner solutions directly within the <a href="https://console.aws.amazon.com/securityhub/v2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Security Hub console</a> by selecting <strong>Extended plan</strong> under the <strong>Management</strong> menu. From there, you can review and deploy any combination of curated and partner offerings.</p><p><img class="aligncenter wp-image-103186 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/26/2026-security-hub-extended-plan1.jpg" alt="" width="1800" height="1033" /></p><p>You can review details of each partner offering directly in the Security Hub console and subscribe. When you subscribe, you’ll be directed to an automated onboarding experience from each partner. 
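</p><p>Because every participating solution emits findings in the same OCSF schema, one piece of downstream tooling can handle them all. As a minimal illustrative sketch (the field names follow OCSF conventions such as <code>severity</code> and <code>metadata.product.name</code>, but the sample values are hypothetical, not a real Security Hub payload), grouping cross-vendor findings by severity might look like this:</p>

```python
# Minimal sketch: group OCSF-style findings by severity.
# Field names follow OCSF conventions (severity, metadata.product.name);
# the sample values below are hypothetical, not a real Security Hub payload.
from collections import defaultdict

findings = [
    {"severity": "Critical", "metadata": {"product": {"name": "VendorA"}}},
    {"severity": "High", "metadata": {"product": {"name": "VendorB"}}},
    {"severity": "Critical", "metadata": {"product": {"name": "VendorC"}}},
]

def group_by_severity(items):
    """Map each severity label to the product names that reported it."""
    groups = defaultdict(list)
    for finding in items:
        product = finding["metadata"]["product"]["name"]
        groups[finding.get("severity", "Unknown")].append(product)
    return dict(groups)

print(group_by_severity(findings))
# → {'Critical': ['VendorA', 'VendorC'], 'High': ['VendorB']}
```

<p>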
Once onboarded, consumption-based metering is automatic, and you are billed monthly as part of your Security Hub bill.</p><p><img class="aligncenter wp-image-103178 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/26/2026-security-hub-extended-plan2.jpg" alt="" width="2406" height="1156" /></p><p>Security findings from all solutions are automatically consolidated in AWS Security Hub. This gives you immediate and direct access to all security findings in the normalized OCSF schema.</p><p>To learn more about how to enhance your security posture with these integrations for AWS Security Hub, visit the <a href="https://docs.aws.amazon.com/securityhub/latest/userguide/what-are-securityhub-services.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Security Hub User Guide</a>.</p><p><strong class="c6">Now available</strong><br />The AWS Security Hub Extended plan is now generally available in all AWS commercial Regions where Security Hub is available. You can use flexible pay-as-you-go or flat-rate pricing—no upfront investments or long-term commitments required. 
For more information about pricing, visit the <a href="https://aws.amazon.com/security-hub/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Security Hub pricing page</a>.</p><p>Give it a try today in the <a href="https://console.aws.amazon.com/securityhub/v2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Security Hub console</a> and send feedback to <a href="https://repost.aws/tags/TAFZPV4oyuS6-TWxLQfz5qSQ?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS re:Post for Security Hub</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="2dd2089c-42f2-465e-b738-6f7ba342e311" data-title="AWS Security Hub Extended oﬀers full-stack enterprise security with curated partner solutions" data-url="https://aws.amazon.com/blogs/aws/aws-security-hub-extended-offers-full-stack-enterprise-security-with-curated-partner-solutions/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-security-hub-extended-offers-full-stack-enterprise-security-with-curated-partner-solutions/"/>
    <updated>2026-02-26T19:52:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/transform-live-video-for-mobile-audiences-with-aws-elemental-inference/</id>
    <title><![CDATA[Transform live video for mobile audiences with AWS Elemental Inference]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing <a href="https://aws.amazon.com/elemental-inference/">AWS Elemental Inference</a>, a fully managed AI service that automatically transforms live and on-demand video broadcasts to engage audiences at scale. At launch, you’ll be able to use AWS Elemental Inference to adapt video content into vertical formats optimized for mobile and social platforms in real time.</p><p>With AWS Elemental Inference, broadcasters and streamers can reach audiences on social and mobile platforms such as TikTok, Instagram Reels, and YouTube Shorts without manual postproduction work or AI expertise.</p><p>Today’s viewers consume content differently than they did even a few years ago. However, most broadcasts are produced in landscape format for traditional viewing. Converting these broadcasts into vertical formats for mobile platforms typically requires time-consuming manual editing that causes broadcasters and streamers to miss viral moments and lose audiences to mobile-first destinations.</p><p><strong>Let’s try it out<br /></strong> AWS Elemental Inference offers flexible deployment options to fit your existing workflow. You can choose to create a feed through the standalone console or configure AWS Elemental Inference through the <a href="https://aws.amazon.com/medialive/">AWS Elemental MediaLive</a> console.</p><p><img class="alignnone size-large wp-image-103082" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/17/elemental-01-1024x557.png" alt="AWS Elemental Inference console" width="1024" height="557" /></p><p>To get started with AWS Elemental Inference, navigate to the <a href="https://aws.amazon.com/console/">AWS Management Console</a> and choose <strong>AWS Elemental Inference</strong>. From the dashboard, choose <strong>Create feed</strong> to establish your top-level resource for AI-powered video processing. 
A feed contains your feature configurations and begins in CREATING state before transitioning to AVAILABLE when ready.</p><p><img class="alignnone size-large wp-image-103083" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/17/elemental-02-1024x597.png" alt="AWS Elemental Inference console" width="1024" height="597" /></p><p>After creating your feed, you can configure outputs for either vertical video cropping or clip generation. For cropping, you can start with an empty feed. The service automatically manages cropping parameters based on your video specifications. For clip generation, choose <strong>Add output</strong>, provide a name (such as “highlight-clips”), select <strong>Clipping</strong> as the output type, and set the status to <strong>ENABLED</strong>.</p><p>This standalone interface provides a streamlined experience for configuring and managing your AI-powered video transformations, making it straightforward to get started with vertical video creation and clip generation.</p><p><img class="alignnone size-large wp-image-103084" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/17/medialive-01-1024x448.png" alt="AWS MediaLive inference" width="1024" height="448" /></p><p>Alternatively, you can enable AWS Elemental Inference directly within your AWS Elemental MediaLive channel configuration. You can use this integrated approach to add AI capabilities to your existing live video workflows without modifying your architecture. 
Enable the features you need as part of your channel setup, and AWS Elemental Inference will work in parallel with your video encoding.</p><p><img class="alignnone size-large wp-image-103085" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/17/medialive-03-1024x736.png" alt="AWS MediaLive inference console" width="1024" height="736" /></p><p>After it’s enabled, you can configure <strong>Smart Crop</strong> with outputs for different resolution specifications within an <strong>Output group</strong>.</p><p><img class="alignnone size-large wp-image-103086" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/17/medialive-05-1024x790.png" alt="AWS MediaLive inference console" width="1024" height="790" /></p><p>AWS Elemental MediaLive now includes a dedicated AWS Elemental Inference tab on the channel details page, providing a centralized view of your AI-powered video transformation configuration. The tab displays the service <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">Amazon Resource Name (ARN)</a>, data endpoints, and feed output details, including which features, such as Smart Crop, are enabled and their current operational status.</p><p><strong>How AWS Elemental Inference works<br /></strong> The service uses an agentic AI application that analyzes video in real time and automatically applies the right optimizations at the right moments. Vertical video cropping and clip generation run independently, each executing multistep transformations without human intervention.</p><p>AWS Elemental Inference analyzes video and automatically applies AI capabilities with no human-in-the-loop prompting required. 
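</p><p>The baseline geometry behind vertical reformatting is simple to sketch. As a purely illustrative example (not the service’s actual algorithm or any AWS Elemental Inference API), a centered 16:9-to-9:16 crop can be computed like this:</p>

```python
# Illustrative geometry only, not part of any AWS Elemental Inference API:
# compute a centered crop window that turns a 16:9 landscape frame into
# a 9:16 vertical frame of the same height. The real service tracks
# subjects instead of always cropping dead center.

def center_crop_9x16(width: int, height: int) -> tuple:
    """Return (x, y, crop_width, crop_height) for a centered 9:16 window."""
    crop_height = height
    crop_width = round(crop_height * 9 / 16)
    x = (width - crop_width) // 2
    return (x, 0, crop_width, crop_height)

print(center_crop_9x16(1920, 1080))
# → (656, 0, 608, 1080): a 608-pixel-wide vertical slice of a Full HD frame
```

<p>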
While you focus on quality video production, the service autonomously optimizes content to create personalized content experiences for your audience.</p><p>AWS Elemental Inference applies AI capabilities in parallel with live video, achieving 6–10 second latency compared to minutes for traditional postprocessing approaches. This “process once, optimize everywhere” method runs multiple AI features simultaneously on the same video stream, eliminating the need to reprocess content for each capability.</p><p>The service integrates seamlessly with AWS Elemental MediaLive, so you can enable AI features without modifying your existing video architecture. AWS Elemental Inference uses fully managed <a href="https://aws.amazon.com/what-is/foundation-models/">foundation models (FMs)</a> that are automatically updated and optimized, so you don’t need dedicated AI teams or specialized expertise.</p><p><strong>Key features at launch<br /></strong> Enjoy the following key features when AWS Elemental Inference launches:</p><ul><li>Vertical video creation – AI-powered cropping intelligently transforms landscape broadcasts into vertical formats (9:16 aspect ratio) optimized for social and mobile platforms. The service tracks subjects and keeps key action visible, maintaining broadcast quality while automatically reformatting content for mobile viewing.</li>
<li>Clip generation with advanced metadata analysis – Automatically detects and extracts clips from live content, highlighting moments for real-time distribution. For live broadcasts, this means identifying game-winning plays in soccer and basketball—reducing manual editing from hours to minutes.</li>
</ul><p>Keep an eye on this space as more features and capabilities will be introduced throughout this year, including tighter integration with core <a href="https://aws.amazon.com/media-services/elemental/">AWS Elemental</a> services and features to help customers monetize their video content.</p><p><strong>Now available<br /></strong> AWS Elemental Inference is available today in 4 <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a>: US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Mumbai). You can enable AWS Elemental Inference through the AWS Elemental MediaLive console or integrate it into your workflows using the <a href="https://docs.aws.amazon.com/medialive/latest/apireference/what-is.html">AWS Elemental MediaLive APIs</a>.</p><p>With consumption-based pricing, you pay only for the features you use and the video you process, with no upfront costs or commitments. This means you can scale during peak events and optimize costs during quieter periods.</p><p>To learn more about AWS Elemental Inference, visit the <a href="https://aws.amazon.com/elemental-inference">AWS Elemental Inference product page</a>. For technical implementation details, see the <a href="https://docs.aws.amazon.com/elemental-inference">AWS Elemental Inference documentation</a>.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="f941fd3e-843b-49f9-b8ff-76fd23c26059" data-title="Transform live video for mobile audiences with AWS Elemental Inference" data-url="https://aws.amazon.com/blogs/aws/transform-live-video-for-mobile-audiences-with-aws-elemental-inference/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/transform-live-video-for-mobile-audiences-with-aws-elemental-inference/"/>
    <updated>2026-02-24T19:55:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-sonnet-4-6-in-amazon-bedrock-kiro-in-govcloud-regions-new-agent-plugins-and-more-february-23-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, new Agent Plugins, and more (February 23, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Last week, my team met many developers at <a href="https://www.developerweek.com/">Developer Week</a> in San Jose. My colleague Vinicius Senger delivered a great keynote about renascent software—a new way of building and evolving applications where humans and AI collaborate as co-developers using Kiro. Other colleagues spoke about building and deploying production-ready AI agents. Everyone stayed to ask questions about agent memory, multi-agent patterns, meta-tooling, and hooks. It was interesting to see how many developers are actually building agents.</p><p><img class="aligncenter size-full wp-image-103102" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/20/2026-developer-week-conference.jpg" alt="" width="1800" height="594" /></p><p>We are continuing to meet developers and hear their feedback at third-party developer conferences. You can meet us at <a href="https://devnexus.com/">dev/nexus</a>, the largest and longest-running Java ecosystem conference, on March 4-6 in Atlanta. My colleague <a href="https://devnexus.com/speakers/james-ward">James Ward</a> will speak about building AI Agents with Spring and MCP, and <a href="https://devnexus.com/speakers/vinicius-senger">Vinicius Senger</a> and <a href="https://devnexus.com/speakers/jonathan-vogel">Jonathan Vogel</a> will speak about 10 tools and tips to upgrade your Java code with AI. I’ll keep sharing places for you to connect with us.</p><p><strong>Last week’s launches</strong><br />Here are some of the other announcements from last week:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/claude-sonnet-4.6-available-in-amazon-bedrock/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Claude Sonnet 4.6 model in Amazon Bedrock</a> – You can now use Claude Sonnet 4.6, which offers frontier performance across coding, agents, and professional work at scale. 
Claude Sonnet 4.6 approaches Opus 4.6 intelligence at a lower cost. It enables faster, high-quality task completion, making it ideal for high-volume coding and knowledge work use cases.</li>
<li><a href="https://aws.amazon.com/blogs/aws/amazon-ec2-hpc8a-instances-powered-by-5th-gen-amd-epyc-processors-are-now-available/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 Hpc8a instances powered by 5th Gen AMD EPYC processors</a> – You can use new Hpc8a instances delivering up to 40% higher performance, increased memory bandwidth, and 300 Gbps Elastic Fabric Adapter networking. You can accelerate compute-intensive simulations, engineering workloads, and tightly coupled HPC applications.</li>
<li><a href="https://aws.amazon.com/blogs/aws/announcing-amazon-sagemaker-inference-for-custom-amazon-nova-models/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon SageMaker Inference for custom Amazon Nova models</a> – You can now configure the instance types, auto-scaling policies, and concurrency settings for custom Nova model deployments with Amazon SageMaker Inference to best meet your needs.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-nested-virtualization-on-virtual/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Nested virtualization on virtual Amazon EC2 instances</a> – You can create nested virtual machines by running KVM or Hyper-V on virtual EC2 instances. You can leverage this capability for use cases such as running emulators for mobile applications, simulating in-vehicle hardware for automobiles, and running Windows Subsystem for Linux on Windows workstations.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-aurora-server-side-encryption-at-rest/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Server-Side Encryption by default in Amazon Aurora</a> – Amazon Aurora further strengthens your security posture by automatically applying server-side encryption by default to all new database clusters using AWS-owned keys. This encryption is fully managed, transparent to users, and has no cost or performance impact.</li>
<li><a href="https://kiro.dev/blog/introducing-govcloud/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Kiro in AWS GovCloud (US) Regions</a> – Kiro is now available to the development teams behind government missions. Developers in regulated environments can now use Kiro’s agentic AI tooling with the rigorous security controls they require.</li>
</ul><p>For a full list of AWS announcements, be sure to keep an eye on the <a href="https://aws.amazon.com/new/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">What’s New with AWS</a> page.</p><p><strong>Additional updates</strong><br />Here are some additional news items that you might find interesting:</p><ul><li><a href="https://aws.amazon.com/blogs/developer/introducing-agent-plugins-for-aws/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Introducing Agent Plugins for AWS</a> – You can see how new open-source Agent Plugins for AWS extend coding agents with skills for deploying applications to AWS. Using the <code>deploy-on-aws</code> plugin, you can generate architecture recommendations, cost estimates, and infrastructure-as-code directly from your coding agent.</li>
<li><a href="https://www.allthingsdistributed.com/2026/02/a-chat-with-byron-cook-on-automated-reasoning-and-trust-in-ai-systems.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">A chat with Byron Cook on automated reasoning and trust in AI systems</a> – You can hear how automated reasoning can verify that AI systems do the right thing when they generate code or manage critical decisions. Byron Cook’s team has spent a decade proving correctness at AWS and is applying those techniques to agentic systems.</li>
<li><a href="https://aws.amazon.com/blogs/devops/best-practices-for-deploying-aws-devops-agent-in-production/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Best practices for deploying AWS DevOps Agent in production</a> – You can read best practices for setting up DevOps Agent Spaces that balance investigation capability with operational efficiency. According to <a href="https://www.linkedin.com/posts/swaminathansivasubramanian_what-happens-when-an-application-goes-down-activity-7429241370197880832-hDHh?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAABGHXYBf20AS88WHERjRIUKOnefBR3chJA">Swami Sivasubramanian</a>, AWS DevOps Agent, a frontier agent that resolves and proactively prevents incidents, has handled thousands of escalations, with an estimated root cause identification rate of over 86% within Amazon.</li>
</ul><p><strong>From AWS community</strong><br />Here are my personal favorite posts from the AWS community:</p><ul><li><a href="https://dev.to/aws/everything-you-need-to-know-about-aws-for-your-first-developer-job-52o2">Everything You Need to Know About AWS for Your First Developer Job</a> – Your first week as a developer will not look like the tutorials you followed previously. Read Ifeanyi Otuonye’s real-world AWS guide for your first job.</li>
<li><a href="https://builder.aws.com/content/39P4kYpFQyYBeumcrXQp6IGhhDc/let-an-ai-agent-do-your-job-searching?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Let an AI Agent Do Your Job Searching</a> – Still manually checking career pages during a job search? AWS Hero Danielle H. built an AI agent that does the work for you.</li>
<li><a href="https://builder.aws.com/content/3A1s1pHJUvbDQW227CLmcKw17OQ/building-the-aws-serverless-power-for-kiro?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Building the AWS Serverless Power for Kiro</a> – Former AWS Serverless Hero Gunnar Grosch built a Kiro Power that integrates 25 MCP tools, 10 steering guides, and structured decision guidance for the full development lifecycle.</li>
</ul><p>Join the <a href="https://builder.aws.com/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Builder Center</a> to connect with community, share knowledge, and access content that supports your development.</p><p><strong>Upcoming AWS events</strong><br />Check your calendar and sign up for upcoming AWS events:</p><ul><li><a href="https://aws.amazon.com/events/summits/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Summits</a> – Join AWS Summits in 2026, free in-person events where you can explore emerging cloud and AI technologies, learn best practices, and network with industry peers and experts. Upcoming Summits include <a href="https://aws.amazon.com/events/summits/paris/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Paris</a> (April 1), <a href="https://aws.amazon.com/events/summits/london/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">London</a> (April 22), and <a href="https://aws.amazon.com/events/summits/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Bengaluru</a> (April 23–24).</li>
<li><a href="https://builder.aws.com/content/38zSJWK4FkDqvJDwnQ0n9nqKvSx/announcing-amazon-nova-ai-hackathon-turn-your-ideas-into-reality?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Nova AI Hackathon</a> – Join developers worldwide to build innovative generative AI solutions using frontier foundation models and compete for $40,000 in prizes across five categories including agentic AI, multimodal understanding, UI automation, and voice experiences during this six-week challenge from February 2nd to March 16th, 2026.</li>
<li><a href="https://aws.amazon.com/events/community-day/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Community Days</a> – Community-led conferences where content is planned, sourced, and delivered by community leaders, featuring technical discussions, workshops, and hands-on labs. Upcoming events include <a href="https://awsahmedabad.community/">Ahmedabad</a> (February 28), <a href="https://jawsdays2026.jaws-ug.jp/">JAWS Days in Tokyo</a> (March 7), <a href="https://www.acdchennai.com/">Chennai</a> (March 7), <a href="https://www.awscommunityday.sk/">Slovakia</a> (March 11), and <a href="https://www.awsugpune.in/">Pune</a> (March 21).</li>
</ul><p>Browse here for upcoming <a href="https://aws.amazon.com/events/explore-aws-events/?refid=e61dee65-4ce8-4738-84db-75305c9cd4fe">AWS led in-person and virtual events</a>, <a href="https://aws.amazon.com/startups/events?tab=upcoming">startup events</a>, and <a href="https://builder.aws.com/connect/events?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">developer-focused events</a>.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Weekly Roundup</a>!</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="6397d396-c915-4dac-843f-ffb9b9f335f4" data-title="AWS Weekly Roundup: Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, new Agent Plugins, and more (February 23, 2026)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-sonnet-4-6-in-amazon-bedrock-kiro-in-govcloud-regions-new-agent-plugins-and-more-february-23-2026/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-sonnet-4-6-in-amazon-bedrock-kiro-in-govcloud-regions-new-agent-plugins-and-more-february-23-2026/"/>
    <updated>2026-02-23T17:56:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-ec2-hpc8a-instances-powered-by-5th-gen-amd-epyc-processors-are-now-available/</id>
    <title><![CDATA[Amazon EC2 Hpc8a Instances powered by 5th Gen AMD EPYC processors are now available]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the general availability of <a href="https://aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2)</a> Hpc8a instances, a new high performance computing (HPC) optimized instance type powered by latest 5th Generation AMD EPYC processors with a maximum frequency of up to 4.5 GHz. These instances are ideal for compute-intensive tightly coupled HPC workloads, including computational fluid dynamics, simulations for faster design iterations, high-resolution weather modeling within tight operational windows, and complex crash simulations that require rapid time-to-results.</p><p>The new Hpc8a instances deliver up to 40% higher performance, 42% greater memory bandwidth, and up to 25% better price-performance compared to previous generation <a href="https://aws.amazon.com/blogs/aws/new-amazon-ec2-hpc7a-instances-powered-by-4th-gen-amd-epyc-processors-optimized-for-high-performance-computing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Hpc7a instances</a>. Customers benefit from the high core density, memory bandwidth, and low-latency networking that helped them scale efficiently and reduce job completion times for their compute-intensive simulation workloads.</p><p><strong class="c6">Hpc8a instances</strong><br />Hpc8a instances are available with 192 cores, 768 GiB memory, and 300 Gbps <a href="https://aws.amazon.com/hpc/efa/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Elastic Fabric Adapter (EFA)</a> networking to run applications requiring high levels of inter node communications at scale.</p><table class="c10"><tbody><tr class="c8"><td class="c7"><strong>Instance Name</strong></td>
<td class="c7"><strong>Physical Cores</strong></td>
<td class="c7"><strong>Memory (GiB)</strong></td>
<td class="c7"><strong>EFA Network Bandwidth (Gbps)</strong></td>
<td class="c7"><strong>Network Bandwidth (Gbps)</strong></td>
<td class="c7"><strong>Attached Storage</strong></td>
</tr><tr class="c9"><td class="c7"><strong>Hpc8a.96xlarge</strong></td>
<td class="c7">192</td>
<td class="c7">768</td>
<td class="c7">Up to 300</td>
<td class="c7">75</td>
<td class="c7">EBS Only</td>
</tr></tbody></table><p>Hpc8a instances are available in a single <strong>96xlarge</strong> size with a 1:4 core-to-memory ratio. You can right-size instances to your HPC workload requirements by customizing the number of active cores at launch. These instances also use sixth-generation <a href="https://aws.amazon.com/ec2/nitro/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Nitro</a> cards, which offload CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing performance and security for your workloads.</p><p>You can use Hpc8a instances with <a href="https://aws.amazon.com/hpc/parallelcluster/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS ParallelCluster</a> and <a href="https://aws.amazon.com/pcs/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Parallel Computing Service (AWS PCS)</a> to simplify workload submission and cluster creation, and with <a href="https://aws.amazon.com/fsx/lustre/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon FSx for Lustre</a> for sub-millisecond latencies and up to hundreds of gigabytes per second of throughput for storage. To achieve the best performance for HPC workloads, these instances have Simultaneous Multithreading (SMT) disabled.</p><p><strong class="c6">Now available</strong><br />Amazon EC2 Hpc8a instances are now available in the US East (Ohio) and Europe (Stockholm) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Regions</a>. 
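The launch-time core customization mentioned above maps to the EC2 <code>RunInstances</code> <code>CpuOptions</code> parameter. Here is a minimal sketch of a launch-parameter builder; the instance type, the 192-core maximum, and disabled SMT come from this announcement, while the helper function and placeholder AMI ID are illustrative assumptions:

```python
# Illustrative helper: build RunInstances parameters for an Hpc8a.96xlarge
# with a reduced number of active physical cores. Only the instance type,
# the 192-core maximum, and disabled SMT come from the announcement;
# the function name and AMI placeholder are assumptions.
def hpc8a_launch_params(active_cores, ami_id="ami-EXAMPLE"):
    if not 1 <= active_cores <= 192:
        raise ValueError("Hpc8a.96xlarge exposes at most 192 physical cores")
    return {
        "ImageId": ami_id,
        "InstanceType": "hpc8a.96xlarge",
        "MinCount": 1,
        "MaxCount": 1,
        # SMT is disabled on Hpc8a, so each active core runs a single thread
        "CpuOptions": {"CoreCount": active_cores, "ThreadsPerCore": 1},
    }

params = hpc8a_launch_params(96)
print(params["CpuOptions"])  # {'CoreCount': 96, 'ThreadsPerCore': 1}
```

Passing such a dictionary to <code>ec2.run_instances(**params)</code> would request an instance with only 96 of the 192 cores active, which can help license-bound codes or workloads that benefit from more memory bandwidth per core.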
For Regional availability and a future roadmap, search the instance type in the <strong>CloudFormation</strong> resources tab of <a href="https://builder.aws.com/build/capabilities/explore?tab=cfn-resources&amp;trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Capabilities by Region</a>.</p><p>You can purchase these instances as <a href="https://aws.amazon.com/ec2/pricing/on-demand/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">On-Demand Instances</a> or with <a href="https://aws.amazon.com/savingsplans/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Savings Plans</a>. To learn more, visit the <a href="https://aws.amazon.com/ec2/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 Pricing page</a>.</p><p>Give Hpc8a instances a try in the <a href="https://console.aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 console</a>. To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/hpc8a/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 Hpc8a instances page</a> and send feedback to <a href="https://repost.aws/tags/TAO-wqN9fYRoyrpdULLa5y7g/amazon-ec-2?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS re:Post for EC2</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="ab875734-5a8b-427b-b1ca-06783d43e420" data-title="Amazon EC2 Hpc8a Instances powered by 5th Gen AMD EPYC processors are now available" data-url="https://aws.amazon.com/blogs/aws/amazon-ec2-hpc8a-instances-powered-by-5th-gen-amd-epyc-processors-are-now-available/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-ec2-hpc8a-instances-powered-by-5th-gen-amd-epyc-processors-are-now-available/"/>
    <updated>2026-02-17T00:12:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/announcing-amazon-sagemaker-inference-for-custom-amazon-nova-models/</id>
    <title><![CDATA[Announcing Amazon SageMaker Inference for custom Amazon Nova models]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Since we launched <a href="https://aws.amazon.com/blogs/aws/announcing-amazon-nova-customization-in-amazon-sagemaker-ai/">Amazon Nova customization in Amazon SageMaker AI</a> at AWS NY Summit 2025, customers have been asking for the same capabilities with <a href="https://aws.amazon.com/nova/">Amazon Nova</a> as they have when they customize open weights models in <a href="https://aws.amazon.com/sagemaker/ai/deploy/">Amazon SageMaker Inference</a>. They also wanted more control and flexibility in custom model inference over the instance types, auto-scaling policies, context length, and concurrency settings that production workloads demand.</p><p>Today, we’re announcing the general availability of custom Nova model support in Amazon SageMaker Inference, a production-grade, configurable, and cost-efficient managed inference service to deploy and scale full-rank customized Nova models. You can now experience an end-to-end customization journey to train Nova Micro, Nova Lite, and Nova 2 Lite models with reasoning capabilities using <a href="https://aws.amazon.com/sagemaker/ai/train/">Amazon SageMaker Training Jobs</a> or <a href="https://aws.amazon.com/sagemaker/ai/hyperpod/">Amazon HyperPod</a> and seamlessly deploy them with the managed inference infrastructure of Amazon SageMaker AI.</p><p>With Amazon SageMaker Inference for custom Nova models, you can reduce inference cost through optimized GPU utilization using <a href="https://aws.amazon.com/ec2">Amazon Elastic Compute Cloud (Amazon EC2)</a> <a href="https://aws.amazon.com/ec2/instance-types/g5/">G5</a> and <a href="https://aws.amazon.com/ec2/instance-types/g6/">G6</a> instances over <a href="https://aws.amazon.com/ec2/instance-types/p5/">P5 instances</a>, auto-scaling based on 5-minute usage patterns, and configurable inference parameters. This feature enables deployment of Nova models customized with continued pre-training, supervised fine-tuning, or reinforcement fine-tuning for your use cases. 
You can also set advanced configurations for context length, concurrency, and batch size to optimize the latency-cost-accuracy tradeoff for your specific workloads.</p><p>Let’s see how to deploy customized Nova models on SageMaker AI real-time endpoints, configure inference parameters, and invoke your models for testing.</p><p><strong class="c6">Deploy custom Nova models in SageMaker Inference</strong><br />At AWS re:Invent 2025, we introduced <a href="https://aws.amazon.com/blogs/aws/new-serverless-customization-in-amazon-sagemaker-ai-accelerates-model-fine-tuning/">new serverless customization in Amazon SageMaker AI</a> for popular AI models including Nova models. With a few clicks, you can seamlessly select a model and customization technique, and handle model evaluation and deployment. If you already have a trained custom Nova model artifact, you can deploy the model on SageMaker Inference through <a href="https://console.aws.amazon.com/sagemaker?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">SageMaker Studio</a> or the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/api-and-sdk-reference-overview.html">SageMaker AI SDK</a>.</p><p>In SageMaker Studio, choose a trained Nova model in the <strong>Models</strong> menu. You can deploy the model by choosing the <strong>Deploy</strong> button, then <strong>SageMaker AI</strong> and <strong>Create new endpoint</strong>.</p><p><img class="aligncenter size-full wp-image-102959" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/11/2026-sagemaker-ai-custom-nova-1-deploy.jpg" alt="" width="2380" height="1282" /></p><p>Choose the endpoint name, instance type, and advanced options such as instance count, max instance count, permissions, and networking, then choose the <strong>Deploy</strong> button. 
At GA launch, you can use <code>g5.12xlarge</code>, <code>g5.24xlarge</code>, <code>g5.48xlarge</code>, <code>g6.12xlarge</code>, <code>g6.24xlarge</code>, <code>g6.48xlarge</code>, and <code>p5.48xlarge</code> instance types for the Nova Micro model, <code>g5.24xlarge</code>, <code>g5.48xlarge</code>, <code>g6.24xlarge</code>, <code>g6.48xlarge</code>, and <code>p5.48xlarge</code> for the Nova Lite model, and <code>p5.48xlarge</code> for the Nova 2 Lite model.</p><p><img class="aligncenter size-full wp-image-102961" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/11/2026-sagemaker-ai-custom-nova-2-deploy.jpg" alt="" width="2348" height="1466" /></p><p>Creating your endpoint requires time to provision the infrastructure, download your model artifacts, and initialize the inference container.</p><p>After model deployment completes and the endpoint status shows <strong>InService</strong>, you can perform real-time inference using the new endpoint. To test the model, choose the <strong>Playground</strong> tab and input your prompt in the <strong>Chat</strong> mode.</p><p><img class="aligncenter wp-image-103015 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/16/2026-sagemaker-ai-custom-nova-3-deploy-1.jpg" alt="" width="2560" height="1950" /></p><p>You can also use the SageMaker AI SDK to create two resources: a SageMaker AI model object that references your Nova model artifacts, and an endpoint configuration that defines how the model will be deployed.</p><p>The following code creates a SageMaker AI model that references your Nova model artifacts:</p><pre class="lang-python"># Create a SageMaker AI model
import boto3

sagemaker = boto3.client('sagemaker')

model_response = sagemaker.create_model(
    ModelName='Nova-micro-ml-g5-12xlarge',
    PrimaryContainer={
        'Image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/nova-inference-repo:v1.0.0',
        'ModelDataSource': {
            'S3DataSource': {
                'S3Uri': 's3://your-bucket-name/path/to/model/artifacts/',
                'S3DataType': 'S3Prefix',
                'CompressionType': 'None'
            }
        },
        # Model parameters (environment variable values must be strings)
        'Environment': {
            'CONTEXT_LENGTH': '8000',
            'CONCURRENCY': '16',
            'DEFAULT_TEMPERATURE': '0.0',
            'DEFAULT_TOP_P': '1.0'
        }
    },
    ExecutionRoleArn=SAGEMAKER_EXECUTION_ROLE_ARN,
    EnableNetworkIsolation=True
)
print("Model created successfully!")</pre><p>Next, create an endpoint configuration that defines your deployment infrastructure and deploy your Nova model by creating a SageMaker AI real-time endpoint. This endpoint will host your model and provide a secure HTTPS endpoint for making inference requests.</p><pre class="lang-python"># Create Endpoint Configuration
production_variant = {
    'VariantName': 'primary',
    'ModelName': 'Nova-micro-ml-g5-12xlarge',
    'InitialInstanceCount': 1,
    'InstanceType': 'ml.g5.12xlarge',
}
config_response = sagemaker.create_endpoint_config(
    EndpointConfigName='Nova-micro-ml-g5-12xlarge-Config',
    # ProductionVariants expects a list of variant definitions
    ProductionVariants=[production_variant]
)
print("Endpoint configuration created successfully!")

# Deploy your Nova model
endpoint_response = sagemaker.create_endpoint(
    EndpointName='Nova-micro-ml-g5-12xlarge-endpoint',
    EndpointConfigName='Nova-micro-ml-g5-12xlarge-Config'
)
print("Endpoint creation initiated successfully!")

# Optionally block until the endpoint status is InService before invoking it
waiter = sagemaker.get_waiter('endpoint_in_service')
waiter.wait(EndpointName='Nova-micro-ml-g5-12xlarge-endpoint')
</pre><p>After the endpoint is created, you can send inference requests to generate predictions from your custom Nova model. Amazon SageMaker AI supports synchronous endpoints for real-time inference with streaming and non-streaming modes and asynchronous endpoints for batch processing.</p><p>For example, the following code sends a streaming chat completion request for text generation:</p><pre class="lang-python">import json

import boto3
from botocore.exceptions import ClientError

runtime_client = boto3.client('sagemaker-runtime')
ENDPOINT_NAME = 'Nova-micro-ml-g5-12xlarge-endpoint'

def invoke_nova_endpoint(request_body):
    """
    Invoke the Nova endpoint with automatic streaming detection.

    Args:
        request_body (dict): Request payload containing prompt and parameters
    Returns:
        dict: Response from the model (for non-streaming requests)
        None: For streaming requests (prints output directly)
    """
    body = json.dumps(request_body)
    is_streaming = request_body.get("stream", False)
    try:
        print(f"Invoking endpoint ({'streaming' if is_streaming else 'non-streaming'})...")
        if is_streaming:
            response = runtime_client.invoke_endpoint_with_response_stream(
                EndpointName=ENDPOINT_NAME,
                ContentType='application/json',
                Body=body
            )
            for event in response['Body']:
                if 'PayloadPart' in event and 'Bytes' in event['PayloadPart']:
                    print("Chunk:", event['PayloadPart']['Bytes'].decode())
        else:
            # Non-streaming inference
            response = runtime_client.invoke_endpoint(
                EndpointName=ENDPOINT_NAME,
                ContentType='application/json',
                Accept='application/json',
                Body=body
            )
            result = json.loads(response['Body'].read().decode('utf-8'))
            print("✅ Response received successfully")
            return result
    except ClientError as e:
        error = e.response['Error']
        print(f"❌ AWS Error: {error['Code']} - {error['Message']}")
    except Exception as e:
        print(f"❌ Unexpected error: {str(e)}")

# Streaming chat request with comprehensive parameters
streaming_request = {
    "messages": [
        {"role": "user", "content": "Compare our Q4 2025 actual spend against budget across all departments and highlight variances exceeding 10%"}
    ],
    "max_tokens": 512,
    "stream": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "logprobs": True,
    "top_logprobs": 2,
    "reasoning_effort": "low",  # Options: "low", "high"
    "stream_options": {"include_usage": True}
}
invoke_nova_endpoint(streaming_request)</pre><p>To use full code examples, visit <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/nova-model.html">Customizing Amazon Nova models on Amazon SageMaker AI</a>. To learn more about best practices for deploying and managing models, visit <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/best-practices.html">Best Practices for SageMaker AI</a>.</p><p><strong class="c6">Now available</strong><br />Amazon SageMaker Inference for custom Nova models is available today in the US East (N. Virginia) and US West (Oregon) AWS Regions. For Regional availability and a future roadmap, visit the <a class="c-link" href="https://builder.aws.com/build/capabilities/explore?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el" target="_blank" rel="noopener noreferrer" data-stringify-link="https://builder.aws.com/capabilities/" data-sk="tooltip_parent">AWS Capabilities by Region</a>.</p><p>The feature supports Nova Micro, Nova Lite, and Nova 2 Lite models with reasoning capabilities, running on EC2 G5, G6, and P5 instances with auto-scaling support. You pay only for the compute instances you use, with per-hour billing and no minimum commitments. 
For more information, visit the <a href="https://aws.amazon.com/sagemaker/ai/pricing/">Amazon SageMaker AI Pricing page</a>.</p><p>Give it a try in the <a href="https://console.aws.amazon.com/sagemaker?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker AI console</a> and send feedback to <a href="https://repost.aws/tags/TAT80swPyVRPKPcA0rsJYPuA/amazon-sagemaker?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for SageMaker</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="06298100-6aea-4142-918e-b54729592b91" data-title="Announcing Amazon SageMaker Inference for custom Amazon Nova models" data-url="https://aws.amazon.com/blogs/aws/announcing-amazon-sagemaker-inference-for-custom-amazon-nova-models/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/announcing-amazon-sagemaker-inference-for-custom-amazon-nova-models/"/>
    <updated>2026-02-16T22:25:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-m8azn-instances-new-open-weights-models-in-amazon-bedrock-and-more-february-16-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: Amazon EC2 M8azn instances, new open weights models in Amazon Bedrock, and more (February 16, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>I joined AWS in 2021, and since then I’ve watched the <a href="https://aws.amazon.com/ec2">Amazon Elastic Compute Cloud (Amazon EC2)</a> instance family grow at a pace that still surprises me. From AWS Graviton-powered instances to specialized accelerated computing options, it feels like every few months there’s a new instance type landing that pushes performance boundaries further. As of February 2026, AWS offers over 1,160 Amazon EC2 instance types, and that number keeps climbing.</p><p>This week’s opening news is a good example: The general availability of <a href="https://aws.amazon.com/about-aws/whats-new/2026/02/aws-m8azn-instances-generally-available/">Amazon EC2 M8azn instances</a>. These are general purpose, high-frequency, high-network instances powered by fifth generation AMD EPYC processors, offering the highest maximum CPU frequency in the cloud at 5 GHz. Compared to the previous generation M5zn instances, M8azn instances deliver up to 2x compute performance, 4.3x higher memory bandwidth, and a 10x larger L3 cache. They also provide up to 2x networking throughput and up to 3x <a href="https://aws.amazon.com/ebs">Amazon Elastic Block Store (Amazon EBS)</a> throughput compared with M5zn.</p><p><img class="alignnone size-full wp-image-103010" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/02/15/wir-feb16-2026-v3.png" alt="" width="3024" height="1692" /></p><p>Built on the <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a> using sixth generation Nitro Cards, M8azn instances target workloads such as real-time financial analytics, high-performance computing, high-frequency trading, CI/CD pipelines, gaming, and simulation modeling across automotive, aerospace, energy, and telecommunications. The instances feature a 4:1 ratio of memory to vCPU and are available in 9 sizes ranging from 2 to 96 vCPUs with up to 384 GiB of memory, including two bare metal variants. 
For more information, visit the <a href="https://aws.amazon.com/ec2/instance-types/m8a">Amazon EC2 M8azn instance page</a>.</p><p><strong>Last week’s launches</strong><br />Here are some of the other announcements from last week:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-adds-support-six-open-weights-models/">Amazon Bedrock adds support for six fully managed open weights models</a> – Amazon Bedrock now supports DeepSeek V3.2, MiniMax M2.1, GLM 4.7, GLM 4.7 Flash, Kimi K2.5, and Qwen3 Coder Next. These models span frontier reasoning and agentic coding workloads. DeepSeek V3.2 and Kimi K2.5 target reasoning and agentic intelligence, GLM 4.7 and MiniMax M2.1 support autonomous coding with large output windows, and Qwen3 Coder Next and GLM 4.7 Flash provide cost-efficient alternatives for production deployment. These models are powered by Project Mantle and provide out-of-the-box compatibility with OpenAI API specifications. With the launch, you can also use the new open weights models <a href="https://kiro.dev/blog/open-weight-models/">DeepSeek v3.2, MiniMax 2.1, and Qwen3 Coder Next in Kiro</a>, a spec-driven AI development tool.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-expands-aws-privatelink-support-openai-api-endpoints/">Amazon Bedrock expands support for AWS PrivateLink</a> – Amazon Bedrock now supports AWS PrivateLink for the <code>bedrock-mantle</code> endpoint, in addition to existing support for the <code>bedrock-runtime</code> endpoint. The bedrock-mantle endpoint is powered by Project Mantle, a distributed inference engine for large-scale machine learning model serving on Amazon Bedrock. Project Mantle provides serverless inference with quality of service controls, higher default customer quotas with automated capacity management, and out-of-the-box compatibility with OpenAI API specifications. AWS PrivateLink support for OpenAI API-compatible endpoints is available in 14 AWS Regions. To get started, visit the Amazon Bedrock console or the OpenAI API compatibility documentation.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-eks-auto-mode-enhanced-logging/">Amazon EKS Auto Mode announces enhanced logging for managed Kubernetes capabilities</a> – You can now configure log delivery sources using Amazon CloudWatch Vended Logs in Amazon EKS Auto Mode. This helps you collect logs from Auto Mode’s managed Kubernetes capabilities for compute autoscaling, block storage, load balancing, and pod networking. Each Auto Mode capability can be configured as a CloudWatch Vended Logs delivery source with built-in AWS authentication and authorization at a reduced price compared to standard CloudWatch Logs. You can deliver logs to CloudWatch Logs, Amazon S3, or Amazon Data Firehose destinations. This feature is available in all Regions where EKS Auto Mode is available.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-opensearch-serverless-supports-collection-groups/">Amazon OpenSearch Serverless now supports Collection Groups</a> – You can use new Collection Groups to share OpenSearch Compute Units (OCUs) across collections with different AWS Key Management Service (AWS KMS) keys. Collection Groups reduce overall OCU costs through a shared compute model while maintaining collection-level security and access controls. They also introduce the ability to specify minimum OCU allocations alongside maximum OCU limits, providing guaranteed baseline capacity at startup for latency-sensitive applications. Collection Groups are available in all Regions where Amazon OpenSearch Serverless is currently available.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/rds-aurora-backup-configuration-restoring-snapshots/">Amazon RDS now supports backup configuration when restoring snapshots</a> – You can view and modify the backup retention period and preferred backup window before and during snapshot restore operations. Previously, restored database instances and clusters inherited backup parameter values from snapshot metadata and could only be modified after restore was complete. You can now view backup settings as part of automated backups and snapshots, and specify or modify these values when restoring, eliminating the need for post-restoration modifications. This is available for all Amazon RDS database engines (MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, and Db2) and Amazon Aurora (MySQL-Compatible and PostgreSQL-Compatible editions) in all AWS commercial Regions and AWS GovCloud (US) Regions at no additional cost.</li>
</ul><p>For a full list of AWS announcements, be sure to keep an eye on the <a href="https://aws.amazon.com/new/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">What’s New with AWS</a> page.</p><p><strong>Upcoming AWS events</strong><br />Check your calendar and sign up for upcoming AWS events:</p><p><a href="https://aws.amazon.com/events/summits/?trk=ep_card_event_page&amp;awsf.location=*all&amp;refid=ep_card_event_page">AWS Summits</a> – Join AWS Summits in 2026, free in-person events where you can explore emerging cloud and AI technologies, learn best practices, and network with industry peers and experts. Upcoming Summits include <a href="https://aws.amazon.com/events/summits/paris/">Paris</a> (April 1), <a href="https://aws.amazon.com/events/summits/london/">London</a> (April 22), and <a href="https://aws.amazon.com/events/summits/">Bengaluru</a> (April 23–24).</p><p><a href="https://aws.amazon.com/uki/cloud-services/aws-events/ai-and-data-conference-2026/">AWS AI and Data Conference 2026</a> – A free, single-day in-person event on March 12 at the Lyrath Convention Centre in Ireland. The conference covers designing, training, and deploying agents with Amazon Bedrock, Amazon SageMaker, and QuickSight, integrating them with AWS data services, and applying governance practices to operate them at scale. The agenda includes strategic guidance and hands-on labs for architects, developers, and business leaders.</p><p><a href="https://aws.amazon.com/events/community-day/">AWS Community Days</a> – Community-led conferences where content is planned, sourced, and delivered by community leaders, featuring technical discussions, workshops, and hands-on labs. 
Upcoming events include <a href="https://awsahmedabad.community/">Ahmedabad</a> (February 28), <a href="https://www.awscommunityday.sk/">Slovakia</a> (March 11), and <a href="https://www.awsugpune.in/">Pune</a> (March 21).</p><p>Join the <a href="https://builder.aws.com/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">AWS Builder Center</a> to connect with builders, share solutions, and access content that supports your development. Browse here for upcoming <a href="https://aws.amazon.com/events/explore-aws-events/?refid=e61dee65-4ce8-4738-84db-75305c9cd4fe">AWS led in-person and virtual events</a> and <a href="https://builder.aws.com/connect/events?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">developer-focused events</a>.</p><p>That’s all for this week. Check back next Monday for another Weekly Roundup!</p><a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a><p><em>This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!</em></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="9f8f37a4-feea-4162-a9f4-36a5124962f1" data-title="AWS Weekly Roundup: Amazon EC2 M8azn instances, new open weights models in Amazon Bedrock, and more (February 16, 2026)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-m8azn-instances-new-open-weights-models-in-amazon-bedrock-and-more-february-16-2026/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
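Several of the Bedrock items above emphasize out-of-the-box compatibility with OpenAI API specifications. As a minimal sketch of what that compatibility means in practice (the model identifier below is an illustrative assumption, not a documented Bedrock model ID), a request to such an endpoint follows the OpenAI chat completions body shape:

```python
import json

def build_chat_request(model: str, user_text: str, stream: bool = False) -> str:
    """Build an OpenAI-style chat completions request body as a JSON string."""
    payload = {
        "model": model,  # hypothetical model ID, for illustration only
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": user_text},
        ],
        "stream": stream,  # True asks the endpoint for incremental chunks
    }
    return json.dumps(payload)

body = build_chat_request("deepseek-v3.2", "Summarize this week's AWS launches.")
print(body)
```

With `stream=True`, an OpenAI-compatible endpoint returns the response as incremental server-sent event chunks instead of a single JSON document; the request body shape is otherwise unchanged.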
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-m8azn-instances-new-open-weights-models-in-amazon-bedrock-and-more-february-16-2026/"/>
    <updated>2026-02-16T18:28:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-opus-4-6-in-amazon-bedrock-aws-builder-id-sign-in-with-apple-and-more-february-9-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: Claude Opus 4.6 in Amazon Bedrock, AWS Builder ID Sign in with Apple, and more (February 9, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Here are the notable launches and updates from last week that can help you build, scale, and innovate on AWS.</p><p><strong class="c6">Last week’s launches</strong><br />Here are the launches that got my attention this week.</p><p>Let’s start with news related to compute and networking infrastructure:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-c8id-m8id-r8id-instances/">Introducing Amazon EC2 C8id, M8id, and R8id instances:</a> These new Amazon EC2 C8id, M8id, and R8id instances are powered by custom Intel Xeon 6 processors. These instances offer up to 43% higher performance and 3.3x more memory bandwidth compared to previous generation instances.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/aws-network-firewall-new-price-reduction/">AWS Network Firewall announces new price reductions:</a> The service has added the hourly and data processing discounts on NAT Gateways that are service-chained with Network Firewall secondary endpoints. Additionally, AWS Network Firewall has removed additional data processing charges for Advanced Inspection, which enables Transport Layer Security (TLS) inspection of encrypted network traffic.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ecs-nlb-linear-canary-deployments/">Amazon ECS adds Network Load Balancer support for Linear and Canary deployments:</a> Applications that commonly use NLB, such as those requiring TCP/UDP-based connections, low latency, long-lived connections, or static IP addresses, can take advantage of managed, incremental traffic shifting natively from ECS when rolling out updates.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/aws-config-new-resource-types/">AWS Config now supports 30 new resource types:</a> These range across key services including Amazon EKS, Amazon Q, and AWS IoT. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/dynamodb-gt-multi-account/">Amazon DynamoDB global tables now support replication across multiple AWS accounts:</a> DynamoDB global tables are a fully managed, serverless, multi-Region, and multi-active database. With this new capability, you can replicate tables across AWS accounts and Regions to improve resiliency, isolate workloads at the account level, and apply distinct security and governance controls.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-rds-provides-enhanced-console-experience/">Amazon RDS now provides an enhanced console experience to connect to a database:</a> The new console experience provides ready-made code snippets for Java, Python, Node.js, and other programming languages as well as tools like the <code>psql</code> command line utility. These code snippets are automatically adjusted based on your database’s authentication settings. For example, if your cluster uses IAM authentication, the generated code snippets will use token-based authentication to connect to the database. The console experience also includes integrated CloudShell access, offering the ability to connect to your databases directly from within the RDS console.</li>
</ul><p>Then, I noticed three news items related to security and how you authenticate on AWS:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/aws-builder-id-sign-in-apple/">AWS Builder ID now supports Sign in with Apple:</a> AWS Builder ID, your profile for accessing AWS applications including AWS Builder Center, AWS Training and Certification, AWS re:Post, AWS Startups, and Kiro, now supports sign-in with Apple as a social login provider. This expansion of sign-in options builds on the existing sign-in with Google capability, providing Apple users with a streamlined way to access AWS resources without managing separate credentials on AWS.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/aws-sts-supports-validation-identity-provider-claims/">AWS STS now supports validation of select identity provider specific claims from Google, GitHub, CircleCI and OCI:</a> You can reference these custom claims as condition keys in IAM role trust policies and resource control policies, expanding your ability to implement fine-grained access control for federated identities and help you establish your data perimeters. This enhancement builds upon IAM’s existing OIDC federation capabilities, which allow you to grant temporary AWS credentials to users authenticated through external OIDC-compatible identity providers.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/02/console-displays-account-name-on-nav-bar/">AWS Management Console now displays Account Name on the Navigation bar for easier account identification:</a> You now have an easy way to identify your accounts at a glance. You can now quickly distinguish between accounts visually using the account name that appears in the navigation bar for all authorized users in that account.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-cloudfront-mutual-tls-for-origins/">Amazon CloudFront announces mutual TLS support for origins:</a> Now with origin mTLS support, you can implement a standardized, certificate-based authentication approach that eliminates operational burden. This enables organizations to enforce strict authentication for their proprietary content, ensuring that only verified CloudFront distributions can establish connections to backend infrastructure ranging from AWS origins and on-premises servers to third-party cloud providers and external CDNs.</li>
</ul><p>Finally, there is not a single week without news around AI:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2026/2/claude-opus-4.6-available-amazon-bedrock/">Claude Opus 4.6 now available in Amazon Bedrock:</a> Opus 4.6 is Anthropic’s most intelligent model to date and a premier model for coding, enterprise agents, and professional work. Claude Opus 4.6 brings advanced capabilities to Amazon Bedrock customers, including industry-leading performance for agentic tasks, complex coding projects, and enterprise-grade workflows that require deep reasoning and reliability.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/2/claude-opus-4.6-available-amazon-bedrock/">Structured outputs now available in Amazon Bedrock:</a> Amazon Bedrock now supports structured outputs, a capability that provides consistent, machine-readable responses from foundation models that adhere to your defined JSON schemas. Instead of prompting for valid JSON and adding extra checks in your application, you can specify the format you want and receive responses that match it—making production workflows more predictable and resilient.</li>
</ul><p><strong class="c6">Upcoming AWS events</strong><br />Check your calendars so that you can sign up for this upcoming event:</p><p><a href="https://aws-community.ro/">AWS Community Day Romania (April 23–24, 2026):</a> This community-led AWS event brings together developers, architects, entrepreneurs, and students for more than 10 professional sessions delivered by AWS Heroes, Solutions Architects, and industry experts. Attendees can expect expert-led technical talks, insights from speakers with global conference experience, and opportunities to connect during dedicated networking breaks, all hosted at a premium venue designed to support collaboration and community engagement.</p><p>If you’re looking for more ways to stay connected beyond this event, join the <a href="https://builder.aws.com/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">AWS Builder Center</a> to learn, build, and connect with builders in the AWS community.</p><p>Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">Weekly Roundup</a>.</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="19d4e3e5-7274-4193-85ce-5c099e1c8919" data-title="AWS Weekly Roundup: Claude Opus 4.6 in Amazon Bedrock, AWS Builder ID Sign in with Apple, and more (February 9, 2026)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-opus-4-6-in-amazon-bedrock-aws-builder-id-sign-in-with-apple-and-more-february-9-2026/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
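Structured outputs, described above, constrain model responses to a JSON schema you define. The exact Bedrock request syntax is not shown here; as a minimal, library-agnostic sketch of the consuming side (the schema and helper below are illustrative assumptions, not the Bedrock API), you can still defensively verify that a returned string matches the shape you expect:

```python
import json

# Illustrative schema: the keys and Python value types we expect every
# model response to contain. A real deployment would use a full JSON
# schema; this simplified map is enough to show the idea.
SCHEMA = {"title": str, "sentiment": str, "score": float}

def matches_schema(raw: str, schema: dict) -> bool:
    """Return True if raw parses as JSON with exactly the expected keys/types."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict) or set(data) != set(schema):
        return False
    # Note: this strict check treats an integer score as a mismatch.
    return all(isinstance(data[key], typ) for key, typ in schema.items())

good = '{"title": "Opus 4.6", "sentiment": "positive", "score": 0.97}'
bad = '{"title": "Opus 4.6"}'
print(matches_schema(good, SCHEMA))  # True
print(matches_schema(bad, SCHEMA))   # False
```

The point of the feature is that the model side now guarantees conformance, so a check like this becomes a cheap safety net rather than the retry loop it used to be.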
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-opus-4-6-in-amazon-bedrock-aws-builder-id-sign-in-with-apple-and-more-february-9-2026/"/>
    <updated>2026-02-09T21:42:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-ec2-c8id-m8id-and-r8id-instances-with-up-to-22-8-tb-local-nvme-storage-are-generally-available/</id>
    <title><![CDATA[Amazon EC2 C8id, M8id, and R8id instances with up to 22.8 TB local NVMe storage are generally available]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Last year, we launched the <a href="https://aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2)</a> <a href="https://aws.amazon.com/blogs/aws/introducing-new-compute-optimized-amazon-ec2-c8i-and-c8i-flex-instances/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">C8i instances</a>, <a href="https://aws.amazon.com/blogs/aws/new-general-purpose-amazon-ec2-m8i-and-m8i-flex-instances-are-now-available/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">M8i instances</a>, and <a href="https://aws.amazon.com/blogs/aws/best-performance-and-fastest-memory-with-the-new-amazon-ec2-r8i-and-r8i-flex-instances/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">R8i instances</a> powered by custom Intel Xeon 6 processors available only on AWS, with a sustained all-core 3.9 GHz turbo frequency. They deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud.</p><p>Today, we’re announcing new Amazon EC2 C8id, M8id, and R8id instances backed by up to 22.8 TB of NVMe-based SSD block-level instance storage physically connected to the host server. These instances offer 3 times more vCPUs, memory, and local storage compared to previous sixth-generation instances.</p><p>These instances deliver up to 43% higher compute performance and 3.3 times more memory bandwidth compared to previous sixth-generation instances. They also deliver up to 46% higher performance for I/O-intensive database workloads, and up to 30% faster query results for I/O-intensive real-time data analytics, compared to previous sixth-generation instances.</p><ul><li><strong>C8id instances</strong> are ideal for compute-intensive workloads that need access to high-speed, low-latency local storage, such as video encoding, image manipulation, and other forms of media processing.</li>
<li><strong>M8id instances</strong> are best for workloads that require a balance of compute and memory resources along with high-speed, low-latency local block storage, including data logging, media processing, and medium-sized data stores.</li>
<li><strong>R8id instances</strong> are designed for memory-intensive workloads such as large-scale SQL and NoSQL databases, in-memory databases, large-scale data analytics, and AI inference.</li>
</ul><p>C8id, M8id, and R8id instances now scale up to <strong>96xlarge</strong> (versus <strong>32xlarge</strong> sizes in the sixth generation) with up to 384 vCPUs, 3 TiB of memory, and 22.8 TB of local storage, making it easier to scale up applications and drive greater efficiencies. These instances also offer two bare metal sizes (<strong>metal-48xl</strong> and <strong>metal-96xl</strong>), allowing you to right-size your instances and deploy your most performance-sensitive workloads that benefit from direct access to physical resources.</p><p>The instances are available in 11 sizes per family, as well as two bare metal configurations each:</p><table class="c10"><thead><tr class="c7"><th class="c6">Instance Name</th>
<th class="c6">vCPUs</th>
<th class="c6">Memory (GiB) (C/M/R)</th>
<th class="c6">Local NVMe storage (GB)</th>
<th class="c6">Network bandwidth (Gbps)</th>
<th class="c6">EBS bandwidth (Gbps)</th>
</tr></thead><tbody><tr class="c9"><td class="c8"><strong>large</strong></td>
<td class="c8">2</td>
<td class="c8">4/8/16*</td>
<td class="c8">1 x 118</td>
<td class="c8">Up to 12.5</td>
<td class="c8">Up to 10</td>
</tr><tr class="c9"><td class="c8"><strong>xlarge</strong></td>
<td class="c8">4</td>
<td class="c8">8/16/32*</td>
<td class="c8">1 x 237</td>
<td class="c8">Up to 12.5</td>
<td class="c8">Up to 10</td>
</tr><tr class="c9"><td class="c8"><strong>2xlarge</strong></td>
<td class="c8">8</td>
<td class="c8">16/32/64*</td>
<td class="c8">1 x 474</td>
<td class="c8">Up to 15</td>
<td class="c8">Up to 10</td>
</tr><tr class="c9"><td class="c8"><strong>4xlarge</strong></td>
<td class="c8">16</td>
<td class="c8">32/64/128*</td>
<td class="c8">1 x 950</td>
<td class="c8">Up to 15</td>
<td class="c8">Up to 10</td>
</tr><tr class="c9"><td class="c8"><strong>8xlarge</strong></td>
<td class="c8">32</td>
<td class="c8">64/128/256*</td>
<td class="c8">1 x 1,900</td>
<td class="c8">15</td>
<td class="c8">10</td>
</tr><tr class="c9"><td class="c8"><strong>12xlarge</strong></td>
<td class="c8">48</td>
<td class="c8">96/192/384*</td>
<td class="c8">1 x 2,850</td>
<td class="c8">22.5</td>
<td class="c8">15</td>
</tr><tr class="c9"><td class="c8"><strong>16xlarge</strong></td>
<td class="c8">64</td>
<td class="c8">128/256/512*</td>
<td class="c8">1 x 3,800</td>
<td class="c8">30</td>
<td class="c8">20</td>
</tr><tr class="c9"><td class="c8"><strong>24xlarge</strong></td>
<td class="c8">96</td>
<td class="c8">192/384/768*</td>
<td class="c8">2 x 2,850</td>
<td class="c8">40</td>
<td class="c8">30</td>
</tr><tr class="c9"><td class="c8"><strong>32xlarge</strong></td>
<td class="c8">128</td>
<td class="c8">256/512/1024*</td>
<td class="c8">2 x 3,800</td>
<td class="c8">50</td>
<td class="c8">40</td>
</tr><tr class="c9"><td class="c8"><strong>48xlarge</strong></td>
<td class="c8">192</td>
<td class="c8">384/768/1536*</td>
<td class="c8">3 x 3,800</td>
<td class="c8">75</td>
<td class="c8">60</td>
</tr><tr class="c9"><td class="c8"><strong>96xlarge</strong></td>
<td class="c8">384</td>
<td class="c8">768/1536/3072*</td>
<td class="c8">6 x 3,800</td>
<td class="c8">100</td>
<td class="c8">80</td>
</tr><tr class="c9"><td class="c8"><strong>metal-48xl</strong></td>
<td class="c8">192</td>
<td class="c8">384/768/1536*</td>
<td class="c8">3 x 3,800</td>
<td class="c8">75</td>
<td class="c8">60</td>
</tr><tr class="c9"><td class="c8"><strong>metal-96xl</strong></td>
<td class="c8">384</td>
<td class="c8">768/1536/3072*</td>
<td class="c8">6 x 3,800</td>
<td class="c8">100</td>
<td class="c8">80</td>
</tr></tbody></table><p class="c11"><em>*Memory values are for C8id/M8id/R8id respectively.</em></p><p>These instances support the <a href="https://docs.aws.amazon.com/ebs/latest/userguide/instance-bandwidth-configuration.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Instance Bandwidth Configuration (IBC)</a> feature like other eighth-generation instance types, offering flexibility to allocate resources between network and <a href="https://aws.amazon.com/ebs/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Elastic Block Store (Amazon EBS)</a> bandwidth. You can scale network or EBS bandwidth by 25%, allocating resources optimally for each workload. These instances also use sixth-generation AWS Nitro cards offloading CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing performance and security for your workloads.</p><p>You can use any <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Machine Images (AMIs)</a> that include drivers for the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Elastic Network Adapter (ENA)</a> and NVMe to fully utilize the performance and capabilities. All current generation AWS Windows and Linux AMIs come with the AWS NVMe driver installed by default. 
If you use an AMI that does not have the AWS NVMe driver, you can manually install <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/aws-nvme-drivers.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS NVMe drivers</a>.</p><p>As I noted in <a href="https://aws.amazon.com/blogs/aws/new-amazon-ec2-m6id-and-c6id-instances-with-up-to-7-6-tb-local-nvme-storage/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">my previous blog post</a>, here are a couple of things to remind you about the local NVMe storage on these instances:</p><ul><li>You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (<code>/dev/nvme[0-26]n1</code> on Linux) after the guest operating system has booted.</li>
<li>Each local NVMe device is hardware encrypted using the <code>XTS-AES-256</code> block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.</li>
<li>Local NVMe devices have the same lifetime as the instance they are attached to and do not persist after the instance has been stopped or terminated.</li>
</ul><p>To learn more, visit <a href="https://docs.aws.amazon.com/ebs/latest/userguide/nvme-ebs-volumes.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EBS volumes and NVMe</a> in the Amazon EBS User Guide.</p><p><strong class="c12">Now available</strong><br />Amazon EC2 C8id, M8id, and R8id instances are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Regions</a>. R8id instances are additionally available in the Europe (Frankfurt) Region. For Regional availability and a future roadmap, search the instance type in the <strong>CloudFormation</strong> resources tab of <a href="https://builder.aws.com/build/capabilities/explore?tab=cfn-resources&amp;trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Capabilities by Region</a>.</p><p>You can purchase these instances as <a href="https://aws.amazon.com/ec2/pricing/on-demand/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">On-Demand Instances</a>, with <a href="https://aws.amazon.com/savingsplans/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Savings Plans</a>, or as <a href="https://aws.amazon.com/ec2/spot/pricing/?trk=trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Spot Instances</a>. These instances are also available as <a href="https://aws.amazon.com/ec2/pricing/dedicated-instances/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Dedicated Instances</a> and <a href="https://aws.amazon.com/ec2/dedicated-hosts/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Dedicated Hosts</a>. 
To learn more, visit the <a href="https://aws.amazon.com/ec2/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 Pricing page</a>.</p><p>Give C8id, M8id, and R8id instances a try in the <a href="https://console.aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 console</a>. For more information, visit the <a href="https://aws.amazon.com/ec2/instance-types/c8i/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">EC2 C8i instances</a>, <a href="https://aws.amazon.com/ec2/instance-types/m8i/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">M8i instances</a>, and <a href="https://aws.amazon.com/ec2/instance-types/r8i/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">R8i instances</a> pages and send feedback to <a href="https://repost.aws/tags/TAO-wqN9fYRoyrpdULLa5y7g/amazon-ec-2?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS re:Post for EC2</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="4eea21eb-5ed0-4cfb-9448-80e863e43f0b" data-title="Amazon EC2 C8id, M8id, and R8id instances with up to 22.8 TB local NVMe storage are generally available" data-url="https://aws.amazon.com/blogs/aws/amazon-ec2-c8id-m8id-and-r8id-instances-with-up-to-22-8-tb-local-nvme-storage-are-generally-available/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-ec2-c8id-m8id-and-r8id-instances-with-up-to-22-8-tb-local-nvme-storage-are-generally-available/"/>
    <updated>2026-02-04T23:31:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-multi-region-replication-for-aws-account-access-and-application-use/</id>
    <title><![CDATA[AWS IAM Identity Center now supports multi-Region replication for AWS account access and application use]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the general availability of <a href="https://aws.amazon.com/iam/identity-center/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS IAM Identity Center</a> multi-Region support to enable <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-accounts.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS account access</a> and <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/awsapps.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">managed application use</a> in additional <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Regions</a>.</p><p>With this feature, you can replicate your workforce identities, permission sets, and other metadata in your organization instance of IAM Identity Center connected to an external identity provider (IdP), such as Microsoft Entra ID and Okta, from its current primary Region to additional Regions for improved resiliency of AWS account access.</p><p>You can also deploy AWS managed applications in your preferred Regions, close to application users and datasets for improved user experience or to meet data residency requirements. Your applications deployed in additional Regions access replicated workforce identities locally for optimal performance and reliability.</p><p>When you replicate your workforce identities to an additional Region, your workforce gets an active AWS access portal endpoint in that Region. This means that in the unlikely event of an IAM Identity Center service disruption in its primary Region, your workforce can still access their AWS accounts through the AWS access portal in an additional Region using already provisioned permissions. 
You can continue to manage IAM Identity Center configurations from the primary Region, maintaining centralized control.</p><p><strong class="c6">Enable IAM Identity Center in multiple Regions</strong><br />To get started, confirm that the AWS managed applications you’re currently using support the <a href="https://docs.aws.amazon.com/kms/latest/cryptographic-details/basic-concepts.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">customer managed AWS Key Management Service (AWS KMS) key</a> you have enabled in IAM Identity Center. When we introduced <a href="https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-customer-managed-kms-keys-for-encryption-at-rest/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">this feature</a> in October 2025, Seb recommended using multi-Region AWS KMS keys unless your company policies restrict you to single-Region keys. Multi-Region keys provide consistent key material across Regions while maintaining independent key infrastructure in each Region.</p><p>Before replicating IAM Identity Center to an additional Region, you must first replicate the customer managed AWS KMS key to that Region and configure the replica key with the permissions required for IAM Identity Center operations. For instructions on creating multi-Region replica keys, refer to <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/identity-center-customer-managed-keys.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el#replicate-kms-key">Create multi-Region replica keys</a> in the AWS KMS Developer Guide.</p><p>Go to the <a href="https://console.aws.amazon.com/singlesignon/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">IAM Identity Center console</a> in the primary Region, for example, US East (N. Virginia), choose <strong>Settings</strong> in the left navigation pane, and select the <strong>Management</strong> tab. 
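The replica-key prerequisite described above maps to the AWS KMS <code>ReplicateKey</code> API. As a minimal sketch, this is roughly what the request looks like with boto3 — the key ARN, account ID, and Regions below are hypothetical examples, and your replica key still needs the key policy permissions that IAM Identity Center requires:

```python
# Sketch of the KMS ReplicateKey request to send before adding a Region in
# IAM Identity Center. Key ARN, account ID, and Regions are hypothetical.
replicate_request = {
    # The multi-Region primary key currently configured in IAM Identity Center.
    "KeyId": "arn:aws:kms:us-east-1:111122223333:key/mrk-EXAMPLE1111111111111111111111111111",
    # The additional Region you plan to add in the IAM Identity Center console.
    "ReplicaRegion": "eu-west-1",
}

# With boto3 this would be sent as (requires AWS credentials, so commented out):
#   import boto3
#   kms = boto3.client("kms", region_name="us-east-1")
#   kms.replicate_key(**replicate_request)

# Multi-Region key IDs carry the "mrk-" prefix, shared across all replicas.
assert replicate_request["KeyId"].split("/")[-1].startswith("mrk-")
```

Only multi-Region keys (IDs prefixed with `mrk-`) can be replicated this way; a single-Region key would have to be replaced with a multi-Region key first.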
Confirm that your configured encryption key is a multi-Region customer managed AWS KMS key. To add more Regions, choose <strong>Add Region</strong>.</p><p><img class="aligncenter wp-image-102879 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/29/2026-idc-multiRegion-1.png" alt="" width="1382" height="628" /></p><p>You can choose additional Regions in which to replicate IAM Identity Center from a list of the available Regions. When choosing an additional Region, consider your intended use cases, for example, data compliance or user experience.</p><p>If you want to run AWS managed applications that access datasets limited to a specific Region for compliance reasons, choose the Region where the datasets reside. If you plan to use the additional Region to deploy AWS applications, verify that the required <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/awsapps-that-work-with-identity-center.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">applications support</a> your chosen Region and deployment in additional Regions.</p><p><img class="aligncenter size-full wp-image-102821 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/16/2026-idc-multiRegion-2.png" alt="" width="1350" height="742" /></p><p>Choose <strong>Add Region</strong>. This starts the initial replication, whose duration depends on the size of your IAM Identity Center instance.</p><p><img class="aligncenter wp-image-102880 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/29/2026-idc-multiRegion-3.png" alt="" width="1140" height="271" /></p><p>After the replication is completed, your users can access their AWS accounts and applications in this new Region. 
When you choose <strong>View ACS URLs</strong>, you can view <a href="https://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-tech-overview-2.0.html">SAML</a> information, such as an Assertion Consumer Service (ACS) URL, about the primary and additional Regions.</p><p><strong class="c6">How your workforce can use an additional Region</strong><br />IAM Identity Center supports SAML single sign-on with external IdPs, such as Microsoft Entra ID and Okta. Upon authentication in the IdP, the user is redirected to the AWS access portal. To enable the user to be redirected to the AWS access portal in the newly added Region, you need to add the additional Region’s ACS URL to the IdP configuration.</p><p>The following screenshots show you how to do this in the Okta admin console:</p><p><img class="aligncenter wp-image-102861 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/22/2026-idc-multiRegion-4-1.png" alt="" width="2132" height="1057" /></p><p>Then, you can create a bookmark application in your identity provider for users to discover the additional Region. This bookmark app functions like a browser bookmark and contains only the URL to the AWS access portal in the additional Region.</p><p><img class="aligncenter wp-image-102860 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/22/2026-idc-multiRegion-4-1-1.png" alt="" width="1970" height="888" /></p><p>You can also deploy AWS managed applications in additional Regions using your existing deployment workflows. 
Your users can access applications or accounts using the existing access methods, such as the <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/multi-region-workforce-access.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS access portal</a>, an application link, or the <a href="https://aws.amazon.com/cli/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a>.</p><p>To learn more about which AWS managed applications support deployment in additional Regions, visit the <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/awsapps-that-work-with-identity-center.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">IAM Identity Center User Guide</a>.</p><p><strong class="c6">Things to know</strong><br />Here are some key considerations for this feature:</p><ul><li><strong>Prerequisites</strong> – To take advantage of this feature at launch, you must be using an organization instance of IAM Identity Center connected to an external IdP. Also, the primary and additional Regions must be Regions that are enabled by default in an AWS account. Account instances of IAM Identity Center and the other two identity sources (Microsoft Active Directory and the IAM Identity Center directory) are not currently supported.</li>
<li><strong>Operation</strong> – The primary Region remains the central place for managing workforce identities, account access permissions, external IdP, and other configurations. You can use the IAM Identity Center console in additional Regions with a limited feature set. Most operations are read-only, except for application management and user session revocation.</li>
<li><strong>Monitoring</strong> – All workforce actions are emitted in <a href="https://aws.amazon.com/cloudtrail/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS CloudTrail</a> in the Region where the action was performed. This feature enhances account access continuity. You can set up <a href="https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/break-glass-access.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">break-glass access</a> for privileged users to access AWS if the external IdP has a service disruption.</li>
</ul><p><strong class="c6">Now available<br /></strong> AWS IAM Identity Center multi-Region support is now available in the <a href="https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el#manage-acct-regions-regional-availability">17 enabled-by-default commercial AWS Regions</a>. For Regional availability and a future roadmap, visit <a href="https://builder.aws.com/build/capabilities/explore?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el" target="_blank" rel="noopener noreferrer">AWS Capabilities by Region</a>. You can use this feature at no additional cost. Standard <a href="https://aws.amazon.com/kms/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS KMS charges</a> apply for storing and using customer managed keys.</p><p>Give it a try in the <a href="https://console.aws.amazon.com/singlesignon/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">IAM Identity Center console</a>. 
To learn more, visit the <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/multi-region-iam-identity-center.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">IAM Identity Center User Guide</a> and send feedback to <a href="https://repost.aws/tags/TAJNFEvp8UQUaLplKZtOsAaw/aws-iam-identity-center?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS re:Post for Identity Center</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="62c696c4-2f35-4458-a3e5-71372e6dba58" data-title="AWS IAM Identity Center now supports multi-Region replication for AWS account access and application use" data-url="https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-multi-region-replication-for-aws-account-access-and-application-use/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-multi-region-replication-for-aws-account-access-and-application-use/"/>
    <updated>2026-02-03T20:13:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-bedrock-agent-workflows-amazon-sagemaker-private-connectivity-and-more-february-2-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: Amazon Bedrock agent workflows, Amazon SageMaker private connectivity, and more (February 2, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/31/Screenshot-2026-01-31-at-19.17.02.png"><img class="wp-image-102903 size-medium alignright" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/31/Screenshot-2026-01-31-at-19.17.02-300x213.png" alt="" width="300" height="213" /></a>Over the past week, we passed <a href="https://en.wikipedia.org/wiki/Laba_Festival">Laba festival</a>, a traditional marker in the Chinese calendar that signals the final stretch leading up to the Lunar New Year. For many in China, it’s a moment associated with reflection and preparation, wrapping up what the year has carried, and turning attention toward what lies ahead.</p><p>Looking forward, next week also brings <a href="https://en.wikipedia.org/wiki/Lichun">Lichun</a>, the beginning of spring and the first of the 24 solar terms. In Chinese tradition, spring is often seen as the season when growth begins and new cycles take shape. There’s a common saying that “a year’s plans begin in spring,” capturing the idea that this is a time to set one’s direction and start fresh.</p><p><strong class="c6">Last week’s launches</strong><br />Here are the launches that got my attention this week:</p><ul><li><a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> enhances support for agent workflows with server-side tools and extended prompt caching – Amazon Bedrock introduced two updates that improve how developers build and operate AI agents. <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-bedrock-server-side-custom-tools-responses-api/">The Responses API now supports server-side tool use</a>, so agents can perform actions such as web search, code execution, and database updates within AWS security boundaries. 
<a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-bedrock-one-hour-duration-prompt-caching/">Bedrock also adds a 1-hour time-to-live (TTL) option for prompt caching</a>, which helps improve performance and reduce the cost for long-running, multi-turn agent workflows. Server-side tools are available with OpenAI GPT OSS 20B and 120B models, and the 1-hour prompt caching TTL is generally available for select Claude models by Anthropic in Amazon Bedrock.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-sagemaker-unified-studio-aws-privatelink/">Amazon SageMaker Unified Studio adds private VPC connectivity with AWS PrivateLink</a> – <a href="https://aws.amazon.com/sagemaker/unified-studio/">Amazon SageMaker Unified Studio</a> now supports AWS PrivateLink, providing private connectivity between your VPC and SageMaker Unified Studio without routing customer data over the public internet. With SageMaker service endpoints onboarded into a VPC, data traffic remains within the AWS network and is governed by IAM policies, supporting stricter security and compliance requirements.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/change-the-server-side-encryption-type-of-s3-objects/">Amazon S3 adds support for changing object encryption without data movement</a> – <a href="https://aws.amazon.com/s3/">Amazon S3</a> now supports changing the server-side encryption type of existing encrypted objects without moving or re-uploading data. Using the <code>UpdateObjectEncryption</code> API, you can switch from SSE-S3 to SSE-KMS, rotate customer managed AWS Key Management Service (AWS KMS) keys, or standardize encryption across buckets at scale with S3 Batch Operations while preserving object properties and lifecycle eligibility.</li>
<li><a href="https://aws.amazon.com/blogs/database/introducing-pre-warming-for-amazon-keyspaces-tables/">Amazon Keyspaces introduces table pre-warming for predictable high-throughput workloads</a> – Amazon Keyspaces (for Apache Cassandra) now supports table pre-warming, which helps you proactively set warm throughput levels so tables can handle high read and write traffic instantly without cold-start delays. Pre-warming helps reduce throttling during sudden traffic spikes, such as product launches or sales events, and works with both on-demand and provisioned capacity modes, including multi-Region tables. The feature supports consistent, low-latency performance while giving you more control over throughput readiness.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-dynamodb-global-tables-with-mrsc-fis/">Amazon DynamoDB MRSC global tables integrate with AWS Fault Injection Service</a> – <a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a> multi-Region strong consistency (MRSC) global tables now integrate with AWS Fault Injection Service. With this integration, you can simulate Regional failures, test replication behavior, and validate application resiliency for strongly consistent, multi-Region workloads.</li>
</ul><p><strong class="c6">Additional updates</strong><br />Here are some additional projects, blog posts, and news items that I found interesting:</p><ul><li><a href="https://aws.amazon.com/blogs/networking-and-content-delivery/building-zero-trust-access-across-multi-account-aws-environments/">Building zero-trust access across multi-account AWS environments with AWS Verified Access</a> – This post walks through how to implement AWS Verified Access in a centralized, shared-services architecture. It shows how to integrate with AWS IAM Identity Center and AWS Resource Access Manager (AWS RAM) to apply zero trust access controls at the application layer and reduce operational overhead across multi-account AWS environments.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-eventbridge-increases-event-payload-size-256-kb-1-mb/">Amazon EventBridge increases event payload size to 1 MB</a> – Amazon EventBridge now supports event payloads up to 1 MB, an increase from the previous 256 KB limit. This update helps event-driven architectures carry richer context in a single event, including complex JSON structures, telemetry data, and machine learning (ML) or generative AI outputs, without splitting payloads or relying on external storage.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/01/aws-announces-deployment-agent-sops-in-aws-mcp-server-preview/">AWS MCP Server adds deployment agent SOPs (preview)</a> – AWS introduced deployment standard operating procedures (SOPs) that let AI agents deploy web applications to AWS from a single natural language prompt in MCP-compatible integrated development environments (IDEs) and command line interfaces (CLIs) such as Kiro, Cursor, and Claude Code. The agent generates AWS Cloud Development Kit (AWS CDK) infrastructure, deploys AWS CloudFormation stacks, and sets up continuous integration and continuous delivery (CI/CD) workflows following AWS best practices. The preview supports frameworks including React, Vue.js, Angular, and Next.js.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/aws-network-firewall-web-category-based-filtering/">AWS Network Firewall adds generative AI traffic visibility with web category filtering</a> – AWS Network Firewall now provides visibility into generative AI application traffic through predefined web categories. You can use these categories directly in firewall rules to govern access to generative AI tools and other web services. When combined with TLS inspection, category-based filtering can be applied at the full URL level.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/aws-Lambda-observability-for-kafka-esm/">AWS Lambda adds enhanced observability for Kafka event source mappings</a> – <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> introduced enhanced observability for Kafka event source mappings, providing Amazon CloudWatch Logs and metrics to monitor event polling configuration, scaling behavior, and event processing state. The update improves visibility into Kafka-based Lambda workloads, helping teams diagnose configuration issues, permission errors, and function failures more efficiently. The capability supports both Amazon Managed Streaming for Apache Kafka (Amazon MSK) and self-managed Apache Kafka event sources.</li>
<li><a href="https://aws.amazon.com/blogs/devops/aws-cloudformation-2025-year-in-review/">AWS CloudFormation 2025 year in review</a> – This year-in-review post highlights CloudFormation updates delivered throughout 2025, with a focus on early validation, safer deployments, and improved developer workflows. It covers enhancements such as improved troubleshooting, drift-aware change sets, stack refactoring, StackSets updates, and new IDE and AI-assisted tooling, including the CloudFormation language server and the Infrastructure as Code (IaC) MCP server.</li>
</ul><p><strong class="c6">Upcoming AWS events</strong><br />Check your calendars so that you can sign up for this upcoming event:</p><p><a href="https://aws-community.ro/">AWS Community Day Romania (April 23–24, 2026)</a> – This community-led AWS event brings together developers, architects, entrepreneurs, and students for more than 10 professional sessions delivered by AWS Heroes, Solutions Architects, and industry experts. Attendees can expect expert-led technical talks, insights from speakers with global conference experience, and opportunities to connect during dedicated networking breaks, all hosted at a premium venue designed to support collaboration and community engagement.</p><p>If you’re looking for more ways to stay connected beyond this event, join the <a href="https://builder.aws.com/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">AWS Builder Center</a> to learn, build, and connect with builders in the AWS community.</p><p>Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">Weekly Roundup</a>.</p><p>–<a href="http://www.linkedin.com/in/zhengyubin714">betty</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="0d928235-4ce9-4b67-9230-7a831da6c5b6" data-title="AWS Weekly Roundup: Amazon Bedrock agent workflows, Amazon SageMaker private connectivity, and more (February 2, 2026)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-bedrock-agent-workflows-amazon-sagemaker-private-connectivity-and-more-february-2-2026/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
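A quick note on the EventBridge payload increase covered in this roundup: the new limit can be checked client-side before calling <code>PutEvents</code>. This is a minimal sketch under one stated assumption — EventBridge counts the whole serialized entry (detail, detail-type, source, and so on), while this check measures only the UTF-8 encoded JSON of the detail payload, so treat it as a lower bound:

```python
import json

# New per-entry cap for Amazon EventBridge (previously 256 KB).
EVENTBRIDGE_MAX_ENTRY_BYTES = 1_048_576  # 1 MB

def entry_size_ok(detail: dict) -> bool:
    """Approximate pre-flight size check for a PutEvents entry.

    Measures only the serialized detail payload, not the other entry
    fields EventBridge also counts, so this is a lower-bound estimate.
    """
    return len(json.dumps(detail).encode("utf-8")) <= EVENTBRIDGE_MAX_ENTRY_BYTES

# A ~512 KB telemetry payload now fits in a single event.
print(entry_size_ok({"telemetry": "x" * 512_000}))  # True
```

Payloads that previously had to be split or offloaded to Amazon S3 pointers can now often travel in one event, but oversized entries are still rejected, so a pre-flight check like this remains useful.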
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-bedrock-agent-workflows-amazon-sagemaker-private-connectivity-and-more-february-2-2026/"/>
    <updated>2026-02-02T18:19:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-g7e-instances-with-nvidia-blackwell-gpus-january-26-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: Amazon EC2 G7e instances with NVIDIA Blackwell GPUs (January 26, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Hey! It’s my first post for 2026, and I’m writing to you while watching our driveway getting dug out. I hope wherever you are you are safe and warm and your data is still flowing!</p><p><img class="alignnone size-large wp-image-102867" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/26/IMG_0652-1024x676.jpg" alt="Our driveway getting snow plowed" width="1024" height="676" /></p><p>This week brings exciting news for customers running GPU-intensive workloads, with the launch of our newest graphics and AI inference instances powered by NVIDIA’s latest Blackwell architecture. Along with several service enhancements and regional expansions, this week’s updates continue to expand the capabilities available to AWS customers.</p><p><strong>Last week’s launches</strong></p><p><strong><a href="https://aws.amazon.com/blogs/aws/announcing-amazon-ec2-g7e-instances-accelerated-by-nvidia-rtx-pro-6000-blackwell-server-edition-gpus/">Amazon EC2 G7e instances are now generally available</a></strong> — The new G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs deliver up to 2.3 times better inference performance compared to G6e instances. With two times the GPU memory and support for up to 8 GPUs providing 768 GB of total GPU memory, these instances enable running medium-sized models of up to 70B parameters with FP8 precision on a single GPU. G7e instances are ideal for generative AI inference, spatial computing, and scientific computing workloads. Available now in US East (N. 
Virginia) and US East (Ohio).</p><p><strong>Additional updates</strong></p><p>I thought these projects, blog posts, and news items were also interesting:</p><p><strong><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-corretto-january-2026-quarterly-updates/">Amazon Corretto January 2026 Quarterly Updates</a></strong> — AWS released quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) versions of OpenJDK. Corretto 25.0.2, 21.0.10, 17.0.18, 11.0.30, and 8u482 are now available, ensuring Java developers have access to the latest security patches and performance improvements.</p><p><strong><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-ecr-cross-repository-layer-sharing/">Amazon ECR now supports cross-repository layer sharing</a></strong> — Amazon Elastic Container Registry now enables you to share common image layers across repositories through blob mounting. This feature helps you achieve faster image pushes by reusing existing layers and reduce storage costs by storing common layers once and referencing them across repositories.</p><p><strong><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-cloudwatch-database-insights-on-demand-analysis-available-additional-regions/">Amazon CloudWatch Database Insights expands to four additional regions</a></strong> — CloudWatch Database Insights on-demand analysis is now available in Asia Pacific (New Zealand), Asia Pacific (Taipei), Asia Pacific (Thailand), and Mexico (Central). 
This feature uses machine learning to help identify performance bottlenecks and provides specific remediation advice.</p><p><strong><a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-connect-conditional-logic-real-time-updates-step-by-step-guides/">Amazon Connect adds conditional logic and real-time updates to Step-by-Step Guides</a></strong> — Amazon Connect Step-by-Step Guides now enables managers to build dynamic guided experiences that adapt based on user interactions. Managers can configure conditional user interfaces with dropdown menus that show or hide fields, change default values, or adjust required fields based on prior inputs. The feature also supports automatic data refresh from Connect resources, ensuring agents always work with current information.</p><p><strong>Upcoming AWS events</strong></p><p>Keep a look out and be sure to sign up for these upcoming events:</p><p><strong><a href="https://aws.amazon.com/best-of-reinvent/">Best of AWS re:Invent</a> (January 28-29, Virtual)</strong> — Join us for this free virtual event bringing you the most impactful announcements and top sessions from AWS re:Invent. AWS VP and Chief Evangelist Jeff Barr will share highlights during the opening session. Sessions run January 28 at 9:00 AM PT for AMER, and January 29 at 9:00 AM SGT for APJ and 9:00 AM CET for EMEA. Register to access curated technical learning, strategic insights from AWS leaders, and live Q&amp;A with AWS experts.</p><p><strong><a href="https://awsahmedabad.community/">AWS Community Day Ahmedabad</a> (February 28, 2026, Ahmedabad, India)</strong> — The 11th edition of this community-driven AWS conference brings together cloud professionals, developers, architects, and students for expert-led technical sessions, real-world use cases, tech expo booths with live demos, and networking opportunities. 
This free event includes breakfast, lunch, and exclusive swag.</p><p>Join the <a href="https://aws.amazon.com/builders">AWS Builder Center</a> to learn, build, and connect with builders in the AWS community. Browse for upcoming in-person and virtual developer-focused events in your area.</p><hr /><p>That’s all for this week. Check back next Monday for another Weekly Roundup!</p><p>~ micah</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="8fe51a14-6e7b-45a6-8012-5567601665db" data-title="AWS Weekly Roundup: Amazon EC2 G7e instances with NVIDIA Blackwell GPUs (January 26, 2026)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-g7e-instances-with-nvidia-blackwell-gpus-january-26-2026/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-g7e-instances-with-nvidia-blackwell-gpus-january-26-2026/"/>
    <updated>2026-01-26T17:25:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/announcing-amazon-ec2-g7e-instances-accelerated-by-nvidia-rtx-pro-6000-blackwell-server-edition-gpus/</id>
    <title><![CDATA[Announcing Amazon EC2 G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing the general availability of <a href="https://aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2)</a> G7e instances that deliver cost-effective performance for generative AI inference workloads and the highest performance for graphics workloads.</p><p>G7e instances are accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and are well suited for a broad range of GPU-enabled workloads, including spatial computing and scientific computing. G7e instances deliver up to 2.3 times the inference performance of <a href="https://aws.amazon.com/ec2/instance-types/g6e/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">G6e instances</a>.</p><p>Here are the key improvements over the previous generation:</p><ul><li><strong>NVIDIA RTX PRO 6000 Blackwell GPUs</strong> — NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs offer two times the GPU memory and 1.85 times the GPU memory bandwidth compared to G6e instances. By using the higher GPU memory offered by G7e instances, you can run medium-sized models of up to 70B parameters with FP8 precision on a single GPU.</li>
<li><strong>NVIDIA GPUDirect P2P</strong> — For models that are too large to fit into the memory of a single GPU, you can split the model or computations across multiple GPUs. G7e instances reduce the latency of your multi-GPU workloads with support for NVIDIA GPUDirect P2P, which enables direct communication between GPUs over the PCIe interconnect. These instances offer the lowest peer-to-peer latency for GPUs on the same PCIe switch. Additionally, G7e instances offer up to four times the inter-GPU bandwidth compared to the L40S GPUs featured in G6e instances, boosting the performance of multi-GPU workloads. These improvements mean you can run inference for larger models across multiple GPUs, with up to 768 GB of GPU memory in a single node.</li>
<li><strong>Networking</strong> — G7e instances offer four times the networking bandwidth compared to G6e instances, so you can use the instances for small-scale multi-node workloads. Additionally, multi-GPU G7e instances support NVIDIA GPUDirect Remote Direct Memory Access (RDMA) with <a href="https://aws.amazon.com/hpc/efa/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Elastic Fabric Adapter (EFA)</a>, which reduces the latency of remote GPU-to-GPU communication for multi-node workloads. These instance sizes also support NVIDIA GPUDirect Storage with <a href="https://aws.amazon.com/fsx/lustre/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon FSx for Lustre</a>, which increases throughput to the instances by up to 1.2 Tbps compared to G6e instances so you can load your models quickly.</li>
</ul><p><strong class="c6">EC2 G7e specifications</strong><br />G7e instances feature up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs with up to 768 GB of total GPU memory (96 GB of memory per GPU) and Intel Emerald Rapids processors. They also support up to 192 vCPUs, up to 1,600 Gbps of network bandwidth, up to 2,048 GiB of system memory, and up to 15.2 TB of local NVMe SSD storage.</p><p>Here are the specs:</p><table class="c11"><tbody><tr class="c9"><td class="c7"><strong>Instance name<br /></strong></td>
<td class="c7"><strong> GPUs</strong></td>
<td class="c8"><strong>GPU memory (GB)</strong></td>
<td class="c8"><strong>vCPUs</strong></td>
<td class="c8"><strong>Memory (GiB)</strong></td>
<td class="c8"><strong>Storage (TB)</strong></td>
<td class="c8"><strong>EBS bandwidth (Gbps)</strong></td>
<td class="c8"><strong>Network bandwidth (Gbps)</strong></td>
</tr><tr class="c10"><td class="c7"><strong>g7e.2xlarge</strong></td>
<td class="c8">1</td>
<td class="c8">96</td>
<td class="c8">8</td>
<td class="c8">64</td>
<td class="c8">1.9 x 1</td>
<td class="c8">Up to 5</td>
<td class="c8">50</td>
</tr><tr class="c10"><td class="c7"><strong>g7e.4xlarge</strong></td>
<td class="c8">1</td>
<td class="c8">96</td>
<td class="c8">16</td>
<td class="c8">128</td>
<td class="c8">1.9 x 1</td>
<td class="c8">8</td>
<td class="c8">50</td>
</tr><tr class="c10"><td class="c7"><strong>g7e.8xlarge</strong></td>
<td class="c8">1</td>
<td class="c8">96</td>
<td class="c8">32</td>
<td class="c8">256</td>
<td class="c8">1.9 x 1</td>
<td class="c8">16</td>
<td class="c8">100</td>
</tr><tr class="c10"><td class="c7"><strong>g7e.12xlarge</strong></td>
<td class="c8">2</td>
<td class="c8">192</td>
<td class="c8">48</td>
<td class="c8">512</td>
<td class="c8">3.8 x 1</td>
<td class="c8">25</td>
<td class="c8">400</td>
</tr><tr class="c10"><td class="c7"><strong>g7e.24xlarge</strong></td>
<td class="c8">4</td>
<td class="c8">384</td>
<td class="c8">96</td>
<td class="c8">1024</td>
<td class="c8">3.8 x 2</td>
<td class="c8">50</td>
<td class="c8">800</td>
</tr><tr><td class="c7"><strong>g7e.48xlarge</strong></td>
<td class="c8">8</td>
<td class="c8">768</td>
<td class="c8">192</td>
<td class="c8">2048</td>
<td class="c8">3.8 x 4</td>
<td class="c8">100</td>
<td class="c8">1600</td>
</tr></tbody></table><p>To get started with G7e instances, you can use the <a href="https://aws.amazon.com/ai/machine-learning/amis/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Deep Learning AMIs (DLAMI)</a> for your machine learning (ML) workloads. To run instances, you can use the <a href="https://console.aws.amazon.com/ec2?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a>, or <a href="http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS SDKs</a>. For a managed experience, you can use G7e instances with <a href="https://aws.amazon.com/ecs/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Elastic Container Service (Amazon ECS)</a> or <a href="https://aws.amazon.com/eks/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Elastic Kubernetes Service (Amazon EKS)</a>. Support for <a href="https://aws.amazon.com/sagemaker-ai/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon SageMaker AI</a> is also coming soon.</p><p><strong class="c6">Now available</strong><br />Amazon EC2 G7e instances are available today in the US East (N. Virginia) and US East (Ohio) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Regions</a>. 
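The single-GPU claim above (a 70B-parameter model at FP8 precision on one 96 GB GPU) is easy to sanity-check. Here is a minimal back-of-the-envelope sketch; the 1 byte per FP8 parameter and 10% runtime overhead figures are our assumptions, and KV cache and activation memory are ignored, so real deployments need extra headroom:

```python
GPU_MEM_GB = 96  # memory per RTX PRO 6000 Blackwell GPU on G7e


def weights_fit(params_billions, n_gpus=1, overhead=0.10):
    """Estimate the GB needed for FP8 weights (1 byte/parameter, assumed)
    and report whether they fit in the given number of G7e GPUs."""
    needed_gb = params_billions * (1 + overhead)
    return needed_gb, needed_gb <= GPU_MEM_GB * n_gpus


# 70B parameters at FP8 fit comfortably on a single 96 GB GPU, and much
# larger models fit across all 8 GPUs (768 GB total) of a g7e.48xlarge.
print(weights_fit(70))
print(weights_fit(400, n_gpus=8))
```

By the same estimate, a 100B-parameter FP8 model would not fit on one GPU, which is where the multi-GPU GPUDirect P2P path described earlier comes in.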
For Regional availability and a future roadmap, search the instance type in the <strong>CloudFormation</strong> resources tab of <a href="https://builder.aws.com/build/capabilities/explore?tab=cfn-resources&amp;trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Capabilities by Region</a>.</p><p>The instances can be purchased as <a href="https://aws.amazon.com/ec2/pricing/on-demand/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">On-Demand Instances</a>, <a href="https://aws.amazon.com/savingsplans/?trk=cc9e0036-98c5-4fa8-8df0-5281f75284ca&amp;sc_channel=el">Savings Plans</a>, and <a href="https://aws.amazon.com/ec2/spot/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Spot Instances</a>. G7e instances are also available as <a href="https://aws.amazon.com/ec2/pricing/dedicated-instances/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Dedicated Instances</a> and on <a href="https://aws.amazon.com/ec2/dedicated-hosts/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Dedicated Hosts</a>. To learn more, visit the <a href="https://aws.amazon.com/ec2/pricing">Amazon EC2 Pricing page</a>.</p><p>Give G7e instances a try in the <a href="https://console.aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 console</a>. 
To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/g7e/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 G7e instances page</a> and send feedback to <a href="https://repost.aws/tags/TAO-wqN9fYRoyrpdULLa5y7g/amazon-ec-2?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS re:Post for EC2</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/announcing-amazon-ec2-g7e-instances-accelerated-by-nvidia-rtx-pro-6000-blackwell-server-edition-gpus/"/>
    <updated>2026-01-20T22:22:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-kiro-cli-latest-features-aws-european-sovereign-cloud-ec2-x8i-instances-and-more-january-19-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: Kiro CLI latest features, AWS European Sovereign Cloud, EC2 X8i instances, and more (January 19, 2026)]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>At the end of 2025, I was happy to take a long break to enjoy the incredible summers that the southern hemisphere provides. I’m back and writing my first post of 2026, which also happens to be my last post for the AWS News Blog (more on this later).</p><p>The AWS community is starting the year strong with various AWS re:Invent re:Caps being hosted around the globe, and some communities are already hosting their <a href="https://aws.amazon.com/events/community-day/?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">AWS Community Day events</a>; <a href="https://luma.com/gr8eck5u">AWS Community Day Tel Aviv 2026</a> was hosted last week.</p><table><tbody><tr><td style="width: 50%;"><img class="aligncenter size-large wp-image-102837" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/19/IMG-20260116-WA0007-1024x682.jpg" alt="" width="1024" height="682" /></td>
<td><img class="aligncenter size-large wp-image-102838" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/19/IMG-20260116-WA0006-1024x682.jpg" alt="" width="1024" height="682" /></td>
</tr></tbody></table><p><strong>Last week’s launches</strong><br />Here are last week’s launches that caught my attention:</p><ul><li><a href="https://kiro.dev/changelog/cli/1-24/?sc_channel=sm&amp;sc_publisher=LINKEDIN&amp;sc_country=global&amp;sc_geo=GLOBAL&amp;sc_outcome=awareness">Kiro CLI latest features</a> – Kiro CLI now has granular controls for web fetch URLs, keyboard shortcuts for your custom agents, enhanced diff views, and much more. With these enhancements, you can now use allowlists or blocklists to restrict which URLs the agent can access, ensure a frictionless experience when working with multiple specialized agents in a single session, to name a few.</li>
<li><a href="https://aws.amazon.com/blogs/aws/opening-the-aws-european-sovereign-cloud/">AWS European Sovereign Cloud</a> – Following <a href="https://aws.amazon.com/blogs/aws/in-the-works-aws-european-sovereign-cloud/">an announcement in 2023 of plans to build a new, independent cloud infrastructure</a>, last week we announced the general availability of the AWS European Sovereign Cloud to all customers. The cloud is ready to meet the most stringent sovereignty requirements of European customers with a comprehensive set of AWS services.</li>
<li><a href="https://aws.amazon.com/blogs/aws/amazon-ec2-x8i-instances-powered-by-custom-intel-xeon-6-processors-are-generally-available-for-memory-intensive-workloads/">Amazon EC2 X8i instances</a> – <a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-ec2-x8i-instances-preview/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Previously launched in preview at AWS re:Invent 2025</a>, last week we announced the general availability of new memory-optimized Amazon Elastic Compute Cloud (Amazon EC2) X8i instances. These instances are powered by custom Intel Xeon 6 processors with a sustained all-core turbo frequency of 3.9 GHz, available only on AWS. These SAP certified instances deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud.</li>
</ul><p><strong>Additional updates</strong><br />These projects, blog posts, and news articles also caught my attention:</p><ul><li><a href="https://www.linkedin.com/feed/update/urn:li:activity:7416901230293102592/">5 core features in Amazon Quick Suite</a> – AWS VP Agentic AI Swami Sivasubramanian talks about how he uses Amazon Quick Suite for just about everything. In October 2025 we <a href="https://aws.amazon.com/blogs/aws/reimagine-the-way-you-work-with-ai-agents-in-amazon-quick-suite/">announced Amazon Quick Suite</a>, a new agentic teammate that quickly answers your questions at work and turns insights into actions for you. Amazon Quick Suite has become one of my favorite productivity tools, helping me with my research on various topics in addition to providing me with multiple perspectives on a topic.</li>
<li><a href="https://aws.amazon.com/blogs/machine-learning/deploy-ai-agents-on-amazon-bedrock-agentcore-using-github-actions/">Deploy AI agents on Amazon Bedrock AgentCore using GitHub Actions</a> – Last year we announced Amazon Bedrock AgentCore, a flexible service that helps you seamlessly create and manage AI agents across different frameworks and models, whether hosted on Amazon Bedrock or other environments. Learn how to use a GitHub Actions workflow to automate the deployment of AI agents on AgentCore Runtime. This approach delivers a scalable solution with enterprise-level security controls, providing complete continuous integration and delivery (CI/CD) automation.</li>
</ul><p><img class="aligncenter size-full wp-image-102839" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/19/ml-19418-image-1-1.png" alt="" width="730" height="403" /></p><p><strong>Upcoming AWS events</strong><br />Join us January 28 or 29 (depending on your time zone) for <a href="https://aws.amazon.com/best-of-reinvent/">Best of AWS re:Invent</a>, a free virtual event where we bring you the most impactful announcements and top sessions from AWS re:Invent. Jeff Barr, AWS VP and Chief Evangelist, will share his highlights during the opening session.</p><p>There is still time until January 21 to compete for $250,000 in prizes and AWS credits in the <a href="https://builder.aws.com/connect/events/10000aideas?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">Global 10,000 AIdeas Competition</a> (yes, the second letter is an I as in Idea, not an L as in like). No code required yet: simply submit your idea, and if you’re selected as a semifinalist, you’ll build your app using <a href="https://kiro.dev/?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">Kiro</a> within <a href="https://aws.amazon.com/free?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">AWS Free Tier</a> limits. Beyond the cash prizes and potential featured placement at <a href="https://reinvent.awsevents.com/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">AWS re:Invent 2026</a>, you’ll gain hands-on experience with next-generation AI tools and connect with innovators globally.</p><p>Earlier this month, <a href="https://builder.aws.com/community/community-builders/?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">the 2026 application for the Community Builders program launched</a>. 
The application is open until midnight PST on January 21st, so here’s your last chance to make sure you don’t miss out.</p><p>If you’re interested in these opportunities, join the <a href="https://builder.aws.com/?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">AWS Builder Center</a> to learn with builders in the AWS community.</p><p>With that, I close one of my most meaningful chapters here at AWS. It’s been an absolute pleasure to write for you, and I thank you for taking the time to read the work that my team and I pour our hearts into. I’ve grown from the close collaborations with the launch teams and the feedback from all of you. The Sub-Sahara Africa (SSA) community has grown significantly, and I want to dedicate more time to this community. I’m still at AWS, and I look forward to meeting you at an event near you!</p><p>Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=e82bba63-d46d-4435-acd0-7531b14ad817&amp;sc_channel=el">Weekly Roundup</a>!</p><p>– <a href="https://linkedin.com/in/veliswa-boya">Veliswa Boya</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-kiro-cli-latest-features-aws-european-sovereign-cloud-ec2-x8i-instances-and-more-january-19-2026/"/>
    <updated>2026-01-20T01:24:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-ec2-x8i-instances-powered-by-custom-intel-xeon-6-processors-are-generally-available-for-memory-intensive-workloads/</id>
    <title><![CDATA[Amazon EC2 X8i instances powered by custom Intel Xeon 6 processors are generally available for memory-intensive workloads]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Following a <a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-ec2-x8i-instances-preview/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">preview launch</a> at AWS re:Invent 2025, today we’re announcing the general availability of new memory-optimized <a href="https://aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2)</a> X8i instances. These instances are powered by custom Intel Xeon 6 processors with a sustained all-core turbo frequency of 3.9 GHz, available only on AWS. These SAP-certified instances deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud.</p><p>X8i instances are ideal for memory-intensive workloads including in-memory databases such as SAP HANA, traditional large-scale databases, data analytics, and electronic design automation (EDA), which require high compute performance and a large memory footprint.</p><p>These instances provide 1.5 times the memory capacity (up to 6 TB) and 3.4 times the memory bandwidth of previous generation <a href="https://aws.amazon.com/ko/ec2/instance-types/x2i/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">X2i instances</a>. These instances offer up to 43% higher performance compared to X2i instances, with higher gains on some real-world workloads. They deliver up to 50% higher SAP Application Performance Standard (SAPS) performance, up to 47% faster PostgreSQL performance, up to 88% faster Memcached performance, and up to 46% faster AI inference performance.</p><p>During the preview, customers like <a href="https://www.sap.com/products/erp/rise.html">RISE with SAP</a> utilized up to 6 TB of memory capacity with 50% higher compute performance compared to X2i instances. This enabled faster transaction processing and improved query response times for SAP HANA workloads. 
<a href="https://orion.com/">Orion</a> reduced the number of active cores on X8i instances compared to X2idn instances while maintaining performance thresholds, cutting SQL Server licensing costs by 50%.</p><p><strong class="c6">X8i instances</strong><br />X8i instances are available in 14 sizes including three larger instance sizes (<strong>48xlarge</strong>, <strong>64xlarge</strong>, and <strong>96xlarge</strong>), so you can choose the right size for your application to scale up, and two bare metal sizes (<strong>metal-48xl</strong> and <strong>metal-96xl</strong>) to deploy workloads that benefit from direct access to physical resources. X8i instances feature up to 100 Gbps of network bandwidth with support for the <a href="https://aws.amazon.com/hpc/efa/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Elastic Fabric Adapter (EFA)</a> and up to 80 Gbps of throughput to <a href="https://aws.amazon.com/ebs/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Elastic Block Store (Amazon EBS)</a>.</p><p>Here are the specs for X8i instances:</p><table class="c10"><tbody><tr class="c8"><td class="c7"><strong>Instance name</strong></td>
<td class="c7"><strong>vCPUs</strong></td>
<td class="c7"><strong>Memory<br /></strong> <strong>(GiB)</strong></td>
<td class="c7"><strong>Network bandwidth (Gbps)</strong></td>
<td class="c7"><strong>EBS bandwidth (Gbps)</strong></td>
</tr><tr class="c9"><td class="c7"><strong>x8i.large</strong></td>
<td class="c7">2</td>
<td class="c7">32</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.xlarge</strong></td>
<td class="c7">4</td>
<td class="c7">64</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.2xlarge</strong></td>
<td class="c7">8</td>
<td class="c7">128</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.4xlarge</strong></td>
<td class="c7">16</td>
<td class="c7">256</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.8xlarge</strong></td>
<td class="c7">32</td>
<td class="c7">512</td>
<td class="c7">15</td>
<td class="c7">10</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.12xlarge</strong></td>
<td class="c7">48</td>
<td class="c7">768</td>
<td class="c7">22.5</td>
<td class="c7">15</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.16xlarge</strong></td>
<td class="c7">64</td>
<td class="c7">1,024</td>
<td class="c7">30</td>
<td class="c7">20</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.24xlarge</strong></td>
<td class="c7">96</td>
<td class="c7">1,536</td>
<td class="c7">40</td>
<td class="c7">30</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.32xlarge</strong></td>
<td class="c7">128</td>
<td class="c7">2,048</td>
<td class="c7">50</td>
<td class="c7">40</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.48xlarge</strong></td>
<td class="c7">192</td>
<td class="c7">3,072</td>
<td class="c7">75</td>
<td class="c7">60</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.64xlarge</strong></td>
<td class="c7">256</td>
<td class="c7">4,096</td>
<td class="c7">80</td>
<td class="c7">70</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.96xlarge</strong></td>
<td class="c7">384</td>
<td class="c7">6,144</td>
<td class="c7">100</td>
<td class="c7">80</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.metal-48xl</strong></td>
<td class="c7">192</td>
<td class="c7">3,072</td>
<td class="c7">75</td>
<td class="c7">60</td>
</tr><tr class="c9"><td class="c7"><strong>x8i.metal-96xl</strong></td>
<td class="c7">384</td>
<td class="c7">6,144</td>
<td class="c7">100</td>
<td class="c7">80</td>
</tr></tbody></table><p>X8i instances support the <a href="https://docs.aws.amazon.com/ebs/latest/userguide/instance-bandwidth-configuration.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">instance bandwidth configuration (IBC)</a> feature, like other eighth-generation instance types, offering flexibility to allocate resources between network and EBS bandwidth. You can scale network or EBS bandwidth by up to 25%, improving database performance, query processing speeds, and logging efficiency. These instances also use sixth-generation <a href="https://aws.amazon.com/ec2/nitro/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Nitro</a> cards, which offload CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing performance and security for your workloads.</p><p><strong class="c6">Now available</strong><br />Amazon EC2 X8i instances are now available in US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Frankfurt) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Regions</a>. For Regional availability and a future roadmap, search the instance type in the <strong>CloudFormation</strong> resources tab of <a href="https://builder.aws.com/build/capabilities/explore?tab=cfn-resources&amp;trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS Capabilities by Region</a>.</p><p>You can purchase these instances as <a href="https://aws.amazon.com/ec2/pricing/on-demand/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">On-Demand Instances</a>, <a href="https://aws.amazon.com/savingsplans/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Savings Plans</a>, and <a href="https://aws.amazon.com/ec2/spot/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Spot Instances</a>. 
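To make the sizing table above easier to use, here is a small sketch for picking the smallest X8i size that meets a memory requirement; the helper name and the 900 GiB requirement figure are illustrative, while the vCPU and memory numbers are copied from the specs table:

```python
# Subset of the X8i specs table above: size -> (vCPUs, memory in GiB).
X8I_SIZES = {
    "x8i.large": (2, 32),
    "x8i.2xlarge": (8, 128),
    "x8i.8xlarge": (32, 512),
    "x8i.16xlarge": (64, 1024),
    "x8i.48xlarge": (192, 3072),
    "x8i.96xlarge": (384, 6144),
}


def smallest_fit(mem_gib_needed):
    """Return the smallest listed size whose memory meets the requirement,
    or None if no listed size is large enough."""
    fits = [(mem, name) for name, (_, mem) in X8I_SIZES.items()
            if mem >= mem_gib_needed]
    return min(fits)[1] if fits else None


# A 900 GiB in-memory dataset lands on x8i.16xlarge (1,024 GiB).
print(smallest_fit(900))
```

Note that every X8i size keeps the same 16 GiB-per-vCPU ratio, so memory is usually the dimension to size first for in-memory databases such as SAP HANA.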
To learn more, visit the <a href="https://aws.amazon.com/ec2/pricing/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 Pricing page</a>.</p><p>Give X8i instances a try in the <a href="https://console.aws.amazon.com/ec2/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 console</a>. To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/x8i/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon EC2 X8i instances page</a> and send feedback to <a href="https://repost.aws/tags/TAO-wqN9fYRoyrpdULLa5y7g/amazon-ec-2?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">AWS re:Post for EC2</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-ec2-x8i-instances-powered-by-custom-intel-xeon-6-processors-are-generally-available-for-memory-intensive-workloads/"/>
    <updated>2026-01-15T23:52:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/opening-the-aws-european-sovereign-cloud/</id>
    <title><![CDATA[Opening the AWS European Sovereign Cloud]]></title>
<summary><![CDATA[<p class="c6"><a href="#german">Deutsch</a> | English | <a href="#spanish">Español</a> | <a href="#french">Français</a> | <a href="#italian">Italiano</a></p><p>As a European citizen, I understand first-hand the importance of digital sovereignty, especially for our public sector organisations and highly regulated industries. Today, I’m delighted to share that the <a href="https://aws.eu/">AWS European Sovereign Cloud</a> is now generally available to all customers. <a href="https://aws.amazon.com/blogs/aws/in-the-works-aws-european-sovereign-cloud/">We first announced our plans to build this new independent cloud infrastructure in 2023</a>, and today it’s ready to meet the most stringent sovereignty requirements of European customers <a href="https://aws.amazon.com/blogs/security/announcing-initial-services-available-in-the-aws-european-sovereign-cloud-backed-by-the-full-power-of-aws/">with a comprehensive set of AWS services</a>.</p><div id="attachment_102650" class="wp-caption aligncenter c7"><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/13/AdobeStock_426378211.jpeg"><img aria-describedby="caption-attachment-102650" class="size-large wp-image-102650" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/13/AdobeStock_426378211-1024x341.jpeg" alt="Brandenburg Gate" width="1024" height="341" /></a><p id="caption-attachment-102650" class="wp-caption-text">Berlin, Brandenburg Gate at sunset</p></div><p><strong>Meeting European sovereignty requirements<br /></strong> Organisations across Europe face increasingly complex regulatory requirements around data residency, operational control, and governance independence. Too often today, European organisations with the highest sovereignty requirements are stuck in legacy on-premises environments or offerings with reduced services and functionality. 
In response to this critical need, the AWS European Sovereign Cloud is the only fully featured and independently operated sovereign cloud backed by strong technical controls, sovereign assurances, and legal protections. Public sector entities and businesses in highly regulated industries need cloud infrastructure that provides enhanced sovereignty controls that maintain the innovation, security, and reliability they expect from modern cloud services. These organisations require assurance that their data and operations remain under European jurisdiction, with clear governance structures and operational autonomy within the European Union (EU).</p><p><strong>A new independent cloud infrastructure for Europe</strong><br />The AWS European Sovereign Cloud represents a physically and logically separate cloud infrastructure, with all components located entirely within the EU. The first <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Region</a> in the AWS European Sovereign Cloud is located in the state of Brandenburg, Germany, and is generally available today. This Region operates independently from existing AWS Regions. The infrastructure features multiple Availability Zones with redundant power and networking, designed to operate continuously even if connectivity with the rest of the world is interrupted.</p><p>We plan to extend the AWS European Sovereign Cloud footprint from Germany across the EU to support stringent isolation, in-country data residency, and low latency requirements. This will start with new sovereign <a href="https://docs.aws.amazon.com/local-zones/latest/ug/what-is-aws-local-zones.html">Local Zones</a> located in Belgium, the Netherlands, and Portugal. 
In addition, you will be able to extend the AWS European Sovereign Cloud infrastructure with <a href="https://aws.amazon.com/dedicatedlocalzones/">AWS Dedicated Local Zones</a>, <a href="https://aws.amazon.com/about-aws/global-infrastructure/ai-factories/">AWS AI Factories</a>, or <a href="https://aws.amazon.com/outposts/">AWS Outposts</a> in locations you select, including your own on-premises data centres.</p><p>The AWS European Sovereign Cloud and its Local Zones provide enhanced sovereign controls through their unique operational model. The AWS European Sovereign Cloud will be operated exclusively by EU residents located in the EU. This covers activities such as day-to-day operations, technical support, and customer service. We’re gradually transitioning the AWS European Sovereign Cloud <a href="https://www.aboutamazon.eu/news/aws/aws-european-sovereign-cloud-to-be-operated-by-eu-citizens">to be operated exclusively by EU citizens located in the EU</a>. During this transition period, we will continue to work with a blended team of EU residents and EU citizens located in the EU.</p><p>The infrastructure is managed through dedicated European legal entities established under German law. In October 2025, AWS appointed <a href="https://www.aboutamazon.eu/news/aws/stephane-israel-appointed-to-lead-the-aws-european-sovereign-cloud">Stéphane Israël</a>, an EU citizen residing in the EU, as managing director. Stéphane will be responsible for the management and operations of the AWS European Sovereign Cloud, including infrastructure, technology, and services, as well as leading AWS's broader digital sovereignty efforts. In January 2026, AWS also appointed <a href="https://www.linkedin.com/in/stefanhoechbaue">Stefan Hoechbauer</a> (Vice President, Germany and Central Europe, AWS) as a managing director of the AWS European Sovereign Cloud. 
He will work alongside Stéphane Israël to lead the AWS European Sovereign Cloud.</p><p>An advisory board composed exclusively of EU citizens, including two independent third-party representatives, provides additional oversight and expertise on sovereignty matters.</p><p><strong>Enhanced data residency and control</strong><br />The AWS European Sovereign Cloud provides comprehensive data residency assurances so you can meet the most stringent data residency requirements. As with our existing AWS Regions around the world, all your content remains within the Region you select unless you choose otherwise. Beyond content, customer-created metadata including roles, permissions, resource labels, and configurations also stays within the EU. The infrastructure features its own dedicated <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> and billing system, all operating independently within European borders.</p><p>Technical controls built into the infrastructure <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/design-goals.html">prevent access to the AWS European Sovereign Cloud from outside the EU</a>. The infrastructure includes <a href="https://aws.amazon.com/blogs/security/establishing-a-european-trust-service-provider-for-the-aws-european-sovereign-cloud/">a dedicated European trust service provider for certificate authority operations</a> and uses dedicated <a href="https://aws.amazon.com/route53/">Amazon Route 53</a> name servers. These servers will only use <a href="https://aws.eu/faq/#operational-autonomy">European Top-Level Domains (TLDs) for their own names</a>. 
The AWS European Sovereign Cloud has no critical dependencies on non-EU personnel or infrastructure.</p><p><strong>Security and compliance framework<br /></strong> The AWS European Sovereign Cloud maintains the same core security capabilities you expect from AWS, including encryption, key management, access governance, and the <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a> for compute isolation. This means your EC2 instances benefit from cryptographically verified platform integrity and hardware-enforced boundaries that prevent unauthorized access to your data without compromising on performance, giving you both the sovereignty controls and the computational power your workloads require. The infrastructure undergoes <a href="https://aws.amazon.com/blogs/compute/aws-nitro-system-gets-independent-affirmation-of-its-confidential-compute-capabilities/">independent third-party audits</a>, with compliance programs including ISO/IEC 27001:2013, SOC 1/2/3 reports, and <a href="https://www.bsi.bund.de/EN/Home/home_node.html">Federal Office for Information Security (BSI)</a> C5 attestation.</p><p>The <a href="https://aws.amazon.com/blogs/security/exploring-the-new-aws-european-sovereign-cloud-sovereign-reference-framework/">AWS European Sovereign Cloud: Sovereign Reference Framework</a> defines the specific sovereignty controls across governance independence, operational control, data residency, and technical isolation. This framework is available in <a href="https://aws.amazon.com/artifact/">AWS Artifact</a> and provides end-to-end visibility through SOC 2 attestation.</p><p><strong>Comprehensive service availability<br /></strong> You can access a broad range of AWS services in the AWS European Sovereign Cloud from launch, including <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a> and <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> for artificial intelligence and machine learning (AI/ML) workloads. 
For compute, you can use <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> and <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>. Container orchestration is available through <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service (Amazon EKS)</a> and <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a>. Database services include <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a>, <a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a>, and <a href="https://aws.amazon.com/rds/">Amazon Relational Database Service (Amazon RDS)</a>. Storage options include <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> and <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a>, with networking through <a href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud (Amazon VPC)</a> and security services including <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> and <a href="https://aws.amazon.com/private-ca/">AWS Private Certificate Authority</a>. For an up-to-date list of services, refer to the <a href="https://builder.aws.com/build/capabilities/explore?f=eJyrVipOzUlNLklNCUpNz8zPK1ayUoqOUUotLU7WTUnVTU0sLtE1jFGKVdKBK3QsS8zMSUzKzMksqQSqdsyrVEARqgUA4l8dog&amp;tab=service-feature">AWS Capabilities matrix</a> recently published on the AWS Builder Center.</p><p>The AWS European Sovereign Cloud is <a href="https://aws.amazon.com/blogs/apn/range-of-aws-partner-solutions-set-to-launch-on-the-aws-european-sovereign-cloud/">supported by AWS Partners</a> who are committed to helping you meet your sovereignty requirements. 
Partners including Adobe, Cisco, Cloudera, Dedalus, Esri, Genesys, GitLab, Mendix, Pega, SAP, Snowflake, Trend Micro, and Wiz are making their solutions available in the AWS European Sovereign Cloud, providing you with the tools and services you need across security, data analytics, application development, and industry-specific workloads. This broad partner support helps you build sovereign solutions that combine AWS services with trusted partner technologies.</p><p><strong>Significant investment in European infrastructure<br /></strong> The AWS European Sovereign Cloud is backed by a €7.8 billion investment in infrastructure, job creation, and skills development. This investment is expected to contribute €17.2 billion to the European economy through 2040 and support roughly 2,800 full-time equivalent jobs annually in local businesses.</p><p><strong>Some technical details<br /></strong> The AWS European Sovereign Cloud is available to all customers, regardless of where they are located. You can access the infrastructure using the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">partition</a> name <code>aws-eusc</code> and the Region name <code>eusc-de-east-1</code>. A partition is a group of AWS Regions. Each AWS account is scoped to one partition.</p><p>The infrastructure supports all standard AWS access methods including the <a href="https://console.amazonaws-eusc.eu/">AWS Management Console</a>, <a href="https://aws.amazon.com/tools/">AWS SDKs</a>, and the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, making it straightforward to integrate into your existing workflows and automation. 
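</p><p>A partition also determines how Amazon Resource Names (ARNs) are composed: resources in this infrastructure carry <code>aws-eusc</code> in the partition element of the standard <code>arn:partition:service:region:account-id:resource</code> layout. The following is a minimal sketch of composing such ARNs for policies and automation; the bucket and role names are hypothetical placeholders.</p>

```python
# Minimal sketch: composing ARNs scoped to the aws-eusc partition.
# Assumes the standard ARN layout arn:partition:service:region:account-id:resource;
# the resource names below are hypothetical placeholders.

AWS_EUSC_PARTITION = "aws-eusc"
AWS_EUSC_REGION = "eusc-de-east-1"


def eusc_arn(service: str, resource: str, account: str = "",
             region: str = AWS_EUSC_REGION) -> str:
    """Compose an ARN in the AWS European Sovereign Cloud partition."""
    return f"arn:{AWS_EUSC_PARTITION}:{service}:{region}:{account}:{resource}"


# S3 bucket ARNs carry neither a region nor an account-id element:
print(eusc_arn("s3", "my-sovereign-bucket", region=""))
# arn:aws-eusc:s3:::my-sovereign-bucket

# IAM is global within the partition, so its ARNs omit the region:
print(eusc_arn("iam", "role/DataAdmin", account="123456789012", region=""))
# arn:aws-eusc:iam::123456789012:role/DataAdmin
```

<p>The same Region name, <code>eusc-de-east-1</code>, is what you would pass to an SDK client or a CLI profile when pointing existing tooling at this infrastructure.</p><p>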
After creating a new root account for the AWS European Sovereign Cloud partition, you start by creating new IAM identities and roles specific to this infrastructure, giving you complete control over access management within the European sovereign environment.</p><p><strong>Getting started<br /></strong> The AWS European Sovereign Cloud provides European organisations with enhanced sovereignty controls whilst maintaining access to AWS innovation and capabilities. You can contract for services through Amazon Web Services EMEA SARL, with pricing in EUR and billing in any of <a href="https://aws.amazon.com/legal/aws-emea/">the eight currencies we support today</a>. The infrastructure uses familiar AWS architecture, service portfolio, and APIs, making it straightforward to build and migrate applications.</p><p>The <a href="https://aws.eu/de/esca">AWS European Sovereign Cloud addendum</a> contains the additional contractual commitments for the AWS European Sovereign Cloud.</p><p>For me as a European, this launch represents AWS's commitment to meeting the specific needs of our continent and providing the cloud capabilities that drive innovation across industries. I invite you to find out more about the AWS European Sovereign Cloud and how it can help your organisation meet its sovereignty requirements. 
Read <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/introduction.html">Overview of the AWS European Sovereign Cloud</a> to learn more about the design goals and approach, <a href="https://eusc-de-east-1.signin.amazonaws-eusc.eu/signup?request_type=register">sign up for a new account</a>, and plan for the deployment of your first workload today.</p><a href="https://linktr.ee/sebsto">— seb</a><hr id="german" /><h4 class="c8">German version</h4><p><strong>Start der AWS European Sovereign Cloud</strong></p><p>Als Bürger Europas weiß ich aus eigener Erfahrung, wie wichtig digitale Souveränität ist, insbesondere für unsere öffentlichen Einrichtungen und stark regulierten Branchen. Ich freue mich, Ihnen heute mitteilen zu können, dass die <a href="https://aws.eu/">AWS European Sovereign Cloud</a> nun für alle Kunden allgemein verfügbar ist. <a href="https://aws.amazon.com/blogs/aws/in-the-works-aws-european-sovereign-cloud/">Wir haben unsere Pläne zum Aufbau dieser neuen unabhängigen Cloud-Infrastruktur erstmals im Jahr 2023 vorgestellt</a>. 
Heute ist diese Infrastruktur bereit, <a href="https://aws.amazon.com/blogs/security/announcing-initial-services-available-in-the-aws-european-sovereign-cloud-backed-by-the-full-power-of-aws/">mit einem umfassenden Angebot an AWS-Services</a> die strengsten Souveränitätsanforderungen europäischer Kunden zu erfüllen.</p><div class="wp-caption aligncenter c7"><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/13/AdobeStock_426378211.jpeg"><img aria-describedby="caption-attachment-102650" class="size-large wp-image-102650" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/13/AdobeStock_426378211-1024x341.jpeg" alt="Brandenburg Gate" width="1024" height="341" /></a><p class="wp-caption-text">Berlin, Brandenburger Tor bei Sonnenuntergang</p></div><p><strong class="c9">Erfüllung europäischer Souveränitätsanforderungen<br /></strong> Organisationen in ganz Europa sehen sich mit zunehmend komplexen regulatorischen Anforderungen in Bezug auf Datenresidenz, operative Kontrolle und Unabhängigkeit der Governance konfrontiert. Europäische Organisationen mit höchsten Souveränitätsanforderungen sind heutzutage allzu oft in veralteten lokalen Umgebungen oder Angeboten mit eingeschränkten Services und Funktionen gefangen. Die AWS European Sovereign Cloud ist die Antwort auf diesen dringenden Bedarf. Sie ist die einzige unabhängig betriebene souveräne Cloud mit vollem Funktionsumfang, die durch strenge technische Kontrollen, Souveränitätszusicherungen und rechtlichen Schutz abgesichert ist. Einrichtungen des öffentlichen Sektors und Unternehmen in stark regulierten Branchen benötigen eine Cloud-Infrastruktur, die erweiterte Souveränitätskontrollen bietet und gleichzeitig die von modernen Cloud-Services erwartete Innovation, Sicherheit und Zuverlässigkeit gewährleistet. 
Diese Organisationen benötigen die Zusicherung, dass ihre Daten und Aktivitäten unter europäischer Zuständigkeit bleiben, mit klaren Governance-Strukturen und operativer Autonomie innerhalb der Europäischen Union (EU).</p><p><strong class="c9">Eine neue unabhängige Cloud-Infrastruktur für Europa</strong><br />Die AWS European Sovereign Cloud ist eine physisch und logisch getrennte Cloud-Infrastruktur, deren Komponenten sich vollständig innerhalb der EU befinden. Die erste <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS-Region</a> in der AWS European Sovereign Cloud befindet sich im deutschen Bundesland Brandenburg und ist ab heute allgemein verfügbar. Diese Region arbeitet unabhängig von bestehenden AWS-Regionen. Die Infrastruktur umfasst mehrere Availability Zones mit redundanter Stromversorgung und Netzwerkverbindung, die auch bei einer Unterbrechung der Verbindung zum Rest der Welt einen kontinuierlichen Betrieb gewährleisten.</p><p>Wir beabsichtigen, die Präsenz der AWS European Sovereign Cloud von Deutschland aus EU-weit auszuweiten, um strenge Anforderungen hinsichtlich Isolierung, Datenresidenz innerhalb einzelner Länder und geringer Latenz zu erfüllen. Dies beginnt mit neuen souveränen <a href="https://docs.aws.amazon.com/local-zones/latest/ug/what-is-aws-local-zones.html">Local Zones</a> in Belgien, den Niederlanden und Portugal. Darüber hinaus können Sie die Infrastruktur der AWS European Sovereign Cloud mit <a href="https://aws.amazon.com/dedicatedlocalzones/">dedizierten AWS Local Zones</a>, <a href="https://aws.amazon.com/about-aws/global-infrastructure/ai-factories/">AWS AI Factories</a> oder <a href="https://aws.amazon.com/outposts/">AWS Outposts</a> an Standorten Ihrer Wahl, einschließlich Ihrer eigenen lokalen Rechenzentren, erweitern.</p><p>Dank ihres einzigartigen Betriebsmodells bieten die AWS European Sovereign Cloud und ihre Local Zones erweiterte Souveränitätskontrollen. 
Der Betrieb der AWS European Sovereign Cloud wird ausschließlich von EU-Bürgern mit Wohnsitz in der EU sichergestellt. Dies umfasst Aktivitäten wie den täglichen Betrieb, den technischen Support und den Kundenservice. Wir stellen die AWS European Sovereign Cloud schrittweise so um, dass <a href="https://www.aboutamazon.eu/news/aws/aws-european-sovereign-cloud-to-be-operated-by-eu-citizens">als Betriebspersonal ausschließlich EU-Bürger mit Wohnsitz in der EU zum Einsatz kommen</a>. Während dieser Übergangsphase werden wir weiterhin mit einem gemischten Team aus in der EU ansässigen Personen und in der EU lebenden EU-Bürgern arbeiten.</p><p>Die Infrastruktur wird durch spezielle europäische juristische Personen nach deutschem Recht verwaltet. Im Oktober 2025 berief AWS <a href="https://www.aboutamazon.eu/news/aws/stephane-israel-appointed-to-lead-the-aws-european-sovereign-cloud">Stéphane Israël</a>, einen in der EU ansässigen EU-Bürger, zum Geschäftsführer. Stéphane wird für das Management und den Betrieb der AWS European Sovereign Cloud verantwortlich zeichnen. Dies umfasst die Bereiche Infrastruktur, Technologie und Services sowie die Federführung bei den breit angelegten Initiativen von AWS auf dem Gebiet der digitalen Souveränität. Im Januar 2026 ernannte AWS zudem <a href="https://www.linkedin.com/in/stefanhoechbaue">Stefan Hoechbauer</a> (Vice President, Germany and Central Europe, AWS) zum Geschäftsführer der AWS European Sovereign Cloud. 
Er wird gemeinsam mit Stéphane Israël die Leitung der AWS European Sovereign Cloud innehaben.</p><p>Ein Beirat, dem ausschließlich EU-Bürger, einschließlich zwei unabhängigen externen Vertretern, angehören, fungiert als zusätzliche Kontrollinstanz und bringt Fachwissen in Fragen der Souveränität ein.</p><p><strong class="c9">Verbesserte Datenresidenz und -kontrolle</strong><br />Die AWS European Sovereign Cloud bietet umfassende Garantien hinsichtlich der Datenresidenz, sodass Sie selbst die strengsten Anforderungen in diesem Bereich erfüllen können. Wie auch bei unseren bestehenden AWS-Regionen weltweit verbleiben alle Ihre Inhalte in der von Ihnen ausgewählten Region, sofern Sie keine anderen Einstellungen vornehmen. Neben den Inhalten verbleiben auch die vom Kunden erstellten Metadaten, einschließlich Rollen, Berechtigungen, Ressourcenbezeichnungen und Konfigurationen, innerhalb der EU. Die Infrastruktur verfügt über ein eigenes <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> und ein eigenes Abrechnungssystem – beides wird innerhalb der europäischen Grenzen unabhängig betrieben.</p><p>In die Infrastruktur integrierte technische Kontrollen <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/design-goals.html">verhindern den Zugriff auf die AWS European Sovereign Cloud von außerhalb der EU</a>. Die Infrastruktur umfasst <a href="https://aws.amazon.com/blogs/security/establishing-a-european-trust-service-provider-for-the-aws-european-sovereign-cloud/">einen dedizierten europäischen Trust Service Provider</a> für Zertifizierungsstellen und nutzt dedizierte <a href="https://aws.amazon.com/route53/">Amazon-Route-53</a>-Namenserver. Diese Server verwenden ausschließlich <a href="https://aws.eu/faq/#operational-autonomy">europäische Top-Level-Domains (TLDs) für ihre eigenen Namen</a>. 
Die AWS European Sovereign Cloud unterliegt keinen kritischen Abhängigkeiten hinsichtlich Personal oder Infrastruktur außerhalb der EU.</p><p><strong class="c9">Sicherheits- und Compliance-Framework<br /></strong> Die AWS European Sovereign Cloud bietet dieselben zentralen Sicherheitsfunktionen, die Sie von AWS erwarten. Dazu gehören Verschlüsselung, Schlüsselverwaltung, Zugriffskontrolle und das <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a> für die Isolierung von Rechenressourcen. Dies bedeutet, dass Ihre EC2-Instanzen von einer kryptografisch verifizierten Plattformintegrität und hardwaregestützten Grenzen profitieren, die unbefugten Zugriff auf Ihre Daten verhindern, ohne die Leistung zu beeinträchtigen. So erhalten Sie sowohl die Souveränitätskontrollen als auch die Rechenleistung, die Ihre Workloads erfordern. Die Infrastruktur wird <a href="https://aws.amazon.com/blogs/compute/aws-nitro-system-gets-independent-affirmation-of-its-confidential-compute-capabilities/">unabhängigen Audits durch Dritte</a> unterzogen. Die Compliance-Programme umfassen ISO/IEC 27001:2013, SOC-1/2/3-Berichte und das C5-Zertifikat des <a href="https://www.bsi.bund.de/EN/Home/home_node.html">Bundesamtes für Sicherheit in der Informationstechnik (BSI)</a>.</p><p>Das <a href="https://aws.amazon.com/blogs/security/exploring-the-new-aws-european-sovereign-cloud-sovereign-reference-framework/">AWS European Sovereign Cloud: Sovereign Reference Framework</a> definiert spezifische Souveränitätskontrollen in den Bereichen Governance-Unabhängigkeit, operative Kontrolle, Datenresidenz und technische Isolierung. 
Dieses Framework ist in <a href="https://aws.amazon.com/artifact/">AWS Artifact</a> verfügbar und bietet durch SOC-2-Zertifizierung durchgängige Transparenz.</p><p><strong class="c9">Umfassende Serviceverfügbarkeit<br /></strong> Von Beginn an steht Ihnen in der AWS European Sovereign Cloud eine breite Palette von AWS-Services zur Verfügung – darunter <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a> und <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> für Workloads im Bereich Künstliche Intelligenz und Machine Learning (KI/ML). Für die Rechenleistung stehen Ihnen <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> und <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> zur Verfügung. Die Container-Orchestrierung ist über den <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service (Amazon EKS)</a> und den <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a> verfügbar. Zu den Datenbank-Services gehören <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a>, <a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a> und <a href="https://aws.amazon.com/rds/">Amazon Relational Database Service (Amazon RDS)</a>. Die Speicheroptionen umfassen <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> und <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a> mit Vernetzung über <a href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud (Amazon VPC)</a> und Sicherheits-Services wie <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> und <a href="https://aws.amazon.com/private-ca/">AWS Private Certificate Authority</a>. 
Eine aktuelle Liste der Services finden Sie in der <a href="https://builder.aws.com/build/capabilities/explore?f=eJyrVipOzUlNLklNCUpNz8zPK1ayUoqOUUotLU7WTUnVTU0sLtE1jFGKVdKBK3QsS8zMSUzKzMksqQSqdsyrVEARqgUA4l8dog&amp;tab=service-feature">AWS-Funktionsmatrix</a>, die kürzlich im AWS Builder Center veröffentlicht wurde.</p><p>Die AWS European Sovereign Cloud wird von <a href="https://aws.amazon.com/blogs/apn/range-of-aws-partner-solutions-set-to-launch-on-the-aws-european-sovereign-cloud/">AWS-Partnern unterstützt</a>, die es sich zur Aufgabe gemacht haben, Ihnen bei der Erfüllung Ihrer Souveränitätsanforderungen zur Seite zu stehen. Partner wie Adobe, Cisco, Cloudera, Dedalus, Esri, Genesys, GitLab, Mendix, Pega, SAP, Snowflake, Trend Micro und Wiz stellen ihre Lösungen in der AWS European Sovereign Cloud zur Verfügung und bieten Ihnen die Tools und Services, die Sie in den Bereichen Sicherheit, Datenanalyse, Entwicklung von Anwendungen und branchenspezifische Workloads benötigen. Dank dieser umfassenden Unterstützung durch Partner können Sie eigenständige Lösungen, die AWS-Services mit bewährten Partnertechnologien kombinieren, entwickeln.</p><p><strong class="c9">Erhebliche Investitionen in die europäische Infrastruktur<br /></strong> Hinter der AWS European Sovereign Cloud steht eine Investition in Höhe von 7,8 Milliarden Euro in Infrastruktur, die Schaffung von Arbeitsplätzen und die Entwicklung von Kompetenzen. Diese Investition wird bis 2040 voraussichtlich 17,2 Milliarden Euro zur europäischen Wirtschaftsleistung beitragen und jährlich rund 2 800 Vollzeitstellen in lokalen Unternehmen sichern.</p><p><strong class="c9">Einige technische Details<br /></strong> Die AWS European Sovereign Cloud steht allen Kunden zur Verfügung, unabhängig davon, wo sie sich befinden. Sie können mit dem <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">Partitionsnamen</a> <code>aws-eusc</code> und dem Regionsnamen <code>eusc-de-east-1</code> auf die Infrastruktur zugreifen. 
Eine Partition ist eine Gruppe von AWS-Regionen. Jedes AWS-Konto ist auf eine Partition beschränkt.</p><p>Die Infrastruktur unterstützt alle gängigen AWS-Zugriffsmethoden, einschließlich der <a href="https://console.amazonaws-eusc.eu/">AWS Management Console</a>, <a href="https://aws.amazon.com/tools/">AWS SDKs</a> und der <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, sodass sie sich problemlos in Ihre bestehenden Workflows und Automatisierungsprozesse integrieren lässt. Nachdem Sie ein neues Root-Konto für die Partition der AWS European Sovereign Cloud erstellt haben, beginnen Sie mit der Erstellung neuer IAM-Identitäten und -Rollen, die speziell für diese Infrastruktur vorgesehen sind. Dadurch erhalten Sie die vollständige Kontrolle über die Zugriffsverwaltung innerhalb der europäischen souveränen Umgebung.</p><p><strong class="c9">Erste Schritte<br /></strong> Die AWS European Sovereign Cloud bietet europäischen Organisationen erweiterte Souveränitätskontrollen und gewährleistet gleichzeitig den Zugriff auf die Innovationen und Funktionen von AWS. Sie können Services über Amazon Web Services EMEA SARL in Auftrag geben. Die Preise werden in Euro angegeben und die Abrechnung erfolgt in <a href="https://aws.amazon.com/legal/aws-emea/">einer der acht Währungen, die wir derzeit unterstützen</a>. Die Infrastruktur basiert auf der bekannten AWS-Architektur, dem AWS-Serviceportfolio und den AWS-APIs, wodurch die Entwicklung und Migration von Anwendungen vereinfacht wird.</p><p>Der <a href="https://aws.eu/de/esca">AWS European Sovereign Cloud Addendum</a> enthält die zusätzlichen vertraglichen Verpflichtungen für die AWS European Sovereign Cloud.</p><p>Für mich als Europäer symbolisiert dieser Launch das Engagement von AWS, den spezifischen Anforderungen unseres Kontinents gerecht zu werden und Cloud-Funktionen bereitzustellen, die Innovationen in allen Branchen vorantreiben. 
Ich lade Sie ein, mehr über die AWS European Sovereign Cloud zu erfahren und zu entdecken, wie sie Ihrer Organisation dabei helfen kann, ihre Souveränitätsanforderungen zu erfüllen. Lesen Sie die <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/introduction.html">Übersicht über die AWS European Sovereign Cloud</a> und erfahren Sie mehr über die Designziele und den Ansatz. <a href="https://eusc-de-east-1.signin.amazonaws-eusc.eu/signup?request_type=register">Registrieren Sie sich für ein neues Konto</a> und planen Sie noch heute die Bereitstellung Ihrer ersten Workload.</p><p><a href="https://linktr.ee/sebsto">— seb</a></p><hr id="french" /><h4 class="c8">French version</h4><p><strong>Ouverture de l’AWS European Sovereign Cloud</strong></p><p>En tant que citoyen européen, je mesure personnellement l’importance de la souveraineté numérique, en particulier pour nos organisations du secteur public et les industries fortement réglementées. Aujourd’hui, j’ai le plaisir d’annoncer que l’<a href="https://aws.eu/">AWS European Sovereign Cloud</a> est désormais disponible pour l’ensemble de nos clients. 
<a href="https://aws.amazon.com/blogs/aws/in-the-works-aws-european-sovereign-cloud/">Nous avions annoncé pour la première fois notre projet de construction de cette nouvelle infrastructure cloud indépendante en 2023</a>, et elle est aujourd’hui prête à répondre aux exigences de souveraineté les plus strictes des clients européens, <a href="https://aws.amazon.com/blogs/security/announcing-initial-services-available-in-the-aws-european-sovereign-cloud-backed-by-the-full-power-of-aws/">avec un large ensemble de services AWS</a>.</p><div class="wp-caption aligncenter c7"><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/13/AdobeStock_426378211.jpeg"><img aria-describedby="caption-attachment-102650" class="size-large wp-image-102650" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/13/AdobeStock_426378211-1024x341.jpeg" alt="Brandenburg Gate" width="1024" height="341" /></a><p class="wp-caption-text">Berlin, porte de Brandebourg au coucher du soleil</p></div><p><strong>Répondre aux exigences européennes en matière de souveraineté<br /></strong> Partout en Europe, les organisations sont confrontées à des exigences réglementaires de plus en plus complexes en matière de résidence des données, de contrôle opérationnel et d’indépendance de la gouvernance. Trop souvent aujourd’hui, les organisations européennes ayant les besoins de souveraineté les plus élevés se retrouvent contraintes de rester sur des environnements sur site, ou de recourir à des offres cloud aux services et fonctionnalités limités. Pour répondre à cet enjeu critique, l’AWS European Sovereign Cloud est le seul cloud souverain entièrement fonctionnel, exploité de manière indépendante, et reposant sur des contrôles techniques robustes, des garanties de souveraineté et des protections juridiques solides. 
Les acteurs du secteur public et les entreprises des secteurs fortement réglementés ont besoin d’une infrastructure cloud offrant des contrôles de souveraineté renforcés, sans renoncer à l’innovation, à la sécurité et à la fiabilité attendues des services cloud modernes. Ces organisations doivent avoir l’assurance que leurs données et leurs opérations restent sous juridiction européenne, avec des structures de gouvernance claires et une autonomie opérationnelle au sein de l’Union européenne (UE).</p><p><strong>Une nouvelle infrastructure cloud indépendante pour l’Europe</strong><br />L’AWS European Sovereign Cloud repose sur une infrastructure cloud physiquement et logiquement distincte, dont l’ensemble des composants est situé exclusivement au sein de l’UE. La première <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">région AWS</a> de l’AWS European Sovereign Cloud est implantée dans le Land de Brandebourg, en Allemagne, et est disponible dès aujourd’hui. Cette région fonctionne de façon indépendante par rapport aux autres régions AWS existantes. L’infrastructure comprend plusieurs zones de disponibilité, avec des systèmes redondants d’alimentation et de réseau, conçus pour fonctionner en continu même en cas d’interruption de la connectivité avec le reste du monde.</p><p>Nous prévoyons d’étendre l’empreinte de l’AWS European Sovereign Cloud depuis l’Allemagne à l’ensemble de l’UE, afin de répondre aux exigences strictes d’isolation, de résidence des données dans certains pays et de faible latence. Cette extension débutera avec de nouvelles <a href="https://docs.aws.amazon.com/local-zones/latest/ug/what-is-aws-local-zones.html">Local Zones</a> souveraines situées en Belgique, aux Pays-Bas et au Portugal. 
En complément, vous pourrez étendre l’infrastructure de l’AWS European Sovereign Cloud à l’aide des <a href="https://aws.amazon.com/dedicatedlocalzones/">AWS Dedicated Local Zones</a>, des <a href="https://aws.amazon.com/about-aws/global-infrastructure/ai-factories/">AWS AI Factories</a> ou d’<a href="https://aws.amazon.com/outposts/">AWS Outposts</a>, dans les sites de votre choix, y compris au sein de vos propres centres de données sur site.</p><p>L’AWS European Sovereign Cloud et ses Local Zones offrent des contrôles de souveraineté renforcés grâce à un modèle opérationnel unique. L’AWS European Sovereign Cloud est exploité exclusivement par des résidents de l’UE basés dans l’UE. Cela couvre notamment les opérations quotidiennes, le support technique et le service client. Nous sommes en train d’opérer une transition progressive afin que l’AWS European Sovereign Cloud soit <a href="https://www.aboutamazon.eu/news/aws/aws-european-sovereign-cloud-to-be-operated-by-eu-citizens">exploité exclusivement par des citoyens de l’UE résidant dans l’UE</a>. Durant cette période de transition, nous continuons de travailler avec une équipe mixte composée de résidents de l’UE et de citoyens de l’UE basés dans l’UE.</p><p>L’infrastructure est gérée par des entités juridiques européennes dédiées, établies conformément au droit allemand. En octobre 2025, AWS a nommé <a href="https://www.aboutamazon.eu/news/aws/stephane-israel-appointed-to-lead-the-aws-european-sovereign-cloud">Stéphane Israël</a>, citoyen de l’UE résidant dans l’UE, au poste de directeur général. Stéphane est responsable de la gestion et de l’exploitation de l’AWS European Sovereign Cloud, couvrant l’infrastructure, la technologie et les services, ainsi que de la direction des initiatives plus larges d’AWS en matière de souveraineté numérique. 
En janvier 2026, AWS a également nommé <a href="https://www.linkedin.com/in/stefanhoechbaue">Stefan Hoechbauer</a> (Vice-président, Allemagne et Europe centrale, AWS) comme directeur général de l’AWS European Sovereign Cloud. Il travaillera aux côtés de Stéphane Israël pour piloter l’AWS European Sovereign Cloud.</p><p>Un conseil consultatif, composé exclusivement de citoyens de l’UE et incluant deux représentants indépendants, apporte un niveau supplémentaire de supervision et d’expertise sur les questions de souveraineté.</p><p><strong>Résidence des données et contrôle renforcés</strong><br />L’AWS European Sovereign Cloud fournit des garanties complètes en matière de résidence des données afin de répondre aux exigences les plus strictes. Comme dans les régions AWS existantes à travers le monde, l’ensemble de vos contenus reste dans la région que vous sélectionnez, sauf indication contraire de votre part. Au-delà des contenus, les métadonnées créées par les clients — telles que les rôles, les autorisations, les étiquettes de ressources et les configurations — restent également au sein de l’UE. L’infrastructure dispose de son propre système dédié de <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> et de facturation, opérant de manière totalement indépendante à l’intérieur des frontières européennes.</p><p>Des contrôles techniques intégrés à l’infrastructure <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/design-goals.html">empêchent tout accès à l’AWS European Sovereign Cloud depuis l’extérieur de l’UE</a>. L’infrastructure comprend <a href="https://aws.amazon.com/blogs/security/establishing-a-european-trust-service-provider-for-the-aws-european-sovereign-cloud/">un prestataire européen de services de confiance dédié pour les opérations d’autorité de certification</a> et utilise des serveurs de noms <a href="https://aws.amazon.com/route53/">Amazon Route 53</a> dédiés. 
Ces serveurs n’utilisent <a href="https://aws.eu/faq/#operational-autonomy">que des domaines de premier niveau (TLD) européens pour leurs propres noms</a>. L’AWS European Sovereign Cloud ne présente aucune dépendance critique vis-à-vis de personnels ou d’infrastructures situés en dehors de l’UE.</p><p><strong>Cadre de sécurité et de conformité<br /></strong> L’AWS European Sovereign Cloud conserve les capacités de sécurité fondamentales attendues d’AWS, notamment le chiffrement, la gestion des clés, la gouvernance des accès et le <a href="https://aws.amazon.com/ec2/nitro/">système AWS Nitro</a> pour l’isolation des charges de calcul. Concrètement, vos instances EC2 bénéficient d’une intégrité de plateforme vérifiée cryptographiquement et de frontières matérielles, empêchant tout accès non autorisé à vos données sans compromis sur les performances. Vous bénéficiez ainsi à la fois des contrôles de souveraineté et de la puissance de calcul nécessaires à vos charges de travail. L’infrastructure fait l’objet <a href="https://aws.amazon.com/blogs/compute/aws-nitro-system-gets-independent-affirmation-of-its-confidential-compute-capabilities/">d’audits indépendants réalisés par des tiers</a>, et s’inscrit dans des programmes de conformité incluant ISO/IEC 27001:2013, les rapports SOC 1/2/3, ainsi que l’attestation C5 de l’<a href="https://www.bsi.bund.de/EN/Home/home_node.html">Office fédéral allemand pour la sécurité de l’information (BSI)</a>.</p><p>Le <a href="https://aws.amazon.com/blogs/security/exploring-the-new-aws-european-sovereign-cloud-sovereign-reference-framework/">AWS European Sovereign Cloud: Sovereign Reference Framework</a> définit précisément les contrôles de souveraineté couvrant l’indépendance de la gouvernance, le contrôle opérationnel, la résidence des données et l’isolation technique. 
Ce cadre est disponible dans <a href="https://aws.amazon.com/artifact/">AWS Artifact</a> et offre une visibilité de bout en bout via une attestation SOC 2.</p><p><strong>Disponibilité étendue des services<br /></strong> Dès son lancement, l’AWS European Sovereign Cloud donne accès à un large éventail de services AWS, notamment <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a> et <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> pour les charges de travail d’intelligence artificielle et de machine learning (IA/ML). Pour le calcul, vous pouvez utiliser <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> et <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>. L’orchestration de conteneurs est disponible via <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service (Amazon EKS)</a> et <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a>. Les services de bases de données incluent <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a>, <a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a> et <a href="https://aws.amazon.com/rds/">Amazon Relational Database Service (Amazon RDS)</a>. Les options de stockage comprennent <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> et <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a>, avec des capacités réseau via <a href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud (Amazon VPC)</a> et des services de sécurité tels que <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> et <a href="https://aws.amazon.com/private-ca/">AWS Private Certificate Authority</a>. 
Pour obtenir la liste la plus à jour des services disponibles, consultez la <a href="https://builder.aws.com/build/capabilities/explore?f=eJyrVipOzUlNLklNCUpNz8zPK1ayUoqOUUotLU7WTUnVTU0sLtE1jFGKVdKBK3QsS8zMSUzKzMksqQSqdsyrVEARqgUA4l8dog&amp;tab=service-feature">matrice des capacités AWS</a> récemment publiée sur l’AWS Builder Center.</p><p>L’AWS European Sovereign Cloud est <a href="https://aws.amazon.com/blogs/apn/range-of-aws-partner-solutions-set-to-launch-on-the-aws-european-sovereign-cloud/">supporté par de nombreux partenaires AWS</a> engagés à vous aider à répondre à vos exigences de souveraineté. Des partenaires tels qu’Adobe, Cisco, Cloudera, Dedalus, Esri, Genesys, GitLab, Mendix, Pega, SAP, Snowflake, Trend Micro et Wiz rendent leurs solutions disponibles sur l’AWS European Sovereign Cloud, vous offrant ainsi les outils et services nécessaires dans les domaines de la sécurité, de l’analyse de données, du développement applicatif et des charges de travail spécifiques à certains secteurs industriels. Cet ensemble de partenaires vous permet de construire des solutions souveraines combinant les services AWS et des technologies de partenaires de confiance.</p><p><strong>Un investissement majeur dans l’infrastructure européenne<br /></strong> L’AWS European Sovereign Cloud s’appuie sur un investissement de 7,8 milliards d’euros dans l’infrastructure, la création d’emplois et le développement des compétences. Cet investissement devrait contribuer à hauteur de 17,2 milliards d’euros à l’économie européenne d’ici 2040 et soutenir environ 2 800 emplois équivalents temps plein par an au sein des entreprises locales.</p><p><strong>Quelques détails techniques<br /></strong> L’AWS European Sovereign Cloud est accessible à tous les clients, quel que soit leur lieu d’implantation. 
Vous pouvez accéder à l’infrastructure en utilisant le nom de <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">partition</a> <code>aws-eusc</code> et le nom de région <code>eusc-de-east-1</code>. Une partition correspond à un ensemble de régions AWS. Chaque compte AWS est rattaché à une seule partition.</p><p>L’infrastructure prend en charge toutes les méthodes d’accès AWS standards, y compris la <a href="https://console.amazonaws-eusc.eu/">console de gestion AWS</a>, les <a href="https://aws.amazon.com/tools/">AWS SDKs</a> et la <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, ce qui facilite son intégration dans vos flux de travail et vos automatisations existants. Après avoir créé un nouveau compte racine pour la partition AWS European Sovereign Cloud, vous commencez par définir de nouvelles identités et rôles IAM spécifiques à cette infrastructure, vous donnant un contrôle total sur la gestion des accès au sein de l’environnement souverain européen.</p><p><strong>Pour commencer<br /></strong> L’AWS European Sovereign Cloud offre aux organisations européennes des contrôles de souveraineté renforcés tout en leur permettant de continuer à bénéficier de l’innovation et des capacités d’AWS. Vous pouvez contractualiser les services via Amazon Web Services EMEA SARL, avec une tarification en euros et une facturation possible dans l’une des <a href="https://aws.amazon.com/legal/aws-emea/">huit devises que nous prenons en charge aujourd’hui</a>. 
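À titre d’illustration, le nom de partition mentionné plus haut apparaît directement dans les ARN de vos ressources : dans la partition <code>aws-eusc</code>, il remplace le préfixe <code>aws</code> habituel. Voici une esquisse minimale en Python (les noms de compartiment sont hypothétiques) :

```python
def bucket_arn(partition: str, bucket: str) -> str:
    """Construit l'ARN d'un compartiment S3.

    Les ARN S3 ne comportent ni région ni identifiant de compte ;
    seul le nom de partition change d'un environnement à l'autre.
    Format général d'un ARN : arn:partition:service:region:compte:ressource
    """
    return f"arn:{partition}:s3:::{bucket}"

# Partition commerciale classique
print(bucket_arn("aws", "mon-compartiment"))       # arn:aws:s3:::mon-compartiment
# Partition AWS European Sovereign Cloud
print(bucket_arn("aws-eusc", "mon-compartiment"))  # arn:aws-eusc:s3:::mon-compartiment
```

Conséquence pratique : les politiques IAM que vous créez dans la partition AWS European Sovereign Cloud doivent référencer des ARN commençant par <code>arn:aws-eusc:</code>, et non <code>arn:aws:</code>.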
L’infrastructure repose sur une architecture, un portefeuille de services et des API AWS familiers, ce qui simplifie le développement et la migration des applications.</p><p>L’<a href="https://aws.eu/de/esca">avenant AWS European Sovereign Cloud</a> précise les engagements contractuels supplémentaires propres à l’AWS European Sovereign Cloud.</p><p>En tant qu’Européen, ce lancement illustre l’engagement d’AWS à répondre aux besoins spécifiques de notre continent et à fournir les capacités cloud qui stimulent l’innovation dans tous les secteurs. Je vous invite à découvrir l’AWS European Sovereign Cloud et à comprendre comment il peut aider votre organisation à satisfaire ses exigences de souveraineté. Consultez <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/introduction.html">Overview of the AWS European Sovereign Cloud</a> (en anglais) pour en savoir plus sur les objectifs de conception et l’approche retenue, <a href="https://eusc-de-east-1.signin.amazonaws-eusc.eu/signup?request_type=register">créez un nouveau compte</a> et planifiez dès aujourd’hui le déploiement de votre première charge de travail.</p><a href="https://linktr.ee/sebsto">— seb</a><hr id="italian" /><h4 class="c8">Italian version</h4><p><strong>Lancio di AWS European Sovereign Cloud</strong></p><p>Come cittadino europeo, conosco benissimo l’importanza della sovranità digitale, in particolare per le nostre organizzazioni del settore pubblico e dei settori altamente regolamentati. Oggi sono lieto di annunciare che <a href="https://aws.eu/">AWS European Sovereign Cloud</a> è ora generalmente disponibile per tutti i clienti. 
<a href="https://aws.amazon.com/blogs/aws/in-the-works-aws-european-sovereign-cloud/">Abbiamo annunciato per la prima volta i nostri piani per la creazione di questa nuova infrastruttura cloud indipendente nel 2023.</a> Finalmente oggi è pronta a soddisfare i più rigorosi requisiti di sovranità dei clienti europei <a href="https://aws.amazon.com/blogs/security/announcing-initial-services-available-in-the-aws-european-sovereign-cloud-backed-by-the-full-power-of-aws/">con un set completo di servizi AWS</a>.</p><div class="wp-caption aligncenter c7"><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/13/AdobeStock_426378211.jpeg"><img aria-describedby="caption-attachment-102650" class="size-large wp-image-102650" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/13/AdobeStock_426378211-1024x341.jpeg" alt="Brandenburg Gate" width="1024" height="341" /></a><p class="wp-caption-text">Berlino, Porta di Brandeburgo al tramonto</p></div><p><strong class="c9">Soddisfare i requisiti di sovranità europea<br /></strong> Le organizzazioni di tutta Europa devono far fronte a requisiti normativi sempre più complessi in materia di residenza dei dati, controllo operativo e indipendenza della governance. Troppo spesso, le organizzazioni europee con i più elevati requisiti di sovranità sono bloccate a causa di offerte o ambienti on-premises legacy con funzionalità e servizi ridotti. In risposta a questa esigenza fondamentale, l’AWS European Sovereign Cloud rappresenta l’unico cloud sovrano con funzionalità complete e a gestione autonoma, supportato da solidi controlli tecnici, garanzie di sovranità e protezioni legali. Gli enti pubblici e le aziende di settori altamente regolamentati necessitano di un’infrastruttura cloud che fornisca controlli di sovranità avanzati che garantiscano l’innovazione, la sicurezza e l’affidabilità che si aspettano dai moderni servizi cloud. 
Queste organizzazioni devono essere certe che i propri dati e le proprie operazioni restino sotto la giurisdizione europea, con chiare strutture di governance e autonomia operativa nell’ambito dell’Unione europea (UE).</p><p><strong class="c9">Una nuova infrastruttura cloud indipendente per l’Europa</strong><br />L’AWS European Sovereign Cloud rappresenta un’infrastruttura cloud separata fisicamente e logicamente, con tutti i componenti situati interamente all’interno dell’UE. La prima <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">Regione AWS</a> nell’AWS European Sovereign Cloud, situata nello stato di Brandeburgo (Germania) e ora disponibile a livello generale, opera indipendentemente dalle Regioni AWS esistenti. L’infrastruttura presenta diverse zone di disponibilità con risorse di alimentazione e rete ridondanti, progettate per funzionare continuamente anche in caso di interruzione della connettività con il resto del mondo.</p><p>Abbiamo intenzione di estendere la presenza dell’AWS European Sovereign Cloud dalla Germania all’intera UE per supportare i rigorosi requisiti di isolamento, residenza dei dati all’interno di un determinato Paese e bassa latenza. Inizieremo con nuove <a href="https://docs.aws.amazon.com/local-zones/latest/ug/what-is-aws-local-zones.html">zone locali</a> sovrane situate in Belgio, nei Paesi Bassi e in Portogallo. Inoltre, sarà possibile estendere l’infrastruttura AWS European Sovereign Cloud con <a href="https://aws.amazon.com/dedicatedlocalzones/">Zone locali AWS dedicate</a>, <a href="https://aws.amazon.com/about-aws/global-infrastructure/ai-factories/">AWS AI Factories</a> o <a href="https://aws.amazon.com/outposts/">AWS Outposts</a> in posizioni selezionate, inclusi i data center on-premises.</p><p>L’AWS European Sovereign Cloud e le relative zone locali forniscono controlli sovrani avanzati tramite un modello operativo esclusivo. 
L’AWS European Sovereign Cloud sarà gestito esclusivamente da residenti UE che si trovano nell’UE. Ciò copre attività come operazioni quotidiane, supporto tecnico e servizio clienti. Stiamo gradualmente trasformando l’AWS European Sovereign Cloud <a href="https://www.aboutamazon.eu/news/aws/aws-european-sovereign-cloud-to-be-operated-by-eu-citizens">in modo che sia gestito esclusivamente da cittadini UE che si trovano nell’UE</a>. Durante questo periodo di transizione, continueremo a lavorare con un team misto di residenti e cittadini comunitari che si trovano nell’UE.</p><p>L’infrastruttura è gestita da entità giuridiche europee dedicate, costituite secondo il diritto tedesco. Nell’ottobre 2025, AWS ha assegnato l’incarico di managing director a <a href="https://www.aboutamazon.eu/news/aws/stephane-israel-appointed-to-lead-the-aws-european-sovereign-cloud">Stéphane Israël</a>, cittadino comunitario residente nell’UE. Sarà responsabile della gestione e delle operazioni dell’AWS European Sovereign Cloud, inclusi infrastruttura, tecnologia e servizi, oltre a guidare le più ampie iniziative di sovranità digitale di AWS. Nel gennaio 2026, AWS ha inoltre nominato managing director dell’AWS European Sovereign Cloud <a href="https://www.linkedin.com/in/stefanhoechbaue">Stefan Höchbauer</a> (vicepresidente, Germania ed Europa centrale, AWS). Collaborerà con Stéphane Israël per guidare l’AWS European Sovereign Cloud.</p><p>Un comitato consultivo composto esclusivamente da cittadini comunitari, inclusi due rappresentanti terzi indipendenti, fornirà ulteriore supervisione e competenza in materia di sovranità.</p><p><strong class="c9">Ottimizzazione del controllo e della residenza dei dati</strong><br />L’AWS European Sovereign Cloud offre garanzie complete sulla residenza dei dati in modo da poter soddisfare i requisiti più rigorosi in materia. 
Come per le nostre Regioni AWS esistenti a livello mondiale, tutti i contenuti restano all’interno della Regione selezionata, a meno che non si scelga diversamente. Oltre ai contenuti, anche i metadati creati dai clienti, tra cui ruoli, autorizzazioni, etichette delle risorse e configurazioni, restano nell’ambito dell’UE. L’infrastruttura è dotata di un proprio sistema <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (AWS IAM)</a> e di fatturazione dedicato, completamente gestito in modo indipendente all’interno dei confini europei.</p><p>I controlli tecnici integrati nell’infrastruttura <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/design-goals.html">impediscono l’accesso all’AWS European Sovereign Cloud dall’esterno dell’UE</a>. L’infrastruttura include <a href="https://aws.amazon.com/blogs/security/establishing-a-european-trust-service-provider-for-the-aws-european-sovereign-cloud/">un provider di servizi fiduciari europeo dedicato per le operazioni delle autorità di certificazione</a> e utilizza nameserver <a href="https://aws.amazon.com/route53/">Amazon Route 53</a> dedicati. Questi server utilizzeranno solo <a href="https://aws.eu/faq/#operational-autonomy">domini di primo livello (TLD) europei per i propri nomi</a>. L’AWS European Sovereign Cloud non ha dipendenze fondamentali da personale o infrastrutture non UE.</p><p><strong class="c9">Framework di sicurezza e conformità<br /></strong> L’AWS European Sovereign Cloud mantiene le stesse funzionalità di sicurezza di base di AWS, tra cui crittografia, gestione delle chiavi, governance degli accessi e <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a> per l’isolamento computazionale. 
Ciò significa che le istanze EC2 beneficiano dell’integrità della piattaforma verificata a livello di crittografia e dei limiti imposti dall’hardware che impediscono l’accesso non autorizzato ai dati senza compromettere le prestazioni, offrendo i controlli di sovranità e la potenza di calcolo richiesti dai carichi di lavoro. L’infrastruttura è sottoposta ad <a href="https://aws.amazon.com/blogs/compute/aws-nitro-system-gets-independent-affirmation-of-its-confidential-compute-capabilities/">audit di terze parti indipendenti</a>, con programmi di conformità che includono ISO/IEC 27001:2013, report SOC 1/2/3 e attestazione C5 dell’<a href="https://www.bsi.bund.de/EN/Home/home_node.html">Ufficio Federale per la Sicurezza Informatica (BSI)</a>.</p><p>L’<a href="https://aws.amazon.com/blogs/security/exploring-the-new-aws-european-sovereign-cloud-sovereign-reference-framework/">AWS European Sovereign Cloud: Sovereign Reference Framework</a> definisce i controlli di sovranità specifici in termini di indipendenza della governance, controllo operativo, residenza dei dati e isolamento tecnico. Questo framework è disponibile in <a href="https://aws.amazon.com/artifact/">AWS Artifact</a> e fornisce visibilità end-to-end tramite l’attestazione SOC 2.</p><p><strong class="c9">Disponibilità completa del servizio<br /></strong> Dal momento del lancio, sarà possibile accedere a un’ampia gamma di servizi AWS nell’AWS European Sovereign Cloud, tra cui <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a> e <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> per carichi di lavoro di intelligenza artificiale e machine learning (IA/ML). Per i calcoli, è possibile utilizzare <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> e <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>. 
L’orchestrazione di container è disponibile tramite <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service (Amazon EKS)</a> e <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a>. I servizi di database includono <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a>, <a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a> e <a href="https://aws.amazon.com/rds/">Amazon Relational Database Service (Amazon RDS)</a>. Le opzioni di storage includono <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> e <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a>, con connessione in rete tramite <a href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud (Amazon VPC)</a> e servizi di sicurezza tra cui <a href="https://aws.amazon.com/kms/">Servizio AWS di gestione delle chiavi (AWS KMS)</a> e <a href="https://aws.amazon.com/private-ca/">Autorità di certificazione privata AWS (AWS Private CA)</a>. Per un elenco aggiornato dei servizi, è possibile consultare l’<a href="https://builder.aws.com/build/capabilities/explore?f=eJyrVipOzUlNLklNCUpNz8zPK1ayUoqOUUotLU7WTUnVTU0sLtE1jFGKVdKBK3QsS8zMSUzKzMksqQSqdsyrVEARqgUA4l8dog&amp;tab=service-feature">elenco delle funzionalità AWS</a> pubblicato di recente su AWS Builder Center.</p><p>L’AWS European Sovereign Cloud è <a href="https://aws.amazon.com/blogs/apn/range-of-aws-partner-solutions-set-to-launch-on-the-aws-european-sovereign-cloud/">supportato dai partner AWS</a>, impegnati ad aiutare i clienti a soddisfare i requisiti di sovranità. Partner come Adobe, Cisco, Cloudera, Dedalus, Esri, Genesys, GitLab, Mendix, Pega, SAP, Snowflake, Trend Micro e Wiz stanno rendendo disponibili le loro soluzioni nell’AWS European Sovereign Cloud, fornendo gli strumenti e i servizi necessari per la sicurezza, l’analisi dei dati, lo sviluppo di applicazioni e i carichi di lavoro specifici del settore. 
Questo ampio supporto dei partner aiuta a creare soluzioni sovrane che combinano i servizi AWS con tecnologie di partner affidabili.</p><p><strong class="c9">Investimenti significativi nelle infrastrutture europee<br /></strong> L’AWS European Sovereign Cloud è sostenuto da un investimento di 7,8 miliardi di euro destinati a infrastrutture, creazione di posti di lavoro e sviluppo delle competenze. Si prevede che questo investimento contribuirà con 17,2 miliardi di euro all’economia europea entro il 2040 e sosterrà circa 2.800 posti di lavoro equivalenti a tempo pieno all’anno nelle aziende locali.</p><p><strong class="c9">Alcuni dettagli tecnici<br /></strong> L’AWS European Sovereign Cloud è disponibile per tutti i clienti, indipendentemente da dove si trovino. È possibile accedere all’infrastruttura utilizzando il nome della <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">partizione</a> <code>aws-eusc</code> e il nome della Regione <code>eusc-de-east-1</code>. Una partizione è un gruppo di Regioni AWS. Ogni account AWS è associato a una partizione.</p><p>L’infrastruttura supporta tutti i metodi di accesso AWS standard, tra cui la <a href="https://console.amazonaws-eusc.eu/">Console di gestione AWS</a>, gli <a href="https://aws.amazon.com/tools/">SDK AWS</a> e l’<a href="https://aws.amazon.com/cli/">interfaccia della linea di comando AWS (AWS CLI)</a>, semplificando l’integrazione nell’automazione e nei flussi di lavoro esistenti. Dopo aver creato un nuovo account root per la partizione di AWS European Sovereign Cloud, si inizia creando nuove identità e ruoli IAM specifici per questa infrastruttura, che consentiranno di avere il controllo completo sulla gestione degli accessi all’interno dell’ambiente sovrano europeo.</p><p><strong class="c9">Nozioni di base<br /></strong> L’AWS European Sovereign Cloud fornisce alle organizzazioni europee controlli di sovranità avanzati pur mantenendo l’accesso all’innovazione e alle funzionalità di AWS. 
È possibile contrattare servizi tramite Amazon Web Services EMEA SARL, con prezzi in EUR e fatturazione in una <a href="https://aws.amazon.com/legal/aws-emea/">delle otto valute supportate oggi</a>. L’infrastruttura utilizza l’architettura AWS, il portafoglio di servizi e le API tradizionali, semplificando la creazione e la migrazione delle applicazioni.</p><p>L’<a href="https://aws.eu/de/esca">addendum di AWS European Sovereign Cloud</a> include gli impegni contrattuali aggiuntivi per l’AWS European Sovereign Cloud.</p><p>Per me come europeo, questo lancio rappresenta l’impegno di AWS per soddisfare le esigenze specifiche del nostro continente e fornire le funzionalità cloud che guidano l’innovazione in tutti i settori. Invito tutti a scoprire di più sull’AWS European Sovereign Cloud e su come può aiutare le organizzazioni a soddisfare i requisiti di sovranità. Nella <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/introduction.html">Panoramica di AWS European Sovereign Cloud</a> sono disponibili maggiori informazioni sugli obiettivi e sull’approccio di progettazione, <a href="https://eusc-de-east-1.signin.amazonaws-eusc.eu/signup?request_type=register">creare un nuovo account</a>  e pianificare l’implementazione del primo carico di lavoro oggi stesso.</p><p><a href="https://linktr.ee/sebsto">— seb</a></p><hr id="spanish" /><h4 class="c8">Spanish version</h4><p><strong>Apertura de AWS European Sovereign Cloud</strong></p><p>Como ciudadano europeo, comprendo de primera mano la importancia de la soberanía digital, especialmente para las organizaciones de nuestro sector público y las industrias altamente reguladas. Hoy me complace anunciar que la <a href="https://aws.eu/">AWS European Sovereign Cloud</a> ya está disponible de forma generalizada para todos los clientes. 
<a href="https://aws.amazon.com/blogs/aws/in-the-works-aws-european-sovereign-cloud/">Anunciamos por primera vez nuestros planes de crear esta nueva infraestructura de nube independiente en 2023</a>, y hoy está lista para cumplir los requisitos de soberanía más estrictos de los clientes europeos <a href="https://aws.amazon.com/blogs/security/announcing-initial-services-available-in-the-aws-european-sovereign-cloud-backed-by-the-full-power-of-aws/">con un exhaustivo conjunto de servicios de AWS</a>.</p><p>Berlín, Puerta de Brandeburgo al atardecer</p><p><strong class="c9">Cumplimiento de los requisitos de soberanía europeos<br /></strong> Las organizaciones de toda Europa se enfrentan a unos requisitos normativos cada vez más complejos en relación con la residencia de los datos, el control operativo y la independencia de la gobernanza. Hoy en día, con demasiada frecuencia, las organizaciones europeas con los requisitos de soberanía más estrictos se ven atrapadas en entornos locales heredados u ofertas con servicios y funcionalidades reducidos. En respuesta a esta necesidad crítica, la AWS European Sovereign Cloud es la única nube soberana independiente con todas las características que está respaldada por sólidos controles técnicos, garantías de soberanía y protecciones legales. Las entidades del sector público y las empresas de industrias altamente reguladas necesitan una infraestructura en la nube que proporcione controles de soberanía mejorados que mantengan la innovación, la seguridad y la fiabilidad que se esperan de los servicios modernos en la nube. 
Estas organizaciones necesitan la garantía de que sus datos y operaciones permanecen bajo la jurisdicción europea, con estructuras de gobernanza claras y autonomía operativa dentro de la Unión Europea (UE).</p><p><strong class="c9">Una nueva infraestructura de nube independiente para Europa</strong><br />La AWS European Sovereign Cloud es una infraestructura de nube separada de manera física y lógica, en la que todos los componentes están ubicados íntegramente dentro de la UE. La primera <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">región de AWS</a> de la AWS European Sovereign Cloud se encuentra en el estado de Brandeburgo (Alemania) y ya está disponible para el público en general. Esta región opera de forma independiente de las regiones de AWS existentes. La infraestructura cuenta con varias zonas de disponibilidad con fuentes de alimentación y redes redundantes, diseñadas para funcionar de forma continua incluso si se interrumpe la conectividad con el resto del mundo.</p><p>Tenemos previsto ampliar la presencia de la AWS European Sovereign Cloud de Alemania a toda la UE para cumplir los estrictos requisitos de aislamiento, residencia de los datos dentro del país y baja latencia. Esto comenzará con nuevas <a href="https://docs.aws.amazon.com/local-zones/latest/ug/what-is-aws-local-zones.html">zonas locales</a> soberanas ubicadas en Bélgica, los Países Bajos y Portugal. Además, podrá ampliar la infraestructura de la AWS European Sovereign Cloud con <a href="https://aws.amazon.com/dedicatedlocalzones/">zonas locales dedicadas de AWS</a>, <a href="https://aws.amazon.com/about-aws/global-infrastructure/ai-factories/">AWS AI Factories</a> o <a href="https://aws.amazon.com/outposts/">AWS Outposts</a> en las ubicaciones que elija, incluidos sus propios centros de datos locales.</p><p>La AWS European Sovereign Cloud y sus zonas locales proporcionan controles soberanos mejorados a través de su modelo operativo único. 
La AWS European Sovereign Cloud será operada exclusivamente por residentes de la UE ubicados en la UE. Abarca actividades como las operaciones diarias, la asistencia técnica y el servicio de atención al cliente. Estamos realizando una transición gradual de la AWS European Sovereign Cloud para que <a href="https://www.aboutamazon.eu/news/aws/aws-european-sovereign-cloud-to-be-operated-by-eu-citizens">se opere exclusivamente por ciudadanos de la UE ubicados en la UE</a>. Durante este período de transición, seguiremos trabajando con un equipo mixto de residentes de la UE y ciudadanos de la UE ubicados en la UE.</p><p>La infraestructura se administra a través de entidades jurídicas europeas especializadas constituidas en el marco de la legislación alemana. En octubre de 2025, AWS nombró director general a <a href="https://www.aboutamazon.eu/news/aws/stephane-israel-appointed-to-lead-the-aws-european-sovereign-cloud">Stéphane Israël</a>, ciudadano de la UE que reside en la UE. Stéphane será responsable de la administración y las operaciones de la AWS European Sovereign Cloud, lo que incluye la infraestructura, la tecnología y los servicios, además de liderar los esfuerzos más amplios de AWS en materia de soberanía digital. En enero de 2026, AWS también nombró a <a href="https://www.linkedin.com/in/stefanhoechbaue">Stefan Hoechbauer</a> (vicepresidente de AWS para Alemania y Europa Central) director general de la AWS European Sovereign Cloud. Stefan dirigirá la AWS European Sovereign Cloud junto con Stéphane Israël.</p><p>Un consejo consultivo compuesto exclusivamente por ciudadanos de la UE, que incluye a dos representantes independientes externos, proporciona supervisión y experiencia adicional en materia de soberanía.</p><p><strong class="c9">Mejor control y residencia de datos</strong><br />La AWS European Sovereign Cloud ofrece amplias garantías de residencia de datos para que pueda cumplir los requisitos más estrictos en materia de residencia de datos. 
Al igual que ocurre con nuestras regiones de AWS existentes en todo el mundo, todo el contenido permanece dentro de la región que elija, a menos que decida lo contrario. Además del contenido, los metadatos creados por los clientes, incluidos los roles, los permisos, las etiquetas de recursos y las configuraciones, también permanecen dentro de la UE. La infraestructura cuenta con su propio sistema dedicado de <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (AWS IAM)</a> y facturación, que funciona de forma independiente dentro de las fronteras europeas.</p><p>Los controles técnicos integrados en la infraestructura <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/design-goals.html">impiden el acceso a la AWS European Sovereign Cloud desde fuera de la UE</a>. La infraestructura incluye <a href="https://aws.amazon.com/blogs/security/establishing-a-european-trust-service-provider-for-the-aws-european-sovereign-cloud/">un proveedor de servicios de confianza europeo dedicado para las operaciones de las autoridades de certificación</a> y utiliza servidores de nombres de <a href="https://aws.amazon.com/route53/">Amazon Route 53</a> dedicados. Estos servidores solo usarán <a href="https://aws.eu/faq/#operational-autonomy">dominios de nivel superior europeos para sus propios nombres</a>. La AWS European Sovereign Cloud no tiene dependencias críticas de personal o infraestructura fuera de la UE.</p><p><strong class="c9">Marco de seguridad y cumplimiento<br /></strong> La AWS European Sovereign Cloud mantiene las mismas capacidades de seguridad básicas que cabe esperar de AWS, como el cifrado, la administración de claves, la gobernanza del acceso y <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a> para el aislamiento de la computación. 
This means that your EC2 instances benefit from cryptographically verified platform integrity and hardware-enforced boundaries that prevent unauthorized access to your data without compromising performance, giving you both the sovereignty controls and the compute power your workloads require. The infrastructure undergoes <a href="https://aws.amazon.com/blogs/compute/aws-nitro-system-gets-independent-affirmation-of-its-confidential-compute-capabilities/">independent third-party audits</a>, with compliance programs that include ISO/IEC 27001:2013, SOC 1/2/3 reports, and C5 attestation from the <a href="https://www.bsi.bund.de/EN/Home/home_node.html">Federal Office for Information Security</a>.</p><p>The <a href="https://aws.amazon.com/blogs/security/exploring-the-new-aws-european-sovereign-cloud-sovereign-reference-framework/">AWS European Sovereign Cloud sovereign reference framework</a> defines specific sovereignty controls for governance independence, operational control, data residency, and technical isolation. This framework is available in <a href="https://aws.amazon.com/artifact/">AWS Artifact</a> and provides full visibility through SOC 2 attestation.</p><p><strong class="c9">Full service availability<br /></strong> You can access a broad range of AWS services in the AWS European Sovereign Cloud from day one, including <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a> and <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> for artificial intelligence and machine learning workloads. For compute, you can use <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> and <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>. 
Container orchestration is available through <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service (Amazon EKS)</a> and <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a>. Database services include <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a>, <a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a>, and <a href="https://aws.amazon.com/rds/">Amazon Relational Database Service (Amazon RDS)</a>. Storage options include <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> and <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a>, with networking through <a href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud (Amazon VPC)</a> and security services such as <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> and <a href="https://aws.amazon.com/private-ca/">AWS Private Certificate Authority</a>. For an up-to-date list of services, see the <a href="https://builder.aws.com/build/capabilities/explore?f=eJyrVipOzUlNLklNCUpNz8zPK1ayUoqOUUotLU7WTUnVTU0sLtE1jFGKVdKBK3QsS8zMSUzKzMksqQSqdsyrVEARqgUA4l8dog&amp;tab=service-feature">AWS capabilities matrix</a> recently published in the AWS Builder Center.</p><p>The AWS European Sovereign Cloud <a href="https://aws.amazon.com/blogs/apn/range-of-aws-partner-solutions-set-to-launch-on-the-aws-european-sovereign-cloud/">is backed by AWS Partners</a> committed to helping you meet your sovereignty requirements. Partners such as Adobe, Cisco, Cloudera, Dedalus, Esri, Genesys, GitLab, Mendix, Pega, SAP, Snowflake, Trend Micro, and Wiz offer their solutions in the AWS European Sovereign Cloud, giving you the tools and services you need for security, data analytics, application development, and industry-specific workloads. 
This broad partner support helps you build sovereign solutions that combine AWS services with trusted partner technologies.</p><p><strong class="c9">Significant investment in European infrastructure<br /></strong> The AWS European Sovereign Cloud is backed by a €7.8 billion investment in infrastructure, job creation, and skills development. This investment is expected to contribute €17.2 billion to the European economy by 2040 and help create the equivalent of approximately 2,800 full-time jobs per year at local businesses.</p><p><strong class="c9">Some technical details<br /></strong> The AWS European Sovereign Cloud is available to all customers, regardless of where they are located. You can access the infrastructure using the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">partition</a> name aws-eusc and the Region name eusc-de-east-1. A partition is a group of AWS Regions. Each AWS account is scoped to a single partition.</p><p>The infrastructure supports all standard AWS access methods, such as the <a href="https://console.amazonaws-eusc.eu/">AWS Management Console</a>, the <a href="https://aws.amazon.com/tools/">AWS SDKs</a>, and the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, making it easy to integrate with your existing workflows and automation. 
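To make the partition concept concrete, here is a brief illustrative sketch (not part of the announcement) of how a partition surfaces in practice: it is the second component of every Amazon Resource Name (ARN), whose general format is arn:partition:service:region:account-id:resource. The helper function and the account ID below are hypothetical placeholders for illustration only.

```python
# ARNs follow the format arn:partition:service:region:account-id:resource.
# In the AWS European Sovereign Cloud the partition is "aws-eusc" and the
# Region is "eusc-de-east-1" (compare "aws" / "us-east-1" in the standard
# partition). The account ID here is a placeholder.
def build_arn(service: str, region: str, account: str, resource: str,
              partition: str = "aws-eusc") -> str:
    """Assemble an ARN string for a given partition."""
    return f"arn:{partition}:{service}:{region}:{account}:{resource}"

standard = build_arn("lambda", "us-east-1", "123456789012",
                     "function:demo", partition="aws")
sovereign = build_arn("lambda", "eusc-de-east-1", "123456789012",
                      "function:demo")

print(standard)   # arn:aws:lambda:us-east-1:123456789012:function:demo
print(sovereign)  # arn:aws-eusc:lambda:eusc-de-east-1:123456789012:function:demo
```

Because ARNs embed the partition name, IAM policies written against arn:aws:… resources will not match arn:aws-eusc:… resources, so policies need their ARNs adapted before reuse in this partition.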
After you create a new root account for the AWS European Sovereign Cloud partition, you will first need to create new IAM identities and roles specific to this infrastructure, giving you full control over access management in the European sovereign environment.</p><p><strong class="c9">Getting started<br /></strong> The AWS European Sovereign Cloud provides European organizations with enhanced sovereignty controls while maintaining access to AWS innovation and capabilities. You can contract services through Amazon Web Services EMEA SARL, with pricing in EUR and billing in any of <a href="https://aws.amazon.com/legal/aws-emea/">the eight currencies we currently support</a>. The infrastructure uses the familiar AWS architecture, service portfolio, and APIs, making it easy to build and migrate applications.</p><p>The <a href="https://aws.eu/de/esca">AWS European Sovereign Cloud addendum</a> contains the additional contractual commitments for the AWS European Sovereign Cloud.</p><p>For me, as a European, this launch represents AWS’s commitment to meeting the specific needs of our continent and providing the cloud capabilities that drive innovation across industries. I invite you to learn more about the AWS European Sovereign Cloud and how it can help your organization meet its sovereignty requirements. Read the <a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/introduction.html">AWS European Sovereign Cloud overview</a> to learn more about its design goals and approach, <a href="https://eusc-de-east-1.signin.amazonaws-eusc.eu/signup?request_type=register">sign up for a new account</a>, and start planning your first workload deployment today.</p><p><a href="https://linktr.ee/sebsto">— seb</a></p>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/opening-the-aws-european-sovereign-cloud/"/>
    <updated>2026-01-15T08:12:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-lambda-for-net-10-aws-client-vpn-quickstart-best-of-aws-reinvent-and-more-january-12-2026/</id>
    <title><![CDATA[AWS Weekly Roundup: AWS Lambda for .NET 10, AWS Client VPN quickstart, Best of AWS re:Invent, and more (January 12, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>At the beginning of January, I tend to set my top resolutions for the year, a way to focus on what I want to achieve. If AI and cloud computing are on your resolution list, consider creating an <a href="https://aws.amazon.com/free?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">AWS Free Tier</a> account to receive up to $200 in credits and have 6 months of risk-free experimentation with AWS services.</p><p>During this period, you can explore essential services across compute, storage, databases, and AI/ML, plus access to over 30 always-free services with monthly usage limits. After 6 months, you can decide whether to upgrade to a standard AWS account.</p><p>Whether you’re a student exploring career options, a developer expanding your skill set, or a professional building with cloud technologies, this hands-on approach lets you focus on what matters most: developing real expertise in the areas you’re passionate about.</p><p><strong>Last week’s launches</strong><br />Here are the launches that got my attention this week:</p><ul><li><a href="https://aws.amazon.com/lambda?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">AWS Lambda</a> – Now <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/aws-lambda-dot-net-10/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">supports creating serverless applications using .NET 10</a> both as a managed runtime and a container base image. AWS will automatically apply updates to the managed runtime and base image as they become available. More info in <a href="https://aws.amazon.com/blogs/compute/net-10-runtime-now-available-in-aws-lambda/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">this blog post</a>.</li>
<li><a href="https://aws.amazon.com/ecs/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Amazon ECS</a> – Adds <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-ecs-tmpfs-mounts-aws-fargate-managed-instances/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">support for tmpfs mounts to Linux tasks running on AWS Fargate and Amazon ECS Managed Instances</a> in addition to the EC2 launch type. With tmpfs, you can create memory-backed file systems for your containerized workloads without writing data to task storage.</li>
<li><a href="https://aws.amazon.com/config/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">AWS Config</a> – Can now discover, assess, audit, and remediate <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/aws-config-new-resource-types/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">additional AWS resource types</a> across key services including Amazon EC2, Amazon SageMaker, and Amazon S3 Tables.</li>
<li><a href="https://aws.amazon.com/amazon-mq/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Amazon MQ</a> – <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-mq-http-based-rabbitmq-brokers/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Introduces HTTP-based authentication for RabbitMQ brokers</a>. You can configure this plugin on brokers by making changes to the associated configuration file. It now also <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-mq-certificate-based-authentication-mutual-tls-rabbitmq/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">supports certificate-based authentication with mutual TLS</a> for RabbitMQ brokers.</li>
<li><a href="https://aws.amazon.com/managed-workflows-for-apache-airflow/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Amazon MWAA</a> – You can <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/apache-airflow-2-11-support-amazon-managed-workflows/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">now create Apache Airflow version 2.11 environments</a> with Amazon Managed Workflows for Apache Airflow. This version of Apache Airflow introduces changes that help you prepare for upgrading to Apache Airflow 3.</li>
<li><a href="https://aws.amazon.com/ec2/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Amazon EC2</a> – <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-ec2-m8i-instances-additional-regions/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">M8i</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-ec2-c8i-c8i-flex-instances-additional-aws-regions/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">C8i and C8i-flex</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-ec2-r8i-r8i-flex-instances-additional-aws-regions/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">R8i and R8i-flex</a>, and <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-ec2-i7ie-instances-additional-aws-regions/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">I7ie instances are now available</a> in additional AWS Regions.</li>
<li><a href="https://aws.amazon.com/vpn/client-vpn/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">AWS Client VPN</a> – A <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/aws-client-vpn-onboarding-quickstart-setup/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">new quickstart reduces the number of steps required</a> to set up a Client VPN endpoint.</li>
<li><a href="https://aws.amazon.com/quicksuite/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Amazon Quick Suite</a> – Added <a href="https://aws.amazon.com/about-aws/whats-new/2026/01/3p-agent-in-quick/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">integrations for AI agents and new entries in its built-in actions library</a>. For example, these now include GitHub, Notion, Canva, Box, Linear, Hugging Face, Monday.com, HubSpot, Intercom, and more.</li>
</ul><p><strong>Additional updates<br /></strong> Here are some additional projects, blog posts, and news items that I found interesting:</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/12/crossmodal-search-amazon-nova-multimodal-embeddings-architecture.png"><img class="aligncenter size-full wp-image-102789" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/12/crossmodal-search-amazon-nova-multimodal-embeddings-architecture.png" alt="Crossmodal search with Amazon Nova Multimodal Embeddings Architecture" width="1430" height="653" /></a></p><p><strong>Upcoming AWS events<br /></strong> Join us January 28 or 29 (depending on your time zone) for <a href="https://aws.amazon.com/best-of-reinvent/">Best of AWS re:Invent</a>, a free virtual event where we bring you the most impactful announcements and top sessions from AWS re:Invent. Jeff Barr, AWS VP and Chief Evangelist, will share his highlights during the opening session.</p><p>There is still time until January 21 to compete for $250,000 in prizes and AWS credits in the <a href="https://builder.aws.com/connect/events/10000aideas?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Global 10,000 AIdeas Competition</a> (yes, the second letter is an I as in Idea, not an L as in like). No code required yet: simply submit your idea, and if you’re selected as a semifinalist, you’ll build your app using <a href="https://kiro.dev/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Kiro</a> within <a href="https://aws.amazon.com/free?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">AWS Free Tier</a> limits. 
Beyond the cash prizes and potential featured placement at <a href="https://reinvent.awsevents.com/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">AWS re:Invent 2026</a>, you’ll gain hands-on experience with next-generation AI tools and connect with innovators globally.</p><p>If you’re interested in these opportunities, join the <a href="https://builder.aws.com/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">AWS Builder Center</a> to learn with builders in the AWS community.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=39d9c26c-b157-46ae-bde6-9cf598f5c9e0&amp;sc_channel=el">Weekly Roundup</a>!</p><p>– <a href="https://x.com/danilop">Danilo</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-lambda-for-net-10-aws-client-vpn-quickstart-best-of-aws-reinvent-and-more-january-12-2026/"/>
    <updated>2026-01-12T18:39:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/happy-new-year-aws-weekly-roundup-10000-aideas-competition-amazon-ec2-amazon-ecs-managed-instances-and-more-january-5-2026/</id>
    <title><![CDATA[Happy New Year! AWS Weekly Roundup: 10,000 AIdeas Competition, Amazon EC2, Amazon ECS Managed Instances and more (January 5, 2026)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Happy New Year! I hope the holidays gave you time to recharge and spend time with your loved ones.</p><p>Like every year, I took a few weeks off after <a href="https://aws.amazon.com/reinvent/">AWS re:Invent</a> to rest and plan ahead. I used some of that downtime to plan the next cohort for <a href="https://besaprogram.com/">Become a Solutions Architect (BeSA)</a>. BeSA is a free mentoring program that I, along with a few other <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> employees, volunteer to host as a way to help people excel in their cloud and AI careers. We’re kicking off a 6-week cohort on “Agentic AI on AWS” starting February 21, 2026. Visit the <a href="https://besaprogram.com/">BeSA website</a> to learn more.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/02/10k-AIdeas-1080x1080-1.jpg"><img class="alignright wp-image-102711 size-medium" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2026/01/02/10k-AIdeas-1080x1080-1-300x300.jpg" alt="" width="300" height="300" /></a>There is still time to submit your idea for the <a href="https://builder.aws.com/connect/events/10000aideas">Global 10,000 AIdeas Competition</a> and compete for $250,000 in cash prizes, AWS credits, and recognition, including potential featured placement at AWS re:Invent 2026 and across AWS channels.</p><p>You will gain hands-on experience with next-generation AI development tools, connect with innovators globally, and access technical enablement through biweekly workshops, AWS User Groups, and AWS Builder Center resources.</p><p>The deadline is January 21, 2026, and no code is required yet. If you’re selected as a semifinalist, you’ll build your app then. 
Your finished app needs to use Kiro for at least part of development, stay within <a href="https://aws.amazon.com/free/">AWS Free Tier</a> limits, and be completely original and not yet published.</p><p>If you haven’t yet caught up with all the new releases and announcements from AWS re:Invent 2025, check out our <a href="https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2025/">top announcements post</a> or <a href="https://reinvent.awsevents.com/on-demand/">watch the keynotes, innovation talks, and breakout sessions on-demand</a>.</p><p><strong>Launches from the last few weeks</strong><br />I’d like to highlight some launches that got my attention since our last Week in Review on December 15, 2025:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/generally-available-amazon-ec2-m8gn-m8gb-instances/">Amazon EC2 M8gn and M8gb instances</a> – New M8gn and M8gb instances are powered by AWS Graviton4 processors to deliver up to 30% better compute performance than AWS Graviton3 processors. M8gn instances feature the latest 6th generation AWS Nitro Cards and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network-optimized EC2 instances. M8gb instances offer up to 150 Gbps of Amazon EBS bandwidth to provide higher EBS performance compared to same-sized Graviton4-based instances.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/direct-connect-resilience-testing-fault-injection-service/">AWS Direct Connect supports resilience testing with AWS Fault Injection Service</a> – You can now use AWS Fault Injection Service to test how your applications handle Direct Connect Border Gateway Protocol (BGP) failover in a controlled environment. For example, you can validate that traffic routes to redundant virtual interfaces when a primary virtual interface’s BGP session is disrupted and your applications continue to function as expected.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/176-security-hub-controls-control-tower/">New AWS Security Hub controls in AWS Control Tower</a> – AWS Control Tower now supports 176 additional Security Hub controls in the Control Catalog, covering use cases including security, cost, durability, and operations. With this launch, you can search, discover, enable, and manage these controls directly from AWS Control Tower to govern additional use cases across your multi-account environment.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/aws-transform-hybrid-network-migration/">AWS Transform supports network conversion for hybrid data center migrations</a> – You can now use AWS Transform for VMware to automatically convert networks from hybrid data centers. This removes manual network mapping for environments running both VMware and other workloads. The service analyzes VLANs and IP ranges across all exported source networks and maps them to AWS constructs such as virtual private clouds (VPCs), subnets, and security groups.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/nvidia-nemotron-3-nano-amazon-bedrock/">NVIDIA Nemotron 3 Nano available on Amazon Bedrock</a> – Amazon Bedrock now supports NVIDIA Nemotron 3 Nano 30B A3B model, NVIDIA’s latest breakthrough in efficient language modeling that delivers high reasoning performance, built-in tool calling support, and extended context processing with 256K token context window.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-ec2-az-id-api-support/">Amazon EC2 supports Availability Zone ID across its APIs</a> – You can specify the Availability Zone ID (AZ ID) parameter directly in your Amazon EC2 APIs to guarantee consistent placement of resources. AZ IDs are consistent and static identifiers that represent the same physical location across all AWS accounts, helping you optimize resource placement. Prior to this launch, you had to use an AZ name while creating a resource, but these names could map to different physical locations. This mapping made it difficult to ensure resources were always co-located, especially when operating with multiple accounts.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-ecs-managed-instances-ec2-spot-instances/">Amazon ECS Managed Instances supports Amazon EC2 Spot Instances</a> – Amazon ECS Managed Instances now supports Amazon EC2 Spot Instances, extending the range of capabilities available with AWS managed infrastructure. You can use spare EC2 capacity at up to 90% discount compared to On-Demand prices for fault-tolerant workloads in Amazon ECS Managed Instances.</li>
</ul><p>See <a href="https://aws.amazon.com/new/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS What’s New</a> for more launch news that I haven’t covered here. That’s all for this week. Check back next Monday for another Weekly Roundup!</p><p>Here’s to a fantastic start to 2026. Happy building!</p><p>– <a href="https://www.linkedin.com/in/kprasadrao/">Prasad</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/happy-new-year-aws-weekly-roundup-10000-aideas-competition-amazon-ec2-amazon-ecs-managed-instances-and-more-january-5-2026/"/>
    <updated>2026-01-05T18:10:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ecs-amazon-cloudwatch-amazon-cognito-and-more-december-15-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: Amazon ECS, Amazon CloudWatch, Amazon Cognito and more (December 15, 2025)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Can you believe it? We’re nearly at the end of 2025. And what a year it’s been! From re:Invent recap events to AWS Summits, AWS Innovate, AWS re:Inforce, Community Days, and DevDays, and, most recently, the cherry on the cake, re:Invent 2025, we have lived through a year filled with exciting moments and technology advancements that continue to shape our modern world.</p><p>Speaking of re:Invent, if you haven’t caught up yet on all the new releases and announcements (and there were plenty of exciting launches across every area), be sure to check out our curated post highlighting the <a href="https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2025?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">top announcements from AWS re:Invent 2025</a>. We’ve organized all the key releases into easy-to-navigate categories and included links so you can dive deeper into anything that sparks your interest.</p><p>While the year may be wrapping up, our teams are still busy working on things that you have either asked for as customers or that we proactively create to make your lives easier. Last week had quite a few interesting releases as usual, so let’s look at a few that I think could be useful for many of you out there.</p><p><strong>Last week’s launches</strong></p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-workspaces-secure-browser-web-content-filtering/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon WorkSpaces Secure Browser introduces Web Content Filtering</a> – Organizations can now control web access through category-based filtering across 25+ predefined categories, granular URL policies, and integrated compliance logging. 
The feature works alongside existing Chrome policies and integrates with Session Logger for enhanced monitoring and is available at no additional cost in 10 AWS Regions with pay-as-you-go pricing.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-aurora-dsql-cluster-creation-in-seconds/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Aurora DSQL now supports cluster creation in seconds</a> – Developers can now instantly provision Aurora DSQL databases with setup time reduced from minutes to seconds, enabling rapid prototyping through the integrated AWS console query editor or AI-powered development via the Aurora DSQL Model Context Protocol server. Available at no additional cost in all AWS Regions where Aurora DSQL is offered, with AWS Free Tier access available.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-aurora-postgresql-integration-kiro-powers/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Aurora PostgreSQL now supports integration with Kiro powers</a> – Developers can now accelerate Aurora PostgreSQL application development using AI-assisted coding through Kiro powers, a repository of pre-packaged Model Context Protocol servers. The Aurora PostgreSQL integration provides direct database connectivity for queries, schema management, and cluster operations, dynamically loading relevant context as developers work. Available for one-click installation in Kiro IDE across all AWS Regions.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-ecs-custom-container-stop-signals-fargate/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon ECS now supports custom container stop signals on AWS Fargate</a> – Fargate tasks now honor the stop signal configured in container images, enabling graceful shutdowns for containers that rely on signals like SIGQUIT or SIGINT instead of the default SIGTERM. 
The ECS container agent reads the STOPSIGNAL instruction from OCI-compliant images and sends the appropriate signal during task termination. Available at no additional cost across all AWS Regions.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-cloudwatch-sdk-json-cbor-protocols/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon CloudWatch SDK supports optimized JSON, CBOR protocols</a> – CloudWatch SDK now defaults to JSON and CBOR protocols, delivering lower latency, reduced payload sizes, and decreased client-side CPU and memory usage compared to the traditional AWS Query protocol. Available at no additional cost across all AWS Regions and SDK language variants.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-cognito-identity-pools-private-connectivity-aws-privatelink/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Cognito identity pools now support private connectivity with AWS PrivateLink</a> – Organizations can now securely exchange federated identities for temporary AWS credentials through private VPC connections, eliminating the need to route authentication traffic over the public internet. Available in all AWS Regions where Cognito identity pools are supported, except AWS China (Beijing) and AWS GovCloud (US) Regions.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/12/application-migration-service-ipv6?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Application Migration Service supports IPv6</a> – Organizations can now migrate applications using IPv6 addressing through dual-stack service endpoints that support both IPv4 and IPv6 communications. During replication, testing, and cutover phases, you can use IPv4, IPv6, or dual-stack configurations to launch servers in your target environment. 
Available at no additional cost in all AWS Regions that support MGN and EC2 dual-stack endpoints.</p><p>And that’s it for the AWS News Blog Weekly Roundup…not just for this week, but for 2025! We’ll be taking a break and returning in January to continue bringing you the latest AWS releases and updates.</p><p>As we close out 2025, it’s remarkable to look back at just how much has changed since the beginning of the year. From groundbreaking AI capabilities to transformative infrastructure innovations, AWS has delivered an incredible year of releases that have reshaped what’s possible in the cloud. Throughout it all, the AWS News Blog has been right here with you every week with our Weekly Roundup series, helping you stay informed and ready to take advantage of each new opportunity as it arrived. We’re grateful you’ve joined us on this journey, and we can’t wait to continue bringing you the latest AWS innovations when we return in January 2026.</p><p>Until then, happy building, and here’s to an even more exciting year ahead!</p><a href="https://link.codingmatheus.com/linkedin">Matheus Guimaraes | @codingmatheus</a></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ecs-amazon-cloudwatch-amazon-cognito-and-more-december-15-2025/"/>
    <updated>2025-12-15T17:42:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-reinvent-keynote-recap-on-demand-videos-and-more-december-8-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: AWS re:Invent keynote recap, on-demand videos, and more (December 8, 2025)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>The week after AWS re:Invent builds on the excitement and energy of the event and is a good time to learn more and understand how the recent announcements can help you solve your challenges and unlock new opportunities. As usual, we have you covered with our <a href="https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2025/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">top announcements of AWS re:Invent 2025</a> that you can learn all about here.</p><p>For me, one moment stood out above all the technical announcements: watching <a href="https://builder.aws.com/community/heroes/RaphaelQuisumbing?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Rafi (Raphael Francis Quisumbing)</a> from the Philippines receive the Now Go Build Award from <a href="https://www.allthingsdistributed.com/">Werner Vogels</a>. Rafi has been an AWS Hero since 2015 and co-lead of <a href="https://www.facebook.com/groups/AWSUGPH/">AWS User Group Philippines</a> since 2013. His dedication to building communities and empowering developers across the region embodies what this award represents. You can read more about Rafi on <a href="https://thekernel.news/#:~:text=Winner%20of%20the%202025%20Now%20Go%20Build%20Award%3A%20Raphael%20Quisumbing">The Kernel</a>. Congrats, Rafi!</p><p><img class="aligncenter size-full wp-image-102608 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/08/2025-news-nowgobuild-1.png" alt="" width="1830" height="1142" /></p><p><strong>The keynote recap: Agents, renaissance, and the developer’s role<br /></strong> This year’s AWS re:Invent keynotes painted a clear picture of where we’re headed.</p><p><a href="https://www.youtube.com/watch?v=q3Sb9PemsSo"><strong>Matt Garman</strong> </a>emphasized that developers are “the heart of AWS” and that “freedom to invent” remains AWS’s core mission after 20 years. 
He focused on AI agents as the next inflection point: “AI assistants are starting to give way to AI agents that can perform tasks and automate on your behalf. This is where we’re starting to see material business returns from your AI investments.”</p><p><a href="https://www.youtube.com/watch?v=prVdCIHlipg"><strong>Swami Sivasubramanian</strong></a> highlighted the transformative moment we’re in: “For the first time in history, we can describe what we want to accomplish in natural language, and agents generate the plan. They write the code, call the necessary tools, and execute the complete solution.” AWS is building production-ready infrastructure that’s secure, reliable, and scalable—purpose-built for the non-deterministic nature of agents.</p><p><a href="https://www.youtube.com/watch?v=JeUpUK0nhC0"><strong>Peter DeSantis and Dave Brown</strong></a> reinforced that the core attributes AWS has obsessed over for 20 years—security, availability, performance, elasticity, cost, and agility—are more important than ever in the AI era. Dave Brown showcased Graviton and AWS’s custom silicon innovations that deliver these attributes at scale.</p><p><a href="https://www.youtube.com/watch?v=3Y1G9najGiI"><strong>Werner Vogels</strong></a> delivered his final keynote after 14 years, introducing the concept of the “renaissance developer”—someone who is curious, thinks in systems, and communicates effectively. His message about AI and developer evolution resonated: “Will AI take my job? Maybe. Will AI make me obsolete? Absolutely not… if you evolve.” He emphasized that developers must be owners: “The work is yours, not that of the tools. 
You build it, you own it.”</p><p>You can also watch everything from keynotes and innovation talks to breakout sessions and more on the <a href="https://reinvent.awsevents.com/on-demand/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">on-demand video page</a>.</p><p><strong>Innovation Talks</strong></p><ul><li><a href="https://youtu.be/L_Q7LPB5HcA">Harnessing analytics for humans and AI (INV201)</a></li>
<li><a href="https://youtu.be/D0UkoghAVM0">AI agents in action: Architecting the future of applications (INV202)</a></li>
<li><a href="https://youtu.be/qHvm3oFmRls">The agent-enabled workplace: Transforming businesses with AI (INV203)</a></li>
<li><a href="https://youtu.be/tqHCjUSRKxc">Build and scale AI: from reliable agents to transformative systems (INV204)</a></li>
<li><a href="https://youtu.be/A8BYnqiHfeA">Reinventing software development with AI agents (INV205)</a></li>
<li><a href="https://youtu.be/2_Ev5YCO2Ik">Unlocking possibilities with AWS Compute (INV207)</a></li>
<li><a href="https://youtu.be/MBvyZENChk0">Databases made effortless so agents and developers can change the world (INV208)</a></li>
<li><a href="https://youtu.be/_Wi_4I40bqQ">The next frontier: Building the agentic future of Financial Services (INV209)</a></li>
<li><a href="https://youtu.be/pIiZupnNpPM">Infrastructure for the impossible: Turning public sector barriers into breakthroughs (INV210)</a></li>
<li><a href="https://youtu.be/zj44evAY_AA">Behind the curtain: How Amazon’s AI innovations are powered by AWS (INV211)</a></li>
<li><a href="https://youtu.be/6b1Ho9hr8-0">Migrate, modernize, and move your business into the AI era (INV212)</a></li>
<li><a href="https://youtu.be/RkdPAFJEPSA">The power of cloud network innovation (INV213)</a></li>
<li><a href="https://youtu.be/q3ZRbCTnB3U">Intelligent security: Protection at scale from development to production (INV214)</a></li>
<li><a href="https://youtu.be/beWO7h7Ut44">AWS storage beyond data boundaries: Building the data foundation (INV215)</a></li>
</ul><table><tbody><tr><td><strong>Breakout sessions — Topics</strong></td>
<td><strong>Breakout sessions — Segments</strong></td>
</tr><tr><td valign="top">
<ul><li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf8hzMmCGt2F--Oa6rF2rktF&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">Analytics</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf9m3loOlzEtdpobt-85B04t&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">Application Integration</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf9t-nSD6dYTv-szvZxsBeh0&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">Architecture</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf-UqnINCmXu-dDZJm_B3bbJ" data-sk="tooltip_parent">Artificial Intelligence</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf-H-1ygVzXLcZifTGGIreNK&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">Business Applications</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf8fUGVeljVNpQwugV7XBZRt" data-sk="tooltip_parent">Cloud Operations</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf_0uJ0iFTpJ6zhvGpSl-jsy" data-sk="tooltip_parent">Compute</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf9l1y_VZAm0vG2hproZMtip" data-sk="tooltip_parent">Database</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf8HawFRm2kL9935mQmLyGy1" data-sk="tooltip_parent">Developer Tools</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf_5JZDW_n_p6sGrZpWzPK-3&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">End-User Computing</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf9Vml7a6_rh4BNrLTTU0euv" data-sk="tooltip_parent">Hybrid Cloud &amp; Multi Cloud</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf_LuBI1bIWPHQi88G6QR6KM" data-sk="tooltip_parent">Industry Solutions</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf8Kb_IJfMCw7CA630dbJL7a" data-sk="tooltip_parent">Migration &amp; Modernization</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf8-QA1Hi5D-W2vRdc3h7FpK" data-sk="tooltip_parent">Networking &amp; Content Delivery</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf-a5wgEXBveQkE0MpHprsQt&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">Open Source</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf_1rHZpRPA6MDUBv_OKz6Qk&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">Security &amp; Identity</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf-Gzj7psv0r9d1u9_yYAGus&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">Serverless &amp; Containers</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf9MLj95GGJAqCTecdrkT6mQ&amp;trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c">Storage</a></li>
</ul></td>
<td valign="top">
<ul><li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf-SZMaPAmK7huvDT6GBF18B" data-sk="tooltip_parent">Developer Community</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf_NqSnDKx7Hbb9FrNQKmxg7&amp;trk=direct">Digital Native Business</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf_NqSnDKx7Hbb9FrNQKmxg7&amp;trk=direct">Enterprise</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf9BqAa6B07Xia-qmR23RHaJ&amp;trk=direct">Independent Software Vendor</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf_bh3ol1Obzb3P8n6BQENH9&amp;trk=direct">New to AWS</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf9iOBmyb15YPGkoMK1hAX_x&amp;trk=direct" data-sk="tooltip_parent">Partner Enablement</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf-W_5uNnQaCqUaXJHt37Luo&amp;trk=direct">Public Sector</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf9MBvsGS5rB-X7CMvG21ib-&amp;trk=direct">Senior Leaders</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf_Chz_eS8KUdzzbuACSV0CE&amp;trk=direct">Small &amp; Medium Business</a></li>
<li><a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf-yRZ2GBW1PJzz5cneMld9Z&amp;trk=direct">Startup</a></li>
</ul></td>
</tr></tbody></table><p><strong>Last week’s launches<br /></strong> Here are the launches that caught my attention that weren’t covered in our <a href="https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2025/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">top announcements of AWS re:Invent 2025</a> post:</p><ul><li><a href="https://kiro.dev/blog/introducing-kiro-autonomous-agent/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Kiro Autonomous Agent</a> – Building on Kiro’s general availability in November with team features, AWS introduced an autonomous agent that maintains awareness across sessions, learns from pull requests and feedback, and handles bug triage and code coverage improvements spanning multiple repositories. “Orders of magnitude more efficient” than first-generation AI coding tools, Matt Garman said. Kiro is now Amazon’s standard AI development environment company-wide.</li>
<li><a href="https://docs.aws.amazon.com/bedrock/latest/userguide/kb-multimodal.html?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Multimodal Retrieval for Bedrock Knowledge Bases (GA)</a> – Build AI-powered search and question-answering applications that work across text, images, audio, and video files. Developers can now ingest multimodal content with full control of parsing, chunking, embedding, and vector storage options, then send text or image queries to retrieve relevant segments across all media types.</li>
<li><a href="https://docs.aws.amazon.com/interconnect/latest/userguide/what-is.html?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Interconnect – Multicloud (Preview)</a> – Quickly establish private, secure, high-speed network connections with dedicated bandwidth and built-in resiliency between Amazon VPCs and other cloud environments. Starting in preview with Google Cloud as the first launch partner, with Microsoft Azure support coming in 2026.</li>
</ul><p>See <a href="https://aws.amazon.com/new/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS What’s New</a> for more launch news that I haven’t covered here. That’s all for this week. Check back next Monday for another Weekly Roundup!</p><p>Happy building!</p><p>— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p><p><em>This post is part of our <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/">Weekly Roundup series</a>. Check back each week for a quick roundup of interesting news and announcements from AWS!</em></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="9c33e027-6703-462e-9fdb-4e9d21c69b7c" data-title="AWS Weekly Roundup: AWS re:Invent keynote recap, on-demand videos, and more (December 8, 2025)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-reinvent-keynote-recap-on-demand-videos-and-more-december-8-2025/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-reinvent-keynote-recap-on-demand-videos-and-more-december-8-2025/"/>
    <updated>2025-12-08T18:05:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/improve-model-accuracy-with-reinforcement-fine-tuning-in-amazon-bedrock/</id>
    <title><![CDATA[Amazon Bedrock adds reinforcement fine-tuning, simplifying how developers build smarter, more accurate AI models]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Organizations face a challenging trade-off when adapting AI models to their specific business needs: settle for generic models that produce average results, or tackle the complexity and expense of advanced model customization. Traditional approaches force a choice between poor performance with smaller models or the high costs of deploying larger model variants and managing complex infrastructure. Reinforcement fine-tuning is an advanced technique that trains models using feedback instead of massive labeled datasets, but implementing it typically requires specialized ML expertise, complicated infrastructure, and significant investment—with no guarantee of achieving the accuracy needed for specific use cases.</p><p>Today, we’re announcing reinforcement fine-tuning in <a href="https://aws.amazon.com/bedrock/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Bedrock</a>, a new <a href="https://aws.amazon.com/bedrock/customize/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">model customization</a> capability that creates smarter, more cost-effective models that learn from feedback and deliver higher-quality outputs for specific business needs. 
Reinforcement fine-tuning uses a feedback-driven approach where models improve iteratively based on reward signals, delivering 66% accuracy gains on average over base models.</p><p>Amazon Bedrock automates the reinforcement fine-tuning workflow, making this advanced model customization technique accessible to everyday developers without requiring deep <a href="https://aws.amazon.com/ai/machine-learning/">machine learning (ML)</a> expertise or large labeled datasets.</p><p><strong>How reinforcement fine-tuning works<br /></strong> Reinforcement fine-tuning is built on top of <a href="https://aws.amazon.com/what-is/reinforcement-learning/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">reinforcement learning</a> principles to address a common challenge: getting models to consistently produce outputs that align with business requirements and user preferences.</p><p>While traditional fine-tuning requires large, labeled datasets and expensive human annotation, reinforcement fine-tuning takes a different approach. Instead of learning from fixed examples, it uses reward functions to evaluate and judge which responses are considered good for particular business use cases. This teaches models to understand what makes a quality response without requiring massive amounts of pre-labeled training data, making advanced model customization in Amazon Bedrock more accessible and cost-effective.</p><p>Here are the benefits of using reinforcement fine-tuning in Amazon Bedrock:</p><ul><li><strong>Ease of use</strong> – Amazon Bedrock automates much of the complexity, making reinforcement fine-tuning more accessible to developers building AI applications. Models can be trained using existing API logs in Amazon Bedrock or by uploading datasets as training data, eliminating the need for labeled datasets or infrastructure setup.</li>
<li><strong>Better model performance</strong> – Reinforcement fine-tuning improves model accuracy by 66% on average over base models, enabling optimization for price and performance by training smaller, faster, and more efficient model variants. This works with the <a href="https://aws.amazon.com/nova/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Nova 2 Lite</a> model, improving quality and price performance for specific business needs, with support for additional models coming soon.</li>
<li><strong>Security –</strong> Data remains within the secure AWS environment throughout the entire customization process, mitigating security and compliance concerns.</li>
</ul><p>The capability supports two complementary approaches to provide flexibility for optimizing models:</p><ul><li><strong>Reinforcement Learning with Verifiable Rewards (RLVR)</strong> uses rule-based graders for objective tasks like code generation or math reasoning.</li>
<li><strong>Reinforcement Learning from AI Feedback (RLAIF)</strong> employs AI-based judges for subjective tasks like instruction following or content moderation.</li>
</ul><p><strong>Getting started with reinforcement fine-tuning in Amazon Bedrock</strong><br />Let’s walk through creating a reinforcement fine-tuning job.</p><p>First, I access the <a href="https://console.aws.amazon.com/bedrock/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Bedrock console</a>. Then, I navigate to the <strong>Custom models</strong> page. I choose <strong>Create</strong> and then choose <strong>Reinforcement fine-tuning job</strong>.</p><p><img class="aligncenter wp-image-102279 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-news-bedrock-rev-3-1.png" alt="" width="1440" height="917" /></p><p>I start by entering the name of this customization job and then select my base model. At launch, reinforcement fine-tuning supports <a href="https://aws.amazon.com/nova/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Nova 2 Lite</a>, with support for additional models coming soon.</p><p><img class="aligncenter size-full wp-image-101326 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-news-bedrock-rl-2-0.png" alt="" width="1146" height="607" /></p><p>Next, I need to provide training data. I can use my stored invocation logs directly, eliminating the need to upload separate datasets. I can also upload new JSONL files or select existing datasets from <a href="https://aws.amazon.com/s3/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a>. Reinforcement fine-tuning automatically validates my training dataset and supports the OpenAI Chat Completions data format. 
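To make the expected shape concrete, here is a minimal sketch of one training record in the Chat Completions message format; the prompt and response text are invented for illustration, and any additional fields Bedrock expects are described in the dataset preparation documentation.

```python
import json

def chat_completions_record(system: str, user: str, assistant: str) -> dict:
    """Build one training example using the Chat Completions message shape."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }

# One JSON object per line produces a JSONL training file.
record = chat_completions_record(
    "You are a concise support assistant.",
    "How do I rotate my access keys?",
    "Create a new key, update your applications to use it, then disable the old key.",
)
line = json.dumps(record)
```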
If I provide invocation logs in the Amazon Bedrock <a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html">invoke</a> or <a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html">converse</a> format, Amazon Bedrock automatically converts them to the Chat Completions format.</p><p><img class="aligncenter wp-image-102280 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-news-bedrock-rev-3-2.png" alt="" width="1337" height="381" /></p><p>The reward function setup is where I define what constitutes a good response. I have two options here. For objective tasks, I can select <strong>Custom code</strong> and write custom Python code that gets executed through <a href="https://aws.amazon.com/lambda/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Lambda</a> functions. For more subjective evaluations, I can select <strong>Model as judge</strong> to use <a href="https://aws.amazon.com/what-is/foundation-models/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">foundation models (FMs)</a> as judges by providing evaluation instructions.</p><p>Here, I select <strong>Custom code</strong>, and I create a new Lambda function or use an existing one as a reward function. 
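As an illustration of what such a reward function could compute, here is a toy rule-based grader wrapped in a Lambda handler; the event and return shapes are assumptions for illustration only, and the exact contract is defined by the provided templates and the reinforcement fine-tuning documentation.

```python
def score_response(model_output: str, expected: str) -> float:
    """Toy rule-based grader: full credit for an exact match, else token overlap."""
    if model_output.strip() == expected.strip():
        return 1.0
    expected_tokens = set(expected.split())
    if not expected_tokens:
        return 0.0
    overlap = expected_tokens & set(model_output.split())
    return len(overlap) / len(expected_tokens)

def lambda_handler(event, context):
    # Hypothetical event keys; check the documentation for the real payload shape.
    output = event.get("modelOutput", "")
    expected = event.get("groundTruth", "")
    return {"reward": score_response(output, expected)}
```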
I can start with one of the provided templates and customize it for my specific needs.</p><p><img class="aligncenter wp-image-102281 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-news-bedrock-rev-3-3.png" alt="" width="1440" height="474" /></p><p>I can optionally modify default hyperparameters like learning rate, batch size, and epochs.</p><p><img class="aligncenter wp-image-102282 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-news-bedrock-rev-3-4.png" alt="" width="1334" height="1133" /></p><p>For enhanced security, I can configure virtual private cloud (VPC) settings and <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> encryption to meet my organization’s compliance requirements. Then, I choose <strong>Create</strong> to start the model customization job.</p><p><img class="aligncenter wp-image-102284 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-news-bedrock-rev-3-6.png" alt="" width="1264" height="882" /></p><p>During the training process, I can monitor real-time metrics to understand how the model is learning. The training metrics dashboard shows key performance indicators including reward scores, loss curves, and accuracy improvements over time. 
These metrics help me understand whether the model is converging properly and if the reward function is effectively guiding the learning process.</p><p><img class="aligncenter size-full wp-image-102206 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-bedrock-rev-4.png" alt="" width="2207" height="2279" /></p><p>When the reinforcement fine-tuning job is completed, I can see the final job status on the <strong>Model details</strong> page.</p><p><img class="aligncenter size-full wp-image-102207 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-bedrock-rev-3.png" alt="" width="2194" height="1388" /></p><p>Once the job is completed, I can deploy the model with a single click. I select <strong>Set up inference</strong>, then choose <strong>Deploy for on-demand</strong>.</p><p><img class="aligncenter size-full wp-image-102208 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-bedrock-rev-5.png" alt="" width="2224" height="957" /></p><p>Here, I provide a few details for my model.</p><p><img class="aligncenter size-full wp-image-102209 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-bedrock-rev-6.png" alt="" width="2659" height="1269" /></p><p>After deployment, I can quickly evaluate the model’s performance using the Amazon Bedrock playground. This helps me to test the fine-tuned model with sample prompts and compare its responses against the base model to validate the improvements. 
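The same base-versus-tuned comparison can also be scripted; assuming boto3 credentials and a deployed model, a sketch using the Bedrock Runtime Converse API might look like this (the model identifiers below are placeholders, not real deployment IDs):

```python
def build_user_message(prompt: str) -> dict:
    """Construct a Converse-format user message."""
    return {"role": "user", "content": [{"text": prompt}]}

def ask(model_id: str, prompt: str) -> str:
    """Send one user turn to a Bedrock model via the Converse API."""
    import boto3  # imported here so build_user_message stays testable without AWS access
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[build_user_message(prompt)],
    )
    return response["output"]["message"]["content"][0]["text"]

# e.g. compare the base model against the fine-tuned deployment (IDs are placeholders):
# base = ask("base-model-id", "Classify this support ticket: ...")
# tuned = ask("custom-model-deployment-id", "Classify this support ticket: ...")
```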
I select <strong>Test in playground.</strong></p><p><img class="aligncenter size-full wp-image-102210 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-bedrock-rev-7.png" alt="" width="2166" height="1102" /></p><p>The playground provides an intuitive interface for rapid testing and iteration, helping me confirm that the model meets my quality requirements before integrating it into production applications.</p><p><img class="aligncenter size-full wp-image-102211 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-bedrock-rev-8.png" alt="" width="2783" height="1628" /></p><p><strong>Interactive demo</strong><br />Learn more by navigating an interactive demo of <a href="https://aws.storylane.io/share/2wbkrcppkxdr">Amazon Bedrock reinforcement fine-tuning</a> in action.</p><p><img class="aligncenter wp-image-102214 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-bedrock-rev-9.png" alt="" width="1798" height="967" /></p><p><strong>Additional things to know</strong><br />Here are key points to note:</p><ul><li><strong>Templates —</strong> There are seven ready-to-use reward function templates covering common use cases for both objective and subjective tasks.</li>
<li><strong>Pricing —</strong> To learn more about pricing, refer to the <a href="https://aws.amazon.com/bedrock/pricing/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Bedrock pricing page</a>.</li>
<li><strong>Security —</strong> Training data and custom models remain private and aren’t used to improve FMs for public use. The capability supports VPC and AWS KMS encryption for enhanced security.</li>
</ul><p>Get started with reinforcement fine-tuning by visiting the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/reinforcement-fine-tuning.html">reinforcement fine-tuning documentation</a> and by accessing the <a href="https://console.aws.amazon.com/bedrock?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Bedrock console</a>.</p><p>Happy building!<br />— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="e8293dcd-0524-43ea-8e86-7520bc50b472" data-title="Amazon Bedrock adds reinforcement ﬁne-tuning simplifying how developers build smarter, more accurate AI models" data-url="https://aws.amazon.com/blogs/aws/improve-model-accuracy-with-reinforcement-fine-tuning-in-amazon-bedrock/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/improve-model-accuracy-with-reinforcement-fine-tuning-in-amazon-bedrock/"/>
    <updated>2025-12-03T17:08:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/new-serverless-customization-in-amazon-sagemaker-ai-accelerates-model-fine-tuning/</id>
    <title><![CDATA[New serverless customization in Amazon SageMaker AI accelerates model fine-tuning]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, I’m happy to announce new serverless customization in <a href="https://aws.amazon.com/sagemaker/ai/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker AI</a> for popular AI models, such as <a href="https://aws.amazon.com/ai/generative-ai/nova/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Nova</a>, <a href="https://aws.amazon.com/bedrock/deepseek/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">DeepSeek</a>, <a href="https://aws.amazon.com/bedrock/openai/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">GPT-OSS</a>, <a href="https://aws.amazon.com/bedrock/meta/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Llama</a>, and <a href="https://aws.amazon.com/bedrock/qwen/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Qwen</a>. The new customization capability provides an easy-to-use interface for the latest fine-tuning techniques like reinforcement learning, so you can accelerate the AI model customization process from months to days.</p><p>With a few clicks, you can seamlessly select a model and customization technique, and handle model evaluation and deployment—all entirely serverless so you can focus on model tuning rather than managing infrastructure. When you choose serverless customization, SageMaker AI automatically selects and provisions the appropriate compute resources based on the model and data size.</p><p><strong class="c6">Getting started with serverless model customization</strong><br />You can get started customizing models in <a href="https://aws.amazon.com/sagemaker/ai/studio/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker Studio</a>. 
Choose <strong>Models</strong> in the left navigation pane and browse the AI models available for customization.</p><p><img class="aligncenter size-full wp-image-101130" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025-sagemaker-ai-custom-model-1-models.jpg" alt="" width="2560" height="1393" data-wp-editing="1" /></p><p><strong>Customize with UI</strong><br />You can customize AI models in only a few clicks. In the <strong>Customize model</strong> dropdown list for a specific model such as <strong>Meta Llama 3.1 8B Instruct</strong>, choose <strong>Customize with UI</strong>.</p><p><img class="aligncenter wp-image-101777 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/2025-sagemaker-ai-custom-model-2-with-ui.jpg" alt="" width="1989" height="2560" /></p><p>You can select a customization technique used to adapt the base model to your use case. SageMaker AI supports <strong>Supervised Fine-Tuning</strong> and the latest model customization techniques including <strong>Direct Preference Optimization</strong>, <strong>Reinforcement Learning from Verifiable Rewards (RLVR)</strong>, and <strong>Reinforcement Learning from AI Feedback (RLAIF)</strong>. Each technique optimizes models in different ways, with selection influenced by factors such as dataset size and quality, available computational resources, the task at hand, desired accuracy levels, and deployment constraints.</p><p>Upload or select a training dataset that matches the format required by the selected customization technique. Use the batch size, learning rate, and number of epochs recommended for the selected technique. You can configure advanced settings such as hyperparameters, a newly introduced serverless MLflow application for experiment tracking, and network and storage volume encryption. 
Choose <strong>Submit</strong> to get started on your model training job.</p><p>After your training job is complete, you can see the models you created in the <strong>My Models</strong> tab. Choose <strong>View details</strong> in one of your models.</p><p><img class="aligncenter wp-image-101144 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025-sagemaker-ai-custom-model-3-with-ui-1.jpg" alt="" width="2076" height="1396" /></p><p>By choosing <strong>Continue customization</strong>, you can continue to customize your model by adjusting hyperparameters or training with different techniques. By choosing <strong>Evaluate</strong>, you can evaluate your customized model to see how it performs compared to the base model.</p><p>When you complete both jobs, you can choose either the <strong>SageMaker</strong> or <strong>Bedrock</strong> in the <strong>Deploy</strong> dropdown list to deploy your model.</p><p><img class="aligncenter wp-image-101778 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/2025-sagemaker-ai-custom-model-4-with-ui-1-1.jpg" alt="" width="2040" height="1460" /></p><p>You can choose <a href="https://aws.amazon.com/bedrock/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock</a> for serverless inference. Choose <strong>Bedrock</strong> and the model name to deploy the model into Amazon Bedrock. 
To find your deployed models, choose <strong>Imported models</strong> in the <a href="https://console.aws.amazon.com/bedrock?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Bedrock console</a>.</p><p><img class="aligncenter wp-image-101781 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/2025-sagemaker-ai-custom-model-5-with-ui-deploy-bedrock.jpg" alt="" width="2392" height="1050" /></p><p>You can also deploy your model to a SageMaker AI inference endpoint if you want to control your deployment resources such as instance type and instance count. After the SageMaker AI deployment is <strong>In service</strong>, you can use this endpoint to perform inference. In the <strong>Playground</strong> tab, you can test your customized model in single-prompt or chat mode.</p><p><img class="aligncenter wp-image-101444 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/2025-sagemaker-ai-custom-model-6-with-ui-1.jpg" alt="" width="2230" height="2014" /></p><p>With the serverless MLflow capability, you can automatically log all critical experiment metrics without modifying code and access rich visualizations for further analysis.</p><p><strong>Customize with code</strong><br />When you choose to customize with code, you can see a sample notebook to fine-tune or deploy AI models. If you want to edit the sample notebook, open it in JupyterLab. 
Alternatively, you can deploy the model immediately by choosing <strong>Deploy</strong>.</p><p><img class="aligncenter wp-image-101784 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/2025-sagemaker-ai-custom-model-3-with-code-1-tune-1.jpg" alt="" width="2278" height="1446" /></p><p>You can choose the Amazon Bedrock or SageMaker AI endpoint by selecting the deployment resources either from <a href="https://aws.amazon.com/sagemaker/ai/deploy/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker Inference</a> or <a href="https://aws.amazon.com/sagemaker/ai/hyperpod/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker HyperPod</a>.</p><p><img class="aligncenter size-full wp-image-101160 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025-sagemaker-ai-custom-model-3-with-code-2-deploy.jpg" alt="" width="2220" height="1262" /></p><p>When you choose <strong>Deploy</strong> on the bottom right of the page, you will be redirected back to the model detail page. After the SageMaker AI deployment is in service, you can use this endpoint to perform inference.</p><p>You’ve now seen how to streamline model customization in SageMaker AI, whether through the UI or with code. To learn more, visit the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker AI Developer Guide</a>.</p><p><strong class="c6">Now available</strong><br /><a href="https://aws.amazon.com/sagemaker/ai/model-customization/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">New serverless AI model customization</a> in Amazon SageMaker AI is now available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. You only pay for the tokens processed during training and inference. 
For details, visit the <a href="https://aws.amazon.com/sagemaker/ai/pricing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker AI pricing page</a>.</p><p>Give it a try in <a href="https://console.aws.amazon.com/sagemaker?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker Studio</a> and send feedback to <a href="https://repost.aws/tags/TAT80swPyVRPKPcA0rsJYPuA/amazon-sagemaker?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for SageMaker</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy">Channy</a></p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="51695978-8bb2-471e-b8b5-6be8c2638279" data-title="New serverless customization in Amazon SageMaker AI accelerates model fine-tuning" data-url="https://aws.amazon.com/blogs/aws/new-serverless-customization-in-amazon-sagemaker-ai-accelerates-model-fine-tuning/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/new-serverless-customization-in-amazon-sagemaker-ai-accelerates-model-fine-tuning/"/>
    <updated>2025-12-03T17:08:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-checkpointless-and-elastic-training-on-amazon-sagemaker-hyperpod/</id>
    <title><![CDATA[Introducing checkpointless and elastic training on Amazon SageMaker HyperPod]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing two new AI model training features within <a href="https://aws.amazon.com/sagemaker/ai/hyperpod/">Amazon SageMaker HyperPod</a>: checkpointless training, an approach that removes the need for traditional checkpoint-based recovery by enabling peer-to-peer state recovery, and elastic training, enabling AI workloads to automatically scale based on resource availability.</p><ul><li><strong>Checkpointless training</strong> – Checkpointless training eliminates disruptive checkpoint-restart cycles, maintaining forward training momentum despite failures and reducing recovery time from hours to minutes. Accelerate your AI model development, reclaim days from development timelines, and confidently scale training workflows to thousands of AI accelerators.</li>
<li><strong>Elastic training</strong> – Elastic training maximizes cluster utilization as training workloads automatically expand to use idle capacity as it becomes available, and contract to yield resources as higher-priority workloads such as inference peak. Save hours of engineering time each week otherwise spent reconfiguring training jobs based on compute availability.</li>
</ul><p>Rather than spending time managing training infrastructure, these new training techniques mean that your team can concentrate entirely on enhancing model performance, ultimately getting your AI models to market faster. By eliminating the traditional checkpoint dependencies and fully utilizing available capacity, you can significantly reduce model training completion times.</p><p><strong class="c6">Checkpointless training: How it works</strong><br />Traditional checkpoint-based recovery has these sequential job stages: 1) job termination and restart, 2) process discovery and network setup, 3) checkpoint retrieval, 4) data loader initialization, and 5) training loop resumption. When failures occur, each stage can become a bottleneck and training recovery can take up to an hour on self-managed training clusters. The entire cluster must wait for every single stage to complete before training can resume. This can lead to the entire training cluster sitting idle during recovery operations, which increases costs and extends the time to market.</p><p>Checkpointless training removes this bottleneck entirely by maintaining continuous model state preservation across the training cluster. When failures occur, the system instantly recovers by using healthy peers, avoiding the need for a checkpoint-based recovery that requires restarting the entire job. As a result, checkpointless training enables fault recovery in minutes.</p><p><img class="aligncenter size-full wp-image-101253 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-sageamker-hyperpod-checkpointless-training.gif" alt="" width="800" height="592" /></p><p>Checkpointless training is designed for incremental adoption and built on four core components that work together: 1) collective communications initialization optimizations, 2) memory-mapped data loading that enables caching, 3) in-process recovery, and 4) checkpointless peer-to-peer state replication. 
These components are orchestrated through the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-eks-operator.html">HyperPod training operator</a> that is used to launch the job. Each component optimizes a specific step in the recovery process, and together they enable automatic detection and recovery of infrastructure faults in minutes with zero manual intervention, even with thousands of AI accelerators. You can progressively enable each of these features as your training scales.</p><p>The latest <a href="https://aws.amazon.com/nova/">Amazon Nova</a> models were trained using this technology on tens of thousands of accelerators. Additionally, based on internal studies on cluster sizes ranging from 16 to over 2,000 GPUs, checkpointless training showcased significant improvements in recovery times, reducing downtime by over 80% compared to traditional checkpoint-based recovery.</p><p>To learn more, visit <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/hyperpod-checkpointless-training.html">HyperPod Checkpointless Training</a> in the Amazon SageMaker AI Developer Guide.</p><p><strong class="c6">Elastic training: How it works</strong><br />On clusters that run different types of modern AI workloads, accelerator availability can change continuously throughout the day as short-duration training runs complete, inference spikes occur and subside, or resources free up from completed experiments. Despite this dynamic availability of AI accelerators, traditional training workloads remain locked into their initial compute allocation, unable to take advantage of idle accelerators without manual intervention. 
This rigidity leaves valuable GPU capacity unused and prevents organizations from maximizing their infrastructure investment.</p><p><img class="size-full wp-image-101255 alignright c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-sageamker-hyperpod-elastic-training.gif" alt="" width="400" height="437" />Elastic training transforms how training workloads interact with cluster resources. Training jobs can automatically scale up to utilize available accelerators and gracefully contract when resources are needed elsewhere, all while maintaining training quality.</p><p>Workload elasticity is enabled through the HyperPod training operator that orchestrates scaling decisions through integration with the Kubernetes control plane and resource scheduler. It continuously monitors cluster state through three primary channels: pod lifecycle events, node availability changes, and resource scheduler priority signals. This comprehensive monitoring enables near-instantaneous detection of scaling opportunities, whether from newly available resources or requests from higher-priority workloads.</p><p>The scaling mechanism relies on adding and removing data parallel replicas. When additional compute resources become available, new data parallel replicas join the training job, accelerating throughput. Conversely, during scale-down events (for example, when a higher-priority workload requests resources), the system scales down by removing replicas rather than terminating the entire job, allowing training to continue at reduced capacity.</p><p>Across different scales, the system preserves the global batch size and adapts learning rates, preventing model convergence from being adversely impacted. 
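</p><p>The batch-size bookkeeping described above can be sketched in a few lines of Python. This is an illustrative sketch of the invariant only, not part of any SageMaker API: however many data parallel replicas are active, each processes an equal share, so the global batch size is unchanged.</p><pre class="lang-python">def per_replica_batch(global_batch_size, num_replicas):
    """Preserve the global batch size as replicas join or leave:
    each data-parallel replica processes global_batch / replicas samples."""
    assert global_batch_size % num_replicas == 0, "global batch must divide evenly"
    return global_batch_size // num_replicas

# Scaling from 8 to 16 replicas halves each replica's share, not the global batch
for n in (8, 12, 16):
    assert per_replica_batch(1536, n) * n == 1536</pre><p>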
This enables workloads to dynamically scale up or down to utilize available AI accelerators without any manual intervention.</p><p>You can start elastic training through the HyperPod recipes for publicly available foundation models (FMs) including Llama and GPT-OSS. Additionally, you can modify your PyTorch training scripts to add elastic event handlers, which enable the job to dynamically scale.</p><p>To learn more, visit <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/hyperpod-elastic-training.html">HyperPod Elastic Training</a> in the Amazon SageMaker AI Developer Guide. To get started, find the <a href="https://github.com/aws/sagemaker-hyperpod-recipes">HyperPod recipes</a> available in the AWS GitHub repository.</p><p><strong class="c6">Now available</strong><br />Both of these new HyperPod features are available in <mark>[REGION LIST]</mark> AWS Regions. You can use these training techniques at no additional cost. To learn more, visit the <a href="https://aws.amazon.com/sagemaker/hyperpod?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">SageMaker HyperPod product page</a> and <a href="https://aws.amazon.com/sagemaker/pricing?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">SageMaker AI pricing page</a>.</p><p>Give it a try and send feedback to <a href="https://repost.aws/tags/TAT80swPyVRPKPcA0rsJYPuA/amazon-sagemaker">AWS re:Post for SageMaker</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy">Channy</a></p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="e9ff4725-9aa0-45f0-8719-079fb6a71e9b" data-title="Introducing checkpointless and elastic training on Amazon SageMaker HyperPod" data-url="https://aws.amazon.com/blogs/aws/introducing-checkpointless-and-elastic-training-on-amazon-sagemaker-hyperpod/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-checkpointless-and-elastic-training-on-amazon-sagemaker-hyperpod/"/>
    <updated>2025-12-03T17:07:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/announcing-replication-support-and-intelligent-tiering-for-amazon-s3-tables/</id>
    <title><![CDATA[Announcing replication support and Intelligent-Tiering for Amazon S3 Tables]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing two new capabilities for <a href="https://aws.amazon.com/s3/features/tables/">Amazon S3 Tables</a>: support for the new Intelligent-Tiering storage class that automatically optimizes costs based on access patterns, and replication support to automatically maintain consistent <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables.html">Apache Iceberg</a> table replicas across <a href="https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html">AWS Regions</a> and <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#account">accounts</a> without manual sync.</p><p>Organizations working with tabular data face two common challenges. First, they need to manually manage storage costs as their datasets grow and access patterns change over time. Second, when maintaining replicas of Iceberg tables across Regions or accounts, they must build and maintain complex architectures to track updates, manage object replication, and handle metadata transformations.</p><p><strong>S3 Tables Intelligent-Tiering storage class<br /></strong> With the S3 Tables Intelligent-Tiering storage class, data is automatically tiered to the most cost-effective access tier based on access patterns. Data is stored in three low-latency tiers: Frequent Access, Infrequent Access (40% lower cost than Frequent Access), and Archive Instant Access (68% lower cost compared to Infrequent Access). After 30 days without access, data moves to Infrequent Access, and after 90 days, it moves to Archive Instant Access. This happens without changes to your applications or impact on performance.</p><p>Table maintenance activities, including compaction, snapshot expiration, and unreferenced file removal, operate without affecting the data’s access tiers. 
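</p><p>The tiering rules above can be summarized in a short Python sketch. The function and the relative-cost table are illustrative only, derived from the thresholds and percentages quoted above; they are not a service API:</p><pre class="lang-python">def access_tier(days_since_last_access):
    """Access tier an object lands in under S3 Tables Intelligent-Tiering,
    per the thresholds described above (illustrative sketch)."""
    if days_since_last_access >= 90:
        return "ARCHIVE_INSTANT_ACCESS"
    if days_since_last_access >= 30:
        return "INFREQUENT_ACCESS"
    return "FREQUENT_ACCESS"

# Relative storage cost per GB, with Frequent Access as 1.0:
# Infrequent Access is 40% cheaper; Archive Instant Access is 68% cheaper again
RELATIVE_COST = {
    "FREQUENT_ACCESS": 1.0,
    "INFREQUENT_ACCESS": 0.60,
    "ARCHIVE_INSTANT_ACCESS": 0.60 * 0.32,  # about 0.19
}

assert access_tier(10) == "FREQUENT_ACCESS"
assert access_tier(45) == "INFREQUENT_ACCESS"
assert access_tier(120) == "ARCHIVE_INSTANT_ACCESS"</pre><p>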
Compaction automatically processes only data in the Frequent Access tier, optimizing performance for actively queried data while reducing maintenance costs by skipping colder files in lower-cost tiers.</p><p>By default, all existing tables use the Standard storage class. When creating new tables, you can specify Intelligent-Tiering as the storage class, or you can rely on the default storage class configured at the table bucket level. You can set Intelligent-Tiering as the default storage class for your table bucket to automatically store tables in Intelligent-Tiering when no storage class is specified during creation.</p><p><strong>Let me show you how it works<br /></strong> You can use the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a> and the <code>put-table-bucket-storage-class</code> and <code>get-table-bucket-storage-class</code> commands to change or verify the storage class of your S3 table bucket.</p><pre class="lang-sh"># Change the storage class
aws s3tables put-table-bucket-storage-class \
   --table-bucket-arn $TABLE_BUCKET_ARN \
   --storage-class-configuration storageClass=INTELLIGENT_TIERING
# Verify the storage class
aws s3tables get-table-bucket-storage-class \
   --table-bucket-arn $TABLE_BUCKET_ARN
{ "storageClassConfiguration":
   { 
      "storageClass": "INTELLIGENT_TIERING"
   }
}</pre><p><strong>S3 Tables replication support<br /></strong> The new S3 Tables replication support helps you maintain consistent read replicas of your tables across AWS Regions and accounts. You specify the destination table bucket and the service creates read-only replica tables. It replicates all updates chronologically while preserving parent-child snapshot relationships. Table replication helps you build global datasets to minimize query latency for geographically distributed teams, meet compliance requirements, and provide data protection.</p><p>You can now easily create replica tables that deliver similar query performance as their source tables. Replica tables are updated within minutes of source table updates and support independent encryption and retention policies from their source tables. Replica tables can be queried using <a href="https://aws.amazon.com/sagemaker/unified-studio/">Amazon SageMaker Unified Studio</a> or any Iceberg-compatible engine including <a href="https://duckdb.org/">DuckDB</a>, <a href="https://py.iceberg.apache.org/">PyIceberg</a>, <a href="https://spark.apache.org/">Apache Spark</a>, and <a href="https://trino.io/">Trino</a>.</p><p>You can create and maintain replicas of your tables through the <a href="https://console.aws.amazon.com">AWS Management Console</a> or APIs and <a href="https://aws.amazon.com/tools/">AWS SDKs</a>. You specify one or more destination table buckets to replicate your source tables. When you turn on replication, S3 Tables automatically creates read-only replica tables in your destination table buckets, backfills them with the latest state of the source table, and continually monitors for new updates to keep replicas in sync. This helps you meet time-travel and audit requirements while maintaining multiple replicas of your data.</p><p><strong>Let me show you how it works<br /></strong> To show you how it works, I proceed in three steps. 
First, I create an S3 table bucket, create an Iceberg table, and populate it with data. Second, I configure the replication. Third, I connect to the replicated table and query the data to show you that changes are replicated.</p><p>For this demo, the S3 team kindly gave me access to an <a href="https://aws.amazon.com/emr">Amazon EMR</a> cluster already provisioned. You can follow <a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-gs.html">the Amazon EMR documentation to create your own cluster</a>. They also created two S3 table buckets, a source and a destination for the replication. Again, <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-buckets-create.html">the S3 Tables documentation will help you get started</a>.</p><p>I take note of the two S3 Tables bucket Amazon Resource Names (ARNs). In this demo, I refer to these as the environment variables <code>SOURCE_TABLE_ARN</code> and <code>DEST_TABLE_ARN</code>.</p><p><strong>First step: Prepare the source database</strong></p><p>I start a terminal, connect to the EMR cluster, start a Spark session, create a table, and insert a row of data. The commands I use in this demo are documented in <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-open-source.html">Accessing tables using the Amazon S3 Tables Iceberg REST endpoint</a>.</p><pre class="lang-spark">sudo spark-shell \
--packages "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.4.1,software.amazon.awssdk:bundle:2.20.160,software.amazon.awssdk:url-connection-client:2.20.160" \
--master "local[*]" \
--conf "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions" \
--conf "spark.sql.defaultCatalog=spark_catalog" \
--conf "spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog" \
--conf "spark.sql.catalog.spark_catalog.type=rest" \
--conf "spark.sql.catalog.spark_catalog.uri=https://s3tables.us-east-1.amazonaws.com/iceberg" \
--conf "spark.sql.catalog.spark_catalog.warehouse=arn:aws:s3tables:us-east-1:012345678901:bucket/aws-news-blog-test" \
--conf "spark.sql.catalog.spark_catalog.rest.sigv4-enabled=true" \
--conf "spark.sql.catalog.spark_catalog.rest.signing-name=s3tables" \
--conf "spark.sql.catalog.spark_catalog.rest.signing-region=us-east-1" \
--conf "spark.sql.catalog.spark_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO" \
--conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider" \
--conf "spark.sql.catalog.spark_catalog.rest-metrics-reporting-enabled=false"
spark.sql("""
CREATE TABLE s3tablesbucket.test.aws_news_blog (
customer_id STRING,
address STRING
) USING iceberg
""")
spark.sql("INSERT INTO s3tablesbucket.test.aws_news_blog VALUES ('cust1', 'val1')")
spark.sql("SELECT * FROM s3tablesbucket.test.aws_news_blog LIMIT 10").show()
+-----------+-------+
|customer_id|address|
+-----------+-------+
|      cust1|   val1|
+-----------+-------+</pre><p>So far, so good.</p><p><strong>Second step: Configure the replication for S3 Tables</strong></p><p>Now, I use the CLI on my laptop to configure the S3 table bucket replication.</p><p>Before doing so, I create an <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> policy to authorize the replication service to access my S3 table bucket and encryption keys. <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-replication-tables.html">Refer to the S3 Tables replication documentation for the details</a>. The permissions I used for this demo are:</p><pre class="lang-json">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "s3tables:*",
                "kms:DescribeKey",
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "*"
        }
    ]
}</pre><p>After having created this IAM policy, I can now proceed and configure the replication:</p><pre class="lang-sh">aws s3tables-replication put-table-replication \
--table-arn ${SOURCE_TABLE_ARN} \
--configuration  '{
    "role": "arn:aws:iam::&lt;MY_ACCOUNT_NUMBER&gt;:role/S3TableReplicationManualTestingRole", 
    "rules":[
        {
            "destinations": [
                {
                    "destinationTableBucketARN": "${DEST_TABLE_ARN}"
                }]
        }
    ]
}'
</pre><p>The replication starts automatically. Updates are typically replicated within minutes. The time it takes to complete depends on the volume of data in the source table.</p><p><strong>Third step: Connect to the replicated table and query the data</strong></p><p>Now, I connect to the EMR cluster again, and I start a second Spark session. This time, I use the destination table.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/14/2025-11-14_13-59-13.png"><img class="aligncenter size-full wp-image-100986" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/14/2025-11-14_13-59-13.png" alt="S3 Tables replication - destination table" width="802" height="424" /></a></p><p>To verify the replication works, I insert a second row of data on the source table.</p><pre class="lang-spark">spark.sql("INSERT INTO s3tablesbucket.test.aws_news_blog VALUES ('cust2', 'val2')")
</pre><p>I wait a few minutes for the replication to trigger. I follow the status of the replication with the <code>get-table-replication-status</code> command.</p><pre class="lang-sh">aws s3tables-replication get-table-replication-status \
--table-arn ${SOURCE_TABLE_ARN}
{
    "sourceTableArn": "arn:aws:s3tables:us-east-1:012345678901:bucket/manual-test/table/e0fce724-b758-4ee6-85f7-ca8bce556b41",
    "destinations": [
        {
            "replicationStatus": "pending",
            "destinationTableBucketArn": "arn:aws:s3tables:us-east-1:012345678901:bucket/manual-test-dst",
            "destinationTableArn": "arn:aws:s3tables:us-east-1:012345678901:bucket/manual-test-dst/table/5e3fb799-10dc-470d-a380-1a16d6716db0",
            "lastSuccessfulReplicatedUpdate": {
                "metadataLocation": "s3://e0fce724-b758-4ee6-8-i9tkzok34kum8fy6jpex5jn68cwf4use1b-s3alias/e0fce724-b758-4ee6-85f7-ca8bce556b41/metadata/00001-40a15eb3-d72d-43fe-a1cf-84b4b3934e4c.metadata.json",
                "timestamp": "2025-11-14T12:58:18.140281+00:00"
            }
        }
    ]
}</pre><p>When the replication status shows <code>ready</code>, I connect to the EMR cluster and query the destination table. As expected, I see the new row of data.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/14/2025-11-14_14-44-40.png"><img class="aligncenter size-full wp-image-100987" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/14/2025-11-14_14-44-40.png" alt="S3 Tables replication - target table is up to date" width="778" height="126" /></a></p><p><strong>Additional things to know<br /></strong> Here are a couple of additional points to pay attention to:</p><ul><li>Replication for S3 Tables supports both Apache Iceberg V2 and V3 table formats, giving you flexibility in your table format choice.</li>
<li>You can configure replication at the table bucket level, making it straightforward to replicate all tables under that bucket without individual table configurations.</li>
<li>Your replica tables maintain the storage class you choose for your destination tables, which means you can optimize for your specific cost and performance needs.</li>
<li>Any Iceberg-compatible catalog can directly query your replica tables without additional coordination—they only need to point to the replica table location. This gives you flexibility in choosing query engines and tools.</li>
</ul><p><strong>Pricing and availability<br /></strong> You can track your storage usage by access tier through <a href="https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html">AWS Cost and Usage Reports</a> and <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a> metrics. For replication monitoring, <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a> logs provide events for each replicated object.</p><p>There are no additional charges to configure Intelligent-Tiering. You only pay for storage costs in each tier. Your tables continue to work as before, with automatic cost optimization based on your access patterns.</p><p>For S3 Tables replication, you pay the S3 Tables charges for storage in the destination table, for replication PUT requests, for table updates (commits), and for object monitoring on the replicated data. For cross-Region table replication, you also pay for inter-Region data transfer out from Amazon S3 to the destination Region based on the Region pair.</p><p>As usual, refer to the <a href="https://aws.amazon.com/s3/pricing/">Amazon S3 pricing page</a> for the details.</p><p>Both capabilities are available today in all AWS Regions where <a href="https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_region">S3 Tables are supported</a>.</p><p>To learn more about these new capabilities, visit the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables.html">Amazon S3 Tables documentation</a> or try them in the <a href="https://console.aws.amazon.com/s3/table-buckets">Amazon S3 console</a> today. 
Share your feedback through AWS re:Post for Amazon S3 or through your AWS Support contacts.</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="e73c113b-82e0-4d8f-9a0a-34bce32c7919" data-title="Announcing replication support and Intelligent-Tiering for Amazon S3 Tables" data-url="https://aws.amazon.com/blogs/aws/announcing-replication-support-and-intelligent-tiering-for-amazon-s3-tables/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/announcing-replication-support-and-intelligent-tiering-for-amazon-s3-tables/"/>
    <updated>2025-12-02T17:19:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-s3-storage-lens-adds-performance-metrics-support-for-billions-of-prefixes-and-export-to-s3-tables/</id>
    <title><![CDATA[Amazon S3 Storage Lens adds performance metrics, support for billions of prefixes, and export to S3 Tables]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing three new capabilities for <a href="https://aws.amazon.com/s3/storage-lens/">Amazon S3 Storage Lens</a> that give you deeper insights into your storage performance and usage patterns. With the addition of performance metrics, support for analyzing billions of prefixes, and direct export to <a href="https://aws.amazon.com/s3/features/tables/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon S3 Tables</a>, you have the tools you need to optimize application performance, reduce costs, and make data-driven decisions about your Amazon S3 storage strategy.</p><p><strong>New performance metric categories</strong><br />S3 Storage Lens now includes eight new performance metric categories that help identify and resolve performance constraints across your organization. These are available at organization, account, bucket, and prefix levels. For example, the service helps you identify small objects in a bucket or prefix that can slow down application performance. This can be mitigated by batching small objects or by using the <a href="https://aws.amazon.com/s3/storage-classes/express-one-zone/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon S3 Express One Zone</a> storage class for high-performance small object workloads.</p><p>To access the new performance metrics, you need to enable performance metrics in the S3 Storage Lens advanced tier when creating a new Storage Lens dashboard or editing an existing configuration.</p><table class="c9"><tbody><tr class="c7"><td class="c6"><strong>Metric category</strong></td>
<td class="c6"><strong>Details</strong></td>
<td class="c6"><strong>Use case</strong></td>
<td class="c6"><strong>Mitigation</strong></td>
</tr><tr class="c8"><td class="c6">Read request size</td>
<td class="c6">Distribution of read request sizes (GET) by day</td>
<td class="c6">Identify datasets with small read request patterns that slow down performance</td>
<td class="c6">Small request: Batch small objects or use Amazon S3 Express One Zone for high-performance small object workloads</td>
</tr><tr class="c8"><td class="c6">Write request size</td>
<td class="c6">Distribution of write request sizes (PUT, POST, COPY, and UploadPart) by day</td>
<td class="c6">Identify datasets with small write request patterns that slow down performance</td>
<td class="c6">Large requests: Parallelize requests, use multipart upload (MPU), or use the AWS Common Runtime (CRT)</td>
</tr><tr class="c8"><td class="c6">Storage size</td>
<td class="c6">Distribution of object sizes</td>
<td class="c6">Identify datasets with small objects that slow down performance</td>
<td class="c6">Small object sizes: Consider bundling small objects</td>
</tr><tr class="c8"><td class="c6">Concurrent PUT 503 errors</td>
<td class="c6">Number of 503 errors due to concurrent PUT operations on the same object</td>
<td class="c6">Identify prefixes with concurrent PUT throttling that slow down performance</td>
<td class="c6">For a single writer, modify retry behavior or use Amazon S3 Express One Zone. For multiple writers, use a consensus mechanism or use Amazon S3 Express One Zone</td>
</tr><tr class="c8"><td class="c6">Cross-Region data transfer</td>
<td class="c6">Bytes transferred and requests sent cross-Region and in-Region</td>
<td class="c6">Identify potential performance and cost degradation due to cross-Region data access</td>
<td class="c6">Co-locate compute with data in the same AWS Region</td>
</tr><tr class="c8"><td class="c6">Unique objects accessed</td>
<td class="c6">Number or percentage of unique objects accessed per day</td>
<td class="c6">Identify datasets where a small subset of objects is frequently accessed. These can be moved to a higher performance storage tier for better performance</td>
<td class="c6">Consider moving active data to Amazon S3 Express One Zone or other caching solutions</td>
</tr><tr class="c8"><td class="c6">FirstByteLatency (existing <a href="https://aws.amazon.com/cloudwatch/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon CloudWatch</a> metric)</td>
<td class="c6">Daily average of first byte latency metric</td>
<td class="c6">The daily average per-request time from the complete request being received to when the response starts to be returned</td>
<td class="c6">
</td></tr><tr class="c8"><td class="c6">TotalRequestLatency (existing Amazon CloudWatch metric)</td>
<td class="c6">Daily average of Total Request Latency</td>
<td class="c6">The daily average elapsed per request time from the first byte received to the last byte sent</td>
<td class="c6">
</td></tr></tbody></table><p><strong>How it works<br /></strong> On the <a href="https://console.aws.amazon.com/s3/storage-analytics-insights/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon S3 console</a>, I choose <strong>Create Storage Lens dashboard</strong> to create a new dashboard. You can also edit an existing dashboard configuration. I then configure general settings such as providing a <strong>Dashboard name</strong>, <strong>Status</strong>, and the optional <strong>Tags</strong>. Then, I choose <strong>Next</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard1.png"><img class="aligncenter size-large wp-image-101722" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard1-1024x283.png" alt="" width="1024" height="283" /></a><br />Next, I define the scope of the dashboard by selecting <strong>Include all Regions and Include all buckets</strong>, or by specifying the Regions and buckets to be included.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard2.png"><img class="aligncenter size-large wp-image-101724" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard2-1024x227.png" alt="" width="1024" height="227" /></a><br />I opt in to the <strong>Advanced tier</strong> in the Storage Lens dashboard configuration, select <strong>Performance metrics</strong>, then choose <strong>Next</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard3.png"><img class="aligncenter size-large wp-image-101725" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard3-1024x394.png" alt="" width="1024" height="394" /></a><br />Next, I select <strong>Prefix 
aggregation</strong> as an additional metrics aggregation, then leave the rest of the information as default before I choose <strong>Next</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/Screenshot-2025-11-18-at-11.13.31-PM.png"><img class="aligncenter size-large wp-image-101833" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/Screenshot-2025-11-18-at-11.13.31-PM-1024x464.png" alt="" width="1024" height="464" /></a><br />I select the <strong>Default metrics report</strong>, then <strong>General purpose bucket</strong> as the bucket type, and then select the Amazon S3 bucket in my AWS account as the <strong>Destination bucket</strong>. I leave the rest of the information as default, then select <strong>Next</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/Screenshot-2025-11-24-at-5.25.01-PM.png"><img class="aligncenter size-large wp-image-101834" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/Screenshot-2025-11-24-at-5.25.01-PM-1024x382.png" alt="" width="1024" height="382" /></a><br />I review all the information before I choose <strong>Submit</strong> to finalize the process.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard6.png"><img class="aligncenter size-large wp-image-101728" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard6-871x1024.png" alt="" width="871" height="1024" /></a><br />After it’s enabled, I’ll receive daily performance metrics directly in the <a href="https://console.aws.amazon.com/s3/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Storage Lens console</a> dashboard. 
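<p>The retry-behavior mitigation for concurrent PUT 503 errors listed in the table above usually means retrying throttled PUTs with exponential backoff and jitter. The following is a minimal, SDK-agnostic sketch; <code>SlowDownError</code> and <code>flaky_put</code> are illustrative stand-ins, and the AWS SDKs already ship configurable retry modes that implement this for you.</p>

```python
import random
import time

class SlowDownError(Exception):
    """Stand-in for an HTTP 503 Slow Down response from Amazon S3."""

def put_with_backoff(do_put, max_attempts=5, base_delay=0.1):
    """Retry a PUT-like callable on 503s using exponential backoff with
    full jitter: sleep a random amount in [0, base_delay * 2**attempt)."""
    for attempt in range(max_attempts):
        try:
            return do_put()
        except SlowDownError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the throttling error
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

# Simulate a single writer that is throttled twice before succeeding.
attempts = {"n": 0}
def flaky_put():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SlowDownError("503 Slow Down")
    return "200 OK"

print(put_with_backoff(flaky_put, base_delay=0.01))  # prints "200 OK"
```

<p>Jitter spreads out retries from concurrent writers so they do not all hammer the same object at the same instant, which is what produces the 503s in the first place.</p>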
You can also choose to export the report in CSV or Parquet format to any bucket in your account or publish to Amazon CloudWatch. The performance metrics are aggregated and published daily and will be available at multiple levels: organization, account, bucket, and prefix. In this dropdown menu, I choose the % concurrent PUT 503 error for the <strong>Metric</strong>, Last 30 days for the <strong>Date range</strong>, and 10 for the <strong>Top N buckets</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/Screenshot-2025-11-18-at-10.46.28-PM.png"><img class="aligncenter size-large wp-image-101738" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/Screenshot-2025-11-18-at-10.46.28-PM-1024x434.png" alt="" width="1024" height="434" /></a><br />The Concurrent PUT 503 error count metric tracks the number of 503 errors generated by simultaneous PUT operations to the same object. Throttling errors can degrade application performance. For a single writer, modify retry behavior or use a higher performance storage tier such as Amazon S3 Express One Zone to mitigate concurrent PUT 503 errors. For a multiple-writer scenario, use a consensus mechanism to avoid concurrent PUT 503 errors or use a higher performance storage tier such as Amazon S3 Express One Zone.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/13/oasis_conc_PUT.png"><img class="aligncenter size-large wp-image-100904" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/13/oasis_conc_PUT-1024x730.png" alt="" width="1024" height="730" /></a></p><p><strong>Complete analytics for all prefixes in your S3 buckets</strong><br />S3 Storage Lens now supports analytics for all prefixes in your S3 buckets through a new <strong>Expanded prefixes metrics report</strong>. 
This capability removes previous limitations that restricted analysis to prefixes meeting a 1% size threshold and a maximum depth of 10 levels. You can now track up to billions of prefixes per bucket for analysis at the most granular prefix level, regardless of size or depth.</p><p>The Expanded prefixes metrics report includes all existing S3 Storage Lens metric categories: storage usage, activity metrics (requests and bytes transferred), data protection metrics, and detailed status code metrics.</p><p><strong>How to get started</strong><br />I follow the same steps outlined in the <strong>How it works</strong> section to create or update the Storage Lens dashboard. In Step 4 on the console, where you select export options, you can select the new <strong>Expanded prefixes metrics report</strong>. Thereafter, I can export the expanded prefixes metrics report in CSV or Parquet format to any general purpose bucket in my account for efficient querying of my Storage Lens data.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard5-1.png"><img class="aligncenter size-large wp-image-101729" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/metricsdashboard5-1-1024x350.png" alt="" width="1024" height="350" /></a><br /><strong>Good to know<br /></strong> This enhancement addresses scenarios where organizations need granular visibility across their entire prefix structure. For example, you can identify prefixes with incomplete multipart uploads to reduce costs, track compliance across your entire prefix structure for encryption and replication requirements, and detect performance issues at the most granular level.</p><p><strong>Export S3 Storage Lens metrics to S3 Tables<br /></strong> S3 Storage Lens metrics can now be automatically exported to S3 Tables, a fully managed feature on AWS with built-in Apache Iceberg support. 
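<p>To illustrate the kind of SQL analysis this export enables, here is a local sketch that uses sqlite3 in place of Athena. The table and column names are hypothetical stand-ins for the exported activity-metrics schema, not the actual table definition, and the rows are made-up sample data.</p>

```python
import sqlite3

# Hypothetical rows shaped like an expanded prefixes activity export:
# (bucket, prefix, metric_date, get_requests, bytes_downloaded)
rows = [
    ("logs-bucket", "app1/", "2025-11-01", 120, 5_000_000),
    ("logs-bucket", "app2/", "2025-11-01", 9_800, 410_000_000),
    ("data-bucket", "raw/2025/", "2025-11-01", 50, 2_000_000),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE expanded_prefixes_activity_metrics (
           bucket TEXT, prefix TEXT, metric_date TEXT,
           get_requests INTEGER, bytes_downloaded INTEGER)"""
)
conn.executemany(
    "INSERT INTO expanded_prefixes_activity_metrics VALUES (?, ?, ?, ?, ?)",
    rows,
)

# Which prefixes receive the most GET traffic? Hot prefixes are candidates
# for S3 Express One Zone or caching, per the mitigation guidance above.
hot = conn.execute(
    """SELECT bucket, prefix, SUM(get_requests) AS gets
       FROM expanded_prefixes_activity_metrics
       GROUP BY bucket, prefix
       ORDER BY gets DESC
       LIMIT 2"""
).fetchall()
print(hot)  # [('logs-bucket', 'app2/', 9800), ('logs-bucket', 'app1/', 120)]
```

<p>In practice the same GROUP BY query would run in Amazon Athena against the exported table, with no local database involved.</p>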
This integration provides daily automatic delivery of metrics to AWS managed S3 Tables for immediate querying without requiring additional processing infrastructure.</p><p><strong>How to get started</strong><br />I start by following the process outlined in Step 5 on the console, where I choose the export destination. This time, I choose <strong>Expanded prefixes metrics report</strong>. In addition to General purpose bucket, I choose <strong>Table bucket</strong>.</p><p>The new Storage Lens metrics are exported to new tables in an <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-buckets.html">AWS managed bucket</a> <code>aws-s3</code>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/seer1.png"><img class="aligncenter size-large wp-image-101352" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/seer1-1024x429.png" alt="" width="1024" height="429" /></a><br />I select the <strong>expanded_prefixes_activity_metrics</strong> table to view API usage metrics for expanded prefix reports.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/seer2.png"><img class="aligncenter size-large wp-image-101353" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/seer2-1024x446.png" alt="" width="1024" height="446" /></a><br />I can preview the table on the Amazon S3 console or use <a href="https://aws.amazon.com/athena">Amazon Athena</a> to query the table.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/seer3.png"><img class="aligncenter size-large wp-image-101354" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/seer3-1024x323.png" alt="" width="1024" height="323" /></a><br /><strong>Good to know<br /></strong> S3 Tables integration with S3 Storage Lens 
simplifies metric analysis using familiar SQL tools and AWS analytics services such as Amazon Athena, <a href="https://quicksight.aws">Amazon QuickSight</a>, <a href="https://aws.amazon.com/emr">Amazon EMR</a>, and <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a>, without requiring a data pipeline. The metrics are automatically organized for optimal querying, with custom retention and encryption options to suit your needs.</p><p>This integration enables cross-account and cross-Region analysis, custom dashboard creation, and data correlation with other AWS services. For example, you can combine Storage Lens metrics with S3 Metadata to analyze prefix-level activity patterns and identify objects in prefixes with cold data that are eligible for transition to lower-cost storage tiers.</p><p>For your agentic AI workflows, you can use natural language to query S3 Storage Lens metrics in S3 Tables with the <a href="https://aws.amazon.com/about-aws/whats-new/2025/07/amazon-s3-tables-mcp-server/">S3 Tables MCP Server</a>. Agents can ask questions such as ‘which buckets grew the most last month?’ or ‘show me storage costs by storage class’ and get instant insights from your observability data.</p><p><strong>Now available<br /></strong> All three enhancements are available in all <a href="https://builder.aws.com/build/capabilities/explore?tab=service-feature">AWS Regions</a> where S3 Storage Lens is currently offered (except the China Regions and AWS GovCloud (US)).</p><p>These features are included in the Amazon S3 Storage Lens Advanced tier at no additional charge beyond standard advanced tier pricing. For the S3 Tables export, you pay only for S3 Tables storage, maintenance, and queries. 
There is no additional charge for the export functionality itself.</p><p>To learn more about Amazon S3 Storage Lens performance metrics, support for billions of prefixes, and export to S3 Tables, refer to the <a href="http://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens.html">Amazon S3 user guide</a>. For pricing details, visit the <a href="https://aws.amazon.com/s3/pricing/">Amazon S3 pricing page</a>.</p><p><a href="https://linkedin.com/in/veliswa-boya">Veliswa Boya</a>.</p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-s3-storage-lens-adds-performance-metrics-support-for-billions-of-prefixes-and-export-to-s3-tables/"/>
    <updated>2025-12-02T17:15:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-bedrock-agentcore-adds-quality-evaluations-and-policy-controls-for-deploying-trusted-ai-agents/</id>
    <title><![CDATA[Amazon Bedrock AgentCore adds quality evaluations and policy controls for deploying trusted AI agents]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing new capabilities in <a href="https://aws.amazon.com/bedrock/agentcore/">Amazon Bedrock AgentCore</a> to further remove barriers holding AI agents back from production. Organizations across industries are already building on AgentCore, the most advanced platform to build, deploy, and operate highly capable agents securely at any scale. In just 5 months since preview, the <a href="https://github.com/aws/bedrock-agentcore-sdk-python">AgentCore SDK</a> has been downloaded over 2 million times. For example:</p><ul><li>PGA TOUR, a pioneer and innovation leader in sports, has built a multi-agent content generation system to create articles for their digital platforms. The new solution, built on AgentCore, enables the PGA TOUR to provide comprehensive coverage for every player in the field by increasing content writing speed by 1,000 percent while achieving a 95 percent reduction in costs.</li>
<li>Independent software vendors (ISVs) like Workday are building the software of the future on AgentCore. AgentCore Code Interpreter provides Workday Planning Agent with secure data protection and essential features for financial data exploration. Users can analyze financial and operational data through natural language queries, making financial planning intuitive and self-driven. This capability reduces time spent on routine planning analysis by 30 percent, saving approximately 100 hours per month.</li>
<li>Grupo Elfa, a Brazilian distributor and retailer, relies on AgentCore Observability for complete audit traceability and real-time metrics of their agents, transforming their reactive processes into proactive operations. Using this unified platform, their sales team can handle thousands of daily price quotes while the organization maintains full visibility of agent decisions, helping achieve 100 percent traceability of agent decisions and interactions, and reduced problem resolution time by 50 percent.</li>
</ul><p>As organizations scale their agent deployments, they face challenges around implementing the right boundaries and quality checks to confidently deploy agents. The autonomy that makes agents powerful also makes them hard to confidently deploy at scale, as they might access sensitive data inappropriately, make unauthorized decisions, or take unexpected actions. Development teams must balance enabling agent autonomy while ensuring they operate within acceptable boundaries and with the quality you require to put them in front of customers and employees.</p><p>The new capabilities available today take the guesswork out of this process and help you build and deploy trusted AI agents with confidence:</p><ul><li><strong>Policy in AgentCore</strong> (Preview) – Defines clear boundaries for agent actions by intercepting AgentCore Gateway tool calls before they run using policies with fine-grained permissions.</li>
<li><strong>AgentCore Evaluations</strong> (Preview) – Monitors the quality of your agents based on real-world behavior using built-in evaluators for dimensions such as correctness and helpfulness, plus custom evaluators for business-specific requirements.</li>
</ul><p>We’re also introducing features that expand what agents can do:</p><ul><li><strong>Episodic functionality in AgentCore Memory</strong> – A new long-term strategy that helps agents learn from experiences and adapt solutions across similar situations for improved consistency and performance in similar future tasks.</li>
<li><strong>Bidirectional streaming in AgentCore Runtime</strong> – Deploys voice agents where both users and agents can speak simultaneously following a natural conversation flow.</li>
</ul><p><strong>Policy in AgentCore for precise agent control</strong><br />Policy gives you control over the actions agents can take and is applied outside of the agent’s reasoning loop, treating agents as autonomous actors whose decisions require verification before reaching tools, systems, or data. It integrates with AgentCore Gateway to intercept tool calls as they happen, processing requests while maintaining operational speed, so workflows remain fast and responsive.</p><p>You can create policies using natural language or directly use <a href="https://www.cedarpolicy.com/">Cedar</a>—an open source policy language for fine-grained permissions. This makes policy creation accessible to development, security, and compliance teams, who can set up, understand, and audit rules without writing custom code or specialized coding knowledge.</p><p>The policies operate independently of how the agent was built or which model it uses. You can define which tools and data agents can access—whether they are APIs, <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> functions, <a href="https://modelcontextprotocol.io/">Model Context Protocol (MCP)</a> servers, or third-party services—what actions they can perform, and under what conditions.</p><p>Teams can define clear policies once and apply them consistently across their organization. 
With policies in place, developers gain the freedom to create innovative agentic experiences, and organizations can deploy their agents to act autonomously while knowing they’ll stay within defined boundaries and compliance requirements.</p><p><strong>Using Policy in AgentCore<br /></strong> You can start by creating a policy engine in the new <strong>Policy</strong> section of the <a href="https://console.aws.amazon.com/bedrock-agentcore">AgentCore console</a> and associate it with one or more AgentCore gateways.</p><p>A policy engine is a collection of policies that are evaluated at the gateway endpoint. When associating a gateway with a policy engine, you can choose whether to enforce the result of the policy—effectively permitting or denying access to a tool call—or to only emit logs. Using logs helps you test and validate a policy before enabling it in production.</p><p>Then, you can define the policies to apply to have granular control over access to the tools offered by the associated AgentCore gateways.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-policy-console.png"><img class="aligncenter size-full wp-image-101864" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-policy-console.png" alt="Amazon Bedrock AgentCore Policy console" width="995" height="791" /></a></p><p>To create a policy, you can start with a natural language description (that should include information of the authentication claims to use) or directly edit Cedar code.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-policy-add.png"><img class="aligncenter size-full wp-image-101868" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-policy-add.png" alt="Amazon Bedrock AgentCore Policy add" width="1251" height="1007" /></a></p><p>Natural language-based 
policy authoring provides a more accessible way for you to create fine-grained policies. Instead of writing formal policy code, you can describe rules in plain English. The system interprets your intent, generates candidate policies, validates them against the tool schema, and uses automated reasoning to check safety conditions—identifying prompts that are overly permissive, overly restrictive, or contain conditions that can never be satisfied.</p><p>Unlike generic <a href="https://aws.amazon.com/what-is/large-language-model/">large language model (LLM)</a> translations, this feature understands the structure of your tools and generates policies that are both syntactically correct and semantically aligned with your intent, while flagging rules that cannot be enforced. It is also available as a <a href="https://modelcontextprotocol.io/">Model Context Protocol (MCP)</a> server, so you can author and validate policies directly in your preferred AI-assisted coding environment as part of your normal development workflow. This approach reduces onboarding time and helps you write high-quality authorization rules without needing Cedar expertise.</p><p>The following sample policy uses information from the OAuth claims in the JWT token used to authenticate to an AgentCore gateway (for the <code>role</code>) and the arguments passed to the tool call (<code>context.input</code>) to validate access to the tool processing a refund. Only an authenticated user with the <code>refund-agent</code> role can access the tool, and only for amounts (<code>context.input.amount</code>) lower than $200 USD.</p><pre class="lang-cedar">permit(
  principal is AgentCore::OAuthUser,
  action == AgentCore::Action::"RefundTool__process_refund",
  resource == AgentCore::Gateway::"&lt;GATEWAY_ARN&gt;"
)
when {
  principal.hasTag("role") &amp;&amp;
  principal.getTag("role") == "refund-agent" &amp;&amp;
  context.input.amount &lt; 200
};</pre><p><strong>AgentCore Evaluations for continuous, real-time quality intelligence</strong><br />AgentCore Evaluations is a fully managed service that helps you continuously monitor and analyze agent performance based on real-world behavior. With AgentCore Evaluations, you can use built-in evaluators for common quality dimensions such as correctness, helpfulness, tool selection accuracy, safety, goal success rate, and context relevance. You can also create custom model-based scoring systems configured with your choice of prompt and model for business-tailored scoring while the service samples live agent interactions and scores them continuously.</p><p>All results from AgentCore Evaluations are visualized in <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a> alongside AgentCore Observability insights, providing one place for unified monitoring. You can also set up alerts and alarms on the evaluation scores to proactively monitor agent quality and respond when metrics fall outside acceptable thresholds.</p><p>You can use AgentCore Evaluations during the testing phase, where you can check an agent against the baseline before deployment to stop faulty versions from reaching users, and in production for continuous improvement of your agents. When quality metrics drop below defined thresholds—such as a customer service agent’s satisfaction declining or politeness scores dropping by more than 10 percent over an 8-hour period—the system triggers immediate alerts, helping to detect and address quality issues faster.</p><p><strong>Using AgentCore Evaluations<br /></strong> You can create an online evaluation in the new <strong>Evaluations</strong> section of the <a href="https://console.aws.amazon.com/bedrock-agentcore">AgentCore console</a>. As the data source, you can use an AgentCore agent endpoint or a CloudWatch log group used by an external agent. 
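<p>The threshold rule described above, such as politeness dropping by more than 10 percent over an 8-hour period, boils down to comparing rolling means of evaluation scores. The sketch below only illustrates that logic; in practice you would configure it as a CloudWatch alarm on the published scores rather than write code.</p>

```python
def quality_drop_alert(scores, window=8, threshold=0.10):
    """Return True if the mean of the most recent `window` scores dropped
    by more than `threshold` (fractional) relative to the mean of the
    preceding `window` scores.

    `scores` is an hourly series of evaluation scores (e.g. politeness),
    oldest first.
    """
    if len(scores) < 2 * window:
        return False  # not enough history to compare two full windows
    baseline = sum(scores[-2 * window:-window]) / window
    recent = sum(scores[-window:]) / window
    return baseline > 0 and (baseline - recent) / baseline > threshold

# Politeness held steady at 0.9, then slid to 0.7: a >10% drop over 8 hours.
hourly = [0.9] * 8 + [0.7] * 8
print(quality_drop_alert(hourly))  # True
```

<p>A relative (percentage) comparison is used rather than an absolute one so the same rule works for evaluators that score on different scales.</p>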
For example, I use here the same sample customer support agent I shared when we <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-bedrock-agentcore-securely-deploy-and-operate-ai-agents-at-any-scale/">introduced AgentCore in preview</a>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-evaluations-source.png"><img class="aligncenter size-full wp-image-101865" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-evaluations-source.png" alt="Amazon Bedrock AgentCore Evaluations source" width="982" height="813" /></a></p><p>Then, you can select the evaluators to use, including custom evaluators that you can define starting from the existing templates or build from scratch.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-evaluations-evaluators.png"><img class="aligncenter size-full wp-image-101866" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-evaluations-evaluators.png" alt="Amazon Bedrock AgentCore Evaluations source" width="980" height="991" /></a></p><p>For example, for a customer support agent, you can select metrics such as:</p><ul><li><strong>Correctness</strong> – Evaluates whether the information in the agent’s response is factually accurate</li>
<li><strong>Faithfulness</strong> – Evaluates whether information in the response is supported by provided context/sources</li>
<li><strong>Helpfulness</strong> – Evaluates from user’s perspective how useful and valuable the agent’s response is</li>
<li><strong>Harmfulness</strong> – Evaluates whether the response contains harmful content</li>
<li><strong>Stereotyping</strong> – Detects content that makes generalizations about individuals or groups</li>
</ul><p>The evaluators for tool selection and tool parameter accuracy can help you understand if an agent is choosing the right tool for a task and extracting the correct parameters from user queries.</p><p>To complete the creation of the evaluation, you can choose the sampling rate and optional filters. For permissions, you can create a new <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> service role or pass an existing one.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-evaluations-filters-permissions.png"><img class="aligncenter size-full wp-image-101867" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/agentcore-evaluations-filters-permissions.png" alt="Amazon Bedrock AgentCore Evaluations create" width="983" height="643" /></a></p><p>The results are published, as they are evaluated, on <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a> in the AgentCore Observability dashboard. You can choose any of the bar chart sections to see the corresponding traces and gain deeper insight into the requests and responses behind that specific evaluation.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/agentcore-evaluations-results.png"><img class="aligncenter size-full wp-image-102185" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/agentcore-evaluations-results.png" alt="Amazon AgentCore Evaluations results" width="2360" height="1652" /></a></p><p>Because the results are in CloudWatch, you can use all of its features to create, for example, alarms and automations.</p><p><strong>Creating custom evaluators in AgentCore Evaluations<br /></strong> Custom evaluators allow you to define business-specific quality metrics tailored to your agent’s unique requirements. 
To create a custom evaluator, you provide the model to use as a judge, including inference parameters such as temperature and max output tokens, and a tailored prompt with the judging instructions. You can start from the prompt used by one of the built-in evaluators or enter a new one.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/agentcore-policy-custom-evaluator.png"><img class="aligncenter size-full wp-image-102068" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/agentcore-policy-custom-evaluator.png" alt="AgentCore Evaluations create custom evaluator" width="1183" height="954" /></a></p><p>Then, you define the scale to produce in output. It can be either numeric values or custom text labels that you define. Finally, you configure whether the evaluation is computed by the model on single traces, full sessions, or for each tool call.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/agentcore-policy-custom-evaluator-scale.png"><img class="aligncenter size-full wp-image-102069" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/agentcore-policy-custom-evaluator-scale.png" alt="AgentCore Evaluations custom evaluator scale" width="1183" height="699" /></a></p><p><strong>AgentCore Memory episodic functionality for experience-based learning<br /></strong> AgentCore Memory, a fully managed service that gives AI agents the ability to remember past interactions, now includes a new <a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/long-term-memory-long-term.html">long-term memory</a> strategy that gives agents the ability to learn from past experiences and apply those lessons to provide more helpful assistance in future interactions.</p><p>Consider booking travel with an agent: over time, the agent learns from your booking patterns—such as the fact 
that you often need to move flights to later times when traveling for work due to client meetings. When you start your next booking involving client meetings, the agent proactively suggests flexible return options based on these learned patterns. Just like an experienced assistant who learns your specific travel habits, agents with episodic memory can now recognize and adapt to your individual needs.</p><p>When you enable the new episodic functionality, AgentCore Memory captures structured episodes that record the context, reasoning process, actions taken, and outcomes of agent interactions, while a reflection agent analyzes these episodes to extract broader insights and patterns. When facing similar tasks, agents can retrieve these learnings to improve decision-making consistency and reduce processing time. This reduces the need for custom instructions by including in the agent context only the specific learnings an agent needs to complete a task, instead of a long list of all possible suggestions.</p><p><strong>AgentCore Runtime bidirectional streaming for more natural conversations</strong><br />With AgentCore Runtime, you can deploy agentic applications with a few lines of code. To simplify deploying conversational experiences that feel natural and responsive, AgentCore Runtime now supports bidirectional streaming. This capability enables voice agents to listen and adapt while users speak, so that people can interrupt agents mid-response and have the agent immediately adjust to the new context—without waiting for the agent to finish its current output. Rather than traditional turn-based interaction where users must wait for complete responses, bidirectional streaming creates flowing, natural conversations where agents dynamically change their response based on what the user is saying.</p><p>Building these conversational experiences from the ground up requires significant engineering effort to handle the complex flow of simultaneous communication. 
Bidirectional streaming simplifies this by managing the infrastructure needed for agents to process input while generating output, handling interruptions gracefully, and maintaining context throughout dynamic conversation shifts. You can now deploy agents that naturally adapt to the fluid nature of human conversation—supporting mid-thought interruptions, context switches, and clarifications without losing the thread of the interaction.</p><p><strong>Things to know</strong><br /><a href="https://aws.amazon.com/bedrock/agentcore/">Amazon Bedrock AgentCore</a>, including the preview of Policy, is available in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), and Europe (Frankfurt, Ireland) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a>. The preview of AgentCore Evaluations is available in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt) Regions. For Regional availability and future roadmap, visit <a href="https://builder.aws.com/capabilities/">AWS Capabilities by Region</a>.</p><p>With AgentCore, you pay for what you use with no upfront commitments. For detailed pricing information, visit the <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock pricing page</a>. AgentCore is also a part of the <a href="https://aws.amazon.com/free">AWS Free Tier</a> that new AWS customers can use to get started at no cost and explore key AWS services.</p><p>These new features work with any open source framework such as <a href="https://www.crewai.com/">CrewAI</a>, <a href="https://www.langchain.com/langgraph">LangGraph</a>, <a href="https://www.llamaindex.ai/">LlamaIndex</a>, and <a href="https://strandsagents.com/">Strands Agents</a>, and with any foundation model. 
AgentCore services can be used together or independently, and you can get started using your favorite AI-assisted development environment with the <a href="https://awslabs.github.io/mcp/servers/amazon-bedrock-agentcore-mcp-server">AgentCore open source MCP server</a>.</p><p>To learn more and get started quickly, visit the <a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/what-is-bedrock-agentcore.html">AgentCore Developer Guide</a>.</p><p>— <a href="https://x.com/danilop">Danilo</a></p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="abf27f60-117f-43d2-b6f3-b2ec43dc82ff" data-title="Amazon Bedrock AgentCore adds quality evaluations and policy controls for deploying trusted AI agents" data-url="https://aws.amazon.com/blogs/aws/amazon-bedrock-agentcore-adds-quality-evaluations-and-policy-controls-for-deploying-trusted-ai-agents/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-bedrock-agentcore-adds-quality-evaluations-and-policy-controls-for-deploying-trusted-ai-agents/"/>
    <updated>2025-12-02T17:14:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/build-multi-step-applications-and-ai-workflows-with-aws-lambda-durable-functions/</id>
    <title><![CDATA[Build multi-step applications and AI workflows with AWS Lambda durable functions]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Modern applications increasingly require complex and long-running coordination between services, such as multi-step payment processing, AI agent orchestration, or approval processes awaiting human decisions. Building these traditionally required significant effort to implement state management, handle failures, and integrate multiple infrastructure services.</p><p>Starting today, you can use <a href="https://aws.amazon.com/lambda/durable-functions/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Lambda durable functions</a> to build reliable multi-step applications directly within the familiar AWS Lambda experience. Durable functions are regular Lambda functions with the same event handler and integrations you already know. You write sequential code in your preferred programming language, and durable functions track progress, automatically retry on failures, and suspend execution for up to one year at defined points, without paying for idle compute during waits.</p><p>AWS Lambda durable functions use a checkpoint and replay mechanism, known as durable execution, to deliver these capabilities. After enabling a function for durable execution, you add the new open source durable execution SDK to your function code. You then use SDK primitives like “steps” to add automatic checkpointing and retries to your business logic and “waits” to efficiently suspend execution without compute charges. When execution terminates unexpectedly, Lambda resumes from the last checkpoint, replaying your event handler from the beginning while skipping completed operations.</p><p><strong>Getting started with AWS Lambda durable functions<br /></strong> Let me walk you through how to use durable functions.</p><p>First, I create a new <a href="https://console.aws.amazon.com/lambda?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Lambda function in the console</a> and select <strong>Author from scratch</strong>. 
In the <strong>Durable execution</strong> section, I select <strong>Enable</strong>. Note that the durable execution setting can only be enabled during function creation and currently can’t be modified for existing Lambda functions.</p><p><img class="aligncenter size-full wp-image-101851 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/2025-news-durable-function-4.png" alt="" width="1066" height="889" /></p><p>After I create my Lambda durable function, I can get started with the provided code.</p><p><img class="aligncenter size-full wp-image-101860 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/2025-news-durable-function-5.png" alt="" width="1311" height="1405" /></p><p>Lambda durable functions introduce two core primitives that handle state management and recovery:</p><ul><li><strong>Steps</strong>—The <code>context.step()</code> method adds automatic retries and checkpointing to your business logic. After a step is completed, it will be skipped during replay.</li>
<li><strong>Wait</strong>—The <code>context.wait()</code> method pauses execution for a specified duration, terminating the function and suspending execution without compute charges until it’s time to resume.</li>
</ul><p>Additionally, Lambda durable functions provide other operations for more complex patterns: <code>create_callback()</code> creates a callback that you can use to await results from external events like API responses or human approvals, <code>wait_for_condition()</code> pauses until a specific condition is met, such as polling a REST API for process completion, and <code>parallel()</code> and <code>map()</code> handle advanced concurrency use cases.</p><p><strong>Building a production-ready order processing workflow<br /></strong> Now let’s expand the default example to build a production-ready order processing workflow. This demonstrates how to use callbacks for external approvals, handle errors properly, and configure retry strategies. I keep the code intentionally concise to focus on these core concepts. In a full implementation, you could enhance the validation step with <a href="https://console.aws.amazon.com/bedrock?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Bedrock</a> to add AI-powered order analysis.</p><p>Here’s how the order processing workflow works:</p><ul><li>First, <code>validate_order()</code> checks order data to ensure all required fields are present.</li>
<li>Next, <code>send_for_approval()</code> sends the order for external human approval and waits for a callback response, suspending execution without compute charges.</li>
<li>Then, <code>process_order()</code> completes order processing.</li>
<li>Throughout the workflow, try/except error handling distinguishes between terminal errors that stop execution immediately and recoverable errors inside steps that trigger automatic retries.</li>
</ul><p>Here’s the complete order processing workflow with step definitions and the main handler:</p><pre class="language-python">import random
from aws_durable_execution_sdk_python import (
    DurableContext,
    StepContext,
    durable_execution,
    durable_step,
)
from aws_durable_execution_sdk_python.config import (
    Duration,
    StepConfig,
    CallbackConfig,
)
from aws_durable_execution_sdk_python.retries import (
    RetryStrategyConfig,
    create_retry_strategy,
)
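# The durable execution SDK is bundled with your deployment package rather
# than provided by the Lambda runtime. Assuming the PyPI package name matches
# the GitHub repository name (aws-durable-execution-sdk-python), installation
# for a Lambda deployment package would look like:
#   pip install --target ./package aws-durable-execution-sdk-python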
@durable_step
def validate_order(step_context: StepContext, order_id: str) -&gt; dict:
    """Validates order data using AI."""
    step_context.logger.info(f"Validating order: {order_id}")
    # In production: calls Amazon Bedrock to validate order completeness and accuracy
    return {"order_id": order_id, "status": "validated"}
@durable_step
def send_for_approval(step_context: StepContext, callback_id: str, order_id: str) -&gt; dict:
    """Sends order for approval using the provided callback token."""
    step_context.logger.info(f"Sending order {order_id} for approval with callback_id: {callback_id}")
    # In production: send callback_id to external approval system
    # The external system will call Lambda SendDurableExecutionCallbackSuccess or
    # SendDurableExecutionCallbackFailure APIs with this callback_id when approval is complete
    return {
        "order_id": order_id,
        "callback_id": callback_id,
        "status": "sent_for_approval"
    }
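# --- Illustration (not called by this workflow) ---
# Sketch of how an external approval system could complete the callback.
# The post names the SendDurableExecutionCallbackSuccess and
# SendDurableExecutionCallbackFailure Lambda APIs; the boto3 method and
# parameter names below are assumptions and may differ in the actual SDK:
#
#   import boto3, json
#   lambda_client = boto3.client("lambda")
#   lambda_client.send_durable_execution_callback_success(
#       CallbackId=callback_id,
#       Result=json.dumps({"approved": True}),
#   )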
@durable_step
def process_order(step_context: StepContext, order_id: str) -&gt; dict:
    """Processes the order with retry logic for transient failures."""
    step_context.logger.info(f"Processing order: {order_id}")
    # Simulate flaky API that sometimes fails
    if random.random() &gt; 0.4:
        step_context.logger.info("Processing failed, will retry")
        raise Exception("Processing failed")
    return {
        "order_id": order_id,
        "status": "processed",
        "timestamp": "2025-11-27T10:00:00Z",
    }
@durable_execution
def lambda_handler(event: dict, context: DurableContext) -&gt; dict:
    try:
        order_id = event.get("order_id")
        # Step 1: Validate the order
        validated = context.step(validate_order(order_id))
        if validated["status"] != "validated":
            raise Exception("Validation failed")  # Terminal error - stops execution
        context.logger.info(f"Order validated: {validated}")
        # Step 2: Create callback
        callback = context.create_callback(
            name="awaiting-approval",
            config=CallbackConfig(timeout=Duration.from_minutes(3))
        )
        context.logger.info(f"Created callback with id: {callback.callback_id}")
        # Step 3: Send for approval with the callback_id
        approval_request = context.step(send_for_approval(callback.callback_id, order_id))
        context.logger.info(f"Approval request sent: {approval_request}")
        # Step 4: Wait for the callback result
        # This blocks until external system calls SendDurableExecutionCallbackSuccess or SendDurableExecutionCallbackFailure
        approval_result = callback.result()
        context.logger.info(f"Approval received: {approval_result}")
        # Step 5: Process the order with custom retry strategy
        retry_config = RetryStrategyConfig(max_attempts=3, backoff_rate=2.0)
        processed = context.step(
            process_order(order_id),
            config=StepConfig(retry_strategy=create_retry_strategy(retry_config)),
        )
        if processed["status"] != "processed":
            raise Exception("Processing failed")  # Terminal error
        context.logger.info(f"Order successfully processed: {processed}")
        return processed
    except Exception as error:
        context.logger.error(f"Error processing order: {error}")
        raise error  # Re-raise to fail the execution
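# Example test event to start the workflow asynchronously
# (order_id is the only field this handler reads):
#   {"order_id": "order-12345"}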
</pre><p>This code demonstrates several important concepts:</p><ul><li><strong>Error handling</strong>—The try/except block handles terminal errors. When an unhandled exception is raised outside of a step (like the validation check), it terminates the execution immediately. This is useful when there’s no point in retrying, such as invalid order data.</li>
<li><strong>Step retries</strong>—Inside the <code>process_order</code> step, exceptions trigger automatic retries based on the default (step 1) or configured <code>RetryStrategy</code> (step 5). This handles transient failures like temporary API unavailability.</li>
<li><strong>Logging</strong>—I use <code>context.logger</code> for the main handler and <code>step_context.logger</code> inside steps. The context logger suppresses duplicate logs during replay.</li>
</ul><p>Now I create a test event with <code>order_id</code> and invoke the function asynchronously to start the order workflow. I navigate to the <strong>Test</strong> tab and fill in the optional <strong>Durable execution name</strong> to identify this execution. Note that durable functions provide built-in idempotency. If I invoke the function twice with the same execution name, the second invocation returns the existing execution result instead of creating a duplicate.</p><p><img class="aligncenter size-full wp-image-102269 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-news-durable-function-rev-8-2.png" alt="" width="1297" height="1257" /></p><p>I can monitor the execution by navigating to the <strong>Durable executions</strong> tab in the Lambda console:</p><p><img class="aligncenter wp-image-102325 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/2025-news-durable-function-rev-9-1.png" alt="" width="693" height="624" /></p><p>Here I can see each step’s status and timing. 
The execution shows <code>CallbackStarted</code> followed by <code>InvocationCompleted</code>, which indicates the function has terminated and execution is suspended to avoid idle charges while waiting for the approval callback.</p><p><img class="aligncenter size-full wp-image-102314 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/2025-news-durable-function-rev-3-1-1.png" alt="" width="1397" height="419" /></p><p>I can now complete the callback directly from the console by choosing <strong>Send success</strong> or <strong>Send failure</strong>, or programmatically using the Lambda API.</p><p><img class="aligncenter size-full wp-image-102315 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/2025-news-durable-function-rev-3-2.png" alt="" width="1645" height="725" /></p><p>I choose <strong>Send success</strong>.</p><p><img class="aligncenter size-full wp-image-102195 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-durable-function-rev-6.png" alt="" width="1342" height="967" /></p><p>After the callback completes, the execution resumes and processes the order. If the <code>process_order</code> step fails due to the simulated flaky API, it automatically retries based on the configured strategy. Once a retry succeeds, the execution completes successfully.</p><p><img class="aligncenter size-full wp-image-102316 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/2025-news-durable-function-rev-3-3.png" alt="" width="1387" height="834" /></p><p><strong>Monitoring executions with Amazon EventBridge<br /></strong> You can also monitor durable function executions using Amazon EventBridge. 
Lambda automatically sends execution status change events to the default event bus, allowing you to build downstream workflows, send notifications, or integrate with other AWS services.</p><p>To receive these events, create an EventBridge rule on the default event bus with this pattern:</p><pre class="language-json">{
  "source": ["aws.lambda"],
  "detail-type": ["Durable Execution Status Change"]
}
</pre><p><strong>Things to know<br /></strong> Here are key points to note:</p><ul><li><strong>Availability</strong>—Lambda durable functions are now available in the US East (Ohio) AWS Region. For the latest Region availability, visit the <a href="https://builder.aws.com/build/capabilities/explore?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Capabilities by Region</a> page.</li>
<li><strong>Programming language support</strong>—At launch, AWS Lambda durable functions support JavaScript/TypeScript (Node.js 22/24) and Python (3.13/3.14). We recommend bundling the durable execution SDK with your function code using your preferred package manager. The SDKs are fast-moving, so you can easily update dependencies as new features become available.</li>
<li><strong>Using Lambda versions</strong>—When deploying durable functions to production, use Lambda versions to ensure replay always happens on the same code version. If you update your function code while an execution is suspended, replay will use the version that started the execution, preventing inconsistencies from code changes during long-running workflows.</li>
<li><strong>Testing your durable functions</strong>—You can test durable functions locally without AWS credentials using the separate testing SDK with pytest integration and the <a href="https://aws.amazon.com/serverless/sam/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Serverless Application Model (AWS SAM) command line interface (CLI)</a> for more complex integration testing.</li>
<li><strong>Open source SDKs</strong>—The durable execution SDKs are open source for <a href="https://github.com/aws/aws-durable-execution-sdk-js">JavaScript/TypeScript</a> and <a href="https://github.com/aws/aws-durable-execution-sdk-python">Python</a>. You can review the source code, contribute improvements, and stay updated with the latest features.</li>
<li><strong>Pricing</strong>—To learn more about AWS Lambda durable functions pricing, refer to the <a href="https://aws.amazon.com/lambda/pricing/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Lambda pricing</a> page.</li>
</ul><p>Get started with AWS Lambda durable functions by visiting the <a href="https://console.aws.amazon.com/lambda?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Lambda console</a>. To learn more, refer to the <a href="https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Lambda durable functions</a> documentation page.</p><p>Happy building!</p><p>— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="d4475bd4-1897-4726-a25b-6c72a2f101d5" data-title="Build multi-step applications and AI workflows with AWS Lambda durable functions" data-url="https://aws.amazon.com/blogs/aws/build-multi-step-applications-and-ai-workflows-with-aws-lambda-durable-functions/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/build-multi-step-applications-and-ai-workflows-with-aws-lambda-durable-functions/"/>
    <updated>2025-12-02T17:12:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-rds-for-oracle-and-rds-for-sql-server-add-new-capabilities-to-enhance-performance-and-optimize-costs/</id>
    <title><![CDATA[New capabilities to optimize costs and improve scalability on Amazon RDS for SQL Server and Oracle]]></title>
    <summary><![CDATA[<table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Managing database environments demands a balance of resource efficiency and scalability. Organizations need flexible options across their entire database lifecycle, spanning development, testing, and production workloads with diverse storage and compute requirements.</p><p>To address these needs, we’re announcing four new capabilities for <a href="https://aws.amazon.com/rds/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Relational Database Service (Amazon RDS)</a> to help customers optimize their costs as well as improve efficiency and scalability for their <a href="https://aws.amazon.com/rds/oracle/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon RDS for Oracle</a> and <a href="https://aws.amazon.com/rds/sqlserver?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon RDS for SQL Server</a> databases. These enhancements include SQL Server Developer Edition support and expanded storage capabilities for both RDS for Oracle and RDS for SQL Server. Additionally, CPU optimization options are available for RDS for SQL Server on M7i and R7i instances, which offer price reductions compared to previous generation instances and separately billed licensing fees.</p><p>Let’s explore what’s new.</p><p><strong>SQL Server Developer Edition support<br /></strong> SQL Server Developer Edition is now available on RDS for SQL Server, offering a free SQL Server edition that includes all Enterprise Edition functionality. Developer Edition is licensed specifically for non-production workloads, so you can build and test applications without incurring SQL Server licensing costs in your development and testing environments.</p><p>This release brings significant cost savings to your development and testing environments, while maintaining consistency with your production configurations. 
You’ll have access to all Enterprise Edition features in your development environment, making it easier to test and validate your applications. Additionally, you’ll benefit from the full suite of Amazon RDS features, including automated backups, software updates, monitoring, and encryption capabilities throughout your development process.</p><p>To get started, upload your SQL Server binary files to <a href="https://aws.amazon.com/s3?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a> and use them to create your Developer Edition instance. You can migrate existing data from your Enterprise or Standard Edition instances to Developer Edition instances using built-in SQL Server backup and restore operations.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/image-22-5.png"><img class="aligncenter size-full wp-image-102267" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/image-22-5.png" alt="" width="1069" height="679" /></a></p><p><strong>M7i/R7i instances on RDS for SQL Server with support for optimize CPU<br /></strong> You can now use M7i and R7i instances on Amazon RDS for SQL Server to achieve several key benefits. These instances offer significant cost savings over previous generation instances. You also get improved transparency over your database costs with licensing fees and Amazon RDS DB instance costs billed separately.</p><p>RDS for SQL Server M7i/R7i instances offer up to 55% lower costs compared to previous generation instances. 
<a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/Screenshot-2025-11-26-at-20.17.37-1.png"><img class="aligncenter size-full wp-image-102340" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/Screenshot-2025-11-26-at-20.17.37-1.png" alt="" width="805" height="148" /></a></p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/Screenshot-2025-11-26-at-20.17.44-1.png"><img class="aligncenter size-full wp-image-102339" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/Screenshot-2025-11-26-at-20.17.44-1.png" alt="" width="804" height="146" /></a></p><p>Using the optimize CPU capability on these instances, you can customize the number of vCPUs on license-included RDS for SQL Server instances. This enhancement is particularly valuable for database workloads that require high memory and input/output operations per second (IOPS), but lower vCPU counts.</p><p>This feature provides substantial benefits for your database operations. You can significantly reduce vCPU-based licensing costs while maintaining the same memory and IOPS performance levels your applications require. The capability supports higher memory-to-vCPU ratios and automatically disables hyperthreading while maintaining instance performance. Most importantly, you can fine-tune your CPU settings to precisely match your specific workload requirements, providing optimal resource utilization.</p><p>To get started, select SQL Server with an M7i or R7i instance type when creating a new database instance. 
Under <strong>Optimize CPU</strong>, select <strong>Configure the number of vCPUs</strong> and set your desired vCPU count.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/image-42-1.png"><img class="aligncenter size-full wp-image-102110" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/image-42-1.png" alt="" width="2184" height="572" /></a></p><p><strong>Additional storage volumes for RDS for Oracle and SQL Server<br /></strong> Amazon RDS for Oracle and Amazon RDS for SQL Server now support up to 256 TiB of storage, a fourfold increase in storage size per database instance, through the addition of up to three additional storage volumes.</p><p>The additional storage volumes provide extensive flexibility in managing your database storage needs. You can configure your volumes using both io2 and gp3 volumes to create an optimal storage strategy. You can store frequently accessed data on high-performance Provisioned IOPS SSD (io2) volumes while keeping historical data on cost-effective General Purpose SSD (gp3) volumes, which balances performance and cost. For temporary storage needs, such as month-end processing or data imports, you can add storage volumes as needed. After these operations are complete, you can empty the volumes and then remove them to reduce unnecessary storage costs.</p><p>These storage volumes offer operational flexibility with zero downtime; you can add or remove additional storage volumes without interrupting your database operations. You can also scale up multiple volumes in parallel to quickly meet growing storage demands. 
For Multi-AZ deployments, all additional storage volumes are automatically replicated to maintain high availability.</p><p>You can add storage volumes to new or existing database instances through the <a href="https://console.aws.amazon.com/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Management Console</a>, <a href="https://aws.amazon.com/cli?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a>, or <a href="https://builder.aws.com/build/tools?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS SDKs</a>.</p><p>Let me show you a quick example. I’ll add a storage volume to an existing RDS for Oracle database instance.</p><p>First, I navigate to the RDS console, then to my RDS for Oracle database instance detail page. I look under Configuration and I find the <strong>Additional storage volumes</strong> section.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/image-22-4.png"><img class="aligncenter size-full wp-image-101915" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/image-22-4.png" alt="" width="2266" height="556" /></a></p><p>You can add up to three additional storage volumes, and each must be named according to a naming convention. Storage volumes can’t have the same name, and you must choose from rdsdbdata2, rdsdbdata3, and rdsdbdata4. For RDS for Oracle database instances, I can add additional storage volumes to database instances with a primary storage volume size of 200 GiB or higher.</p><p>I’m going to add two volumes, so I choose <strong>Add additional storage volume</strong> and then fill in all the required information. I choose <code>rdsdbdata2</code> as the volume name and give it 12000 GiB of allocated storage with 60000 provisioned IOPS on an io2 storage type. 
For my second additional storage volume, <code>rdsdbdata3</code>, I choose to have 2000 GiB on gp3 with 15000 provisioned IOPS.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/image-23-3.png"><img class="aligncenter size-full wp-image-101934" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/image-23-3.png" alt="" width="2712" height="1550" /></a></p><p>After confirmation, I wait for Amazon RDS to process my request and then my additional volumes are available.</p><p>You can also use the AWS CLI to add volumes during creation of database instances or when modifying them.</p><p><strong>Things to know<br /></strong> These capabilities are now available in all commercial <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Regions</a> and the <a href="https://aws.amazon.com/govcloud-us?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS GovCloud (US)</a> Regions where Amazon RDS for Oracle and Amazon RDS for SQL Server are offered.</p><p>You can learn more about each of these capabilities in the Amazon RDS documentation for <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Developer Edition</a>, <a href="https://docs.aws.amazon.com/AmazonRDS/UserGuide/SQLServer.Concepts.General.OptimizeCPU.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">optimize CPU</a>, <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/User_Oracle_AdditionalStorage.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">additional storage volumes for RDS for Oracle</a> and <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.DatabaseStorage.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">additional storage 
volumes for RDS for SQL Server</a>.</p><p>To learn more about the unbundled pricing structure for M7i and R7i instances on RDS for SQL Server, visit the <a href="https://aws.amazon.com/rds/sqlserver/pricing/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon RDS for SQL Server pricing page</a>.</p><p>To get started with any of these capabilities, go to the <a href="https://console.aws.amazon.com/rds?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon RDS console</a> or learn more by visiting the <a href="https://docs.aws.amazon.com/rds?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon RDS documentation</a>.</p>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-rds-for-oracle-and-rds-for-sql-server-add-new-capabilities-to-enhance-performance-and-optimize-costs/"/>
    <updated>2025-12-02T17:09:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-database-savings-plans-for-aws-databases/</id>
    <title><![CDATA[Introducing Database Savings Plans for AWS Databases]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Since <a href="https://aws.amazon.com/?nc2=h_home">Amazon Web Services (AWS)</a> introduced <a href="https://aws.amazon.com/savingsplans/">Savings Plans</a>, customers have been able to lower the cost of running sustained workloads while maintaining the flexibility to manage usage across accounts, resource types, and <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a>. Today, we’re extending this flexible pricing model to AWS managed database services with the launch of Database Savings Plans, which help customers reduce database costs by up to 35% when they commit to a consistent amount of usage ($/hour) over a <strong>1-year</strong> term. Savings automatically apply each hour to eligible usage across supported database services, and any additional usage beyond the commitment is billed at on-demand rates.</p><p>As organizations build and manage data-driven and AI applications, they often use different database services, engines and deployment types, including instance-based and serverless options, to meet evolving business needs. Database Savings Plans provide the flexibility to choose how workloads run while maintaining cost efficiency. If customers are in the middle of a migration or modernization effort, they can switch database engines and adjust deployment types, such as from provisioned to serverless as part of ongoing cost optimization, while continuing to receive discounted rates. If a customer’s business expands globally, they can also shift usage across AWS Regions and continue to benefit from the same commitment. 
By applying a consistent hourly commitment, customers can maintain predictable spend even as usage patterns evolve and analyze coverage and utilization using familiar cost management tools.</p><p><strong class="c6">New Savings Plans<br /></strong> Each plan defines where pricing applies, the range of available discounts, and the level of flexibility provided across supported database engines, instance families, sizes, deployment options, or AWS Regions.</p><p>The hourly commitment automatically applies to all eligible usage regardless of Region, with support for <a href="https://aws.amazon.com/rds/aurora/?nc2=type_a">Amazon Aurora</a>, <a href="https://aws.amazon.com/rds/?nc2=type_a">Amazon Relational Database Service (Amazon RDS)</a>, <a href="https://aws.amazon.com/dynamodb/?nc2=type_a">Amazon DynamoDB</a>, <a href="https://aws.amazon.com/elasticache/?nc2=type_a">Amazon ElastiCache</a>, <a href="https://aws.amazon.com/documentdb/?nc2=type_a">Amazon DocumentDB (with MongoDB compatibility)</a>, <a href="https://aws.amazon.com/neptune/?nc2=type_a">Amazon Neptune</a>, <a href="https://aws.amazon.com/keyspaces/?nc2=type_a">Amazon Keyspaces (for Apache Cassandra)</a>, <a href="https://aws.amazon.com/timestream/?nc2=type_a">Amazon Timestream</a>, and <a href="https://aws.amazon.com/dms/?nc2=type_a">AWS Database Migration Service (AWS DMS)</a>. As new eligible database offerings, instance types, or Regions become available, Savings Plans will automatically apply to that usage.</p><p>Discounts vary by deployment model and service type. Serverless deployments provide up to 35% savings compared to on-demand rates. Provisioned instances across supported database services deliver up to 20% savings. For Amazon DynamoDB and Amazon Keyspaces, on-demand throughput workloads receive up to 18% savings, and provisioned capacity offers up to 12%. Together, these savings help customers optimize costs while maintaining consistent coverage for database usage. 
To learn more about the pricing and eligible usage, visit the <a href="https://aws.amazon.com/savingsplans/database-pricing/">Database Savings Plans pricing page</a>.</p><p><strong class="c6">Purchasing Database Savings Plans<br /></strong> The <a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/?nc2=type_a">AWS Billing and Cost Management Console</a> helps you choose Savings Plans and guides you through the purchase process. You can get started from the <a href="https://aws.amazon.com/console/?nc2=type_a">AWS Management Console</a> or use the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a> and the API. There are two ways to evaluate Database Savings Plans purchases: in the Recommendations view and in the Purchase Analyzer.</p><p><strong>Recommendations</strong> – are automatically generated from your recent on-demand usage. To reach the Recommendations view in the <a href="https://console.aws.amazon.com/cost-reports/home?region=us-east-1#/dashboard"><strong>Billing and Cost Management console</strong></a>, choose <strong>Savings and Commitments</strong>, <strong>Savings Plans</strong>, and <strong>Recommendations</strong> in the navigation pane. In the <strong>Recommendations</strong> view, select <strong>Database Savings Plans</strong> and configure the <strong>Recommendation options</strong>. AWS Savings Plans recommendations analyze your historical on-demand usage to identify the hourly commitment that delivers the highest overall savings.<a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/recommendation-1.png"><img class="aligncenter size-full wp-image-101049" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/recommendation-1.png" alt="" width="3006" height="990" /></a></p><p><strong>The Purchase Analyzer</strong> – is designed for modeling custom commitment levels. 
If you want to purchase a different amount than the recommended commitment on the <strong>Purchase Analyzer</strong> page, select <strong>Database Savings Plans</strong> and configure <strong>Lookback period</strong> and <strong>Hourly commitment</strong> to simulate alternative commitment levels and see the projected impact on <strong>Cost</strong>, <strong>Coverage</strong>, and <strong>Utilization</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/image-1-17.png"><img class="aligncenter size-full wp-image-101457" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/image-1-17.png" alt="" width="1206" height="970" /></a></p><p>This way is preferred if your purchasing strategy includes smaller, incremental commitments over time or if you expect future usage changes that could affect your ideal purchase amount.</p><p>After reviewing the recommendations or running simulations in Savings Plans Recommendations or Savings Plans Purchase Analyzer, choose <strong>Add to cart</strong> to proceed with your chosen commitment. If you prefer to purchase directly, you can also navigate to the <strong>Purchase Savings Plans</strong> page. 
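</p><p>The kind of evaluation the analyzer performs can be sketched roughly in Python. This is illustrative only; the metric definitions below are simplified assumptions of my own, not the console’s exact formulas (in particular, the drawdown here ignores the discounted rate).</p>

```python
# Rough sketch: evaluate a candidate $/hour commitment against a lookback
# period of hourly on-demand spend. Simplified; not the console's formulas.
def analyze_commitment(hourly_on_demand, commitment):
    """Return (coverage %, utilization %) for the lookback hours."""
    covered = sum(min(hour, commitment) for hour in hourly_on_demand)
    total_spend = sum(hourly_on_demand)
    total_committed = commitment * len(hourly_on_demand)
    coverage = 100 * covered / total_spend if total_spend else 0.0
    utilization = 100 * covered / total_committed if total_committed else 0.0
    return coverage, utilization
```

<p>For a spiky lookback of [4, 4, 6, 8, 4] dollars per hour, a $4/hour commitment is fully utilized but covers only part of the peaks; raising the commitment increases coverage while lowering utilization, which is the trade-off the simulation lets you explore.</p><p>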
The console updates estimated discounts and coverage in real time as you adjust each setting, so you can evaluate the impact before completing your order.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/purchase-savings-plans.png"><img class="aligncenter size-full wp-image-101459" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/purchase-savings-plans.png" alt="" width="2876" height="1222" /></a></p><p>You can learn more about how to choose and purchase Database Savings Plans by visiting the <a href="https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html">Savings Plans User Guide</a>.</p><p><strong class="c6">Now available<br /></strong> Database Savings Plans are available in all AWS Regions outside of China. Give them a try and start shaping your database strategy with more flexibility and predictable costs.</p><p>– <a href="https://www.linkedin.com/in/zhengyubin714/">Betty</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="0c4d304d-9e06-4fc3-a9ab-9b551ef376ff" data-title="Introducing Database Savings Plans for AWS Databases" data-url="https://aws.amazon.com/blogs/aws/introducing-database-savings-plans-for-aws-databases/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-database-savings-plans-for-aws-databases/"/>
    <updated>2025-12-02T17:09:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-cloudwatch-introduces-unified-data-management-and-analytics-for-operations-security-and-compliance/</id>
    <title><![CDATA[Amazon CloudWatch introduces unified data management and analytics for operations, security, and compliance]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today we’re expanding <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a> capabilities to unify and manage log data across operational, security, and compliance use cases with flexible and powerful analytics in one place and with reduced data duplication and costs.</p><p>This enhancement means that CloudWatch can automatically normalize and process data to offer consistency across sources with built-in support for <a href="https://docs.aws.amazon.com/security-lake/latest/userguide/open-cybersecurity-schema-framework.html">Open Cybersecurity Schema Framework (OCSF)</a> and <a href="https://opentelemetry.io/">OpenTelemetry (OTel)</a> formats, so you can focus on analytics and insights. CloudWatch also introduces Apache Iceberg-compatible access to your data through <a href="https://aws.amazon.com/s3/features/tables/">Amazon Simple Storage Service (Amazon S3) Tables</a>, so that you can run analytics, not only locally but also using <a href="https://aws.amazon.com/athena">Amazon Athena</a>, <a href="https://aws.amazon.com/sagemaker/unified-studio/">Amazon SageMaker Unified Studio</a>, or any other Iceberg-compatible tool.</p><p>You can also correlate your operational data in CloudWatch with business data from your preferred tools. 
This unified approach streamlines management and provides comprehensive correlation across security, operational, and business use cases.</p><p>Here are the detailed enhancements:</p><ul><li><strong>Streamline data ingestion and normalization</strong> – CloudWatch automatically collects AWS vended logs across accounts and AWS Regions, integrating with <a href="https://aws.amazon.com/organizations/">AWS Organizations</a> from AWS services including <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a>, <a href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud (Amazon VPC)</a> Flow Logs, <a href="https://aws.amazon.com/waf">AWS WAF</a> access logs, <a href="https://aws.amazon.com/route53">Amazon Route 53</a> resolver logs, and pre-built connectors for third-party sources such as endpoint (CrowdStrike, SentinelOne), identity (Okta, Entra ID), cloud security (Wiz), network security (Zscaler, Palo Alto Networks), productivity and collaboration (Microsoft Office 365, Windows Event Logs, and GitHub), along with IT service management through ServiceNow CMDB. To normalize and process your data as it is ingested, CloudWatch offers managed OCSF conversion for various AWS and third-party data sources and other processors such as Grok for custom parsing, field-level operations, and string manipulations.</li>
<li><strong>Reduce costly log data management</strong> – CloudWatch consolidates log management into a single service with built-in governance capabilities without storing and maintaining multiple copies of the same data across different tools and data stores. The unified data store of CloudWatch eliminates the need for complex ETL pipelines and reduces your operational costs and management overhead needed to maintain multiple separate data stores and tools.</li>
<li><strong>Discover business insights from log data</strong> – You can run queries in CloudWatch using natural language queries and popular query languages such as LogsQL, PPL, and SQL through a single interface, or query your data using your preferred analytics tools through Apache Iceberg-compatible tables. The new Facets interface gives you intuitive filtering by source, application, account, region, and log type, which you can use to run queries across log groups of multiple AWS accounts and Regions with intelligent parameter inference.</li>
</ul><p>In the next sections we explore the new log management and analytics features of CloudWatch Logs.</p><p><strong>1. Data discovery and management by data sources and types</strong></p><p>You can see a high-level overview of logs and all data sources with a new Logs Management View in the CloudWatch console. To get started, go to the <a href="https://console.aws.amazon.com/cloudwatch/home">CloudWatch console</a> and choose <strong>Log Management</strong> under the <strong>Logs</strong> menu in the left navigation pane. In the <strong>Summary</strong> tab, you can see your log data sources and types, insights into ingestion across your log groups, and anomalies.</p><p><img class="aligncenter wp-image-101615 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-cloudwatch-log-1-data-management-overview-1-1.png" alt="" width="2554" height="1833" /></p><p>Choose the <strong>Data sources</strong> tab to find and manage your log data by data sources, types, and fields. CloudWatch ingests and automatically categorizes data sources by AWS services, third-party, or custom sources such as application logs.</p><p><img class="aligncenter wp-image-101619 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-cloudwatch-log-management-2-data-sources-2.png" alt="" width="2428" height="1856" /></p><p>Choose <strong>Data source actions</strong> to integrate with S3 Tables and make future logs for the selected data sources available there. You have the flexibility to analyze the logs through Athena, Amazon Redshift, and other query engines such as Spark using Iceberg-compatible access patterns. 
With this integration, logs from CloudWatch are available in a read-only <code>aws-cloudwatch</code> S3 Tables bucket.</p><p>When you choose a specific data source such as CloudTrail data, you can view the details of the data source that includes information regarding data format, pipeline, facets/field indexes, S3 Tables association, and the number of logs with that data source. You can observe all log groups included in this data source and type and edit a source/type field index policy using the new schema support.</p><p><img class="aligncenter wp-image-102254 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-cloudwatch-log-management-3-data-sources-detail-2.png" alt="" width="2294" height="1370" /></p><p>To learn more about how to manage your data sources and index policy, visit <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/data-sources.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Data sources</a> in the Amazon CloudWatch Logs User Guide.</p><p><strong>2. Ingestion and transformation using CloudWatch pipelines</strong></p><p>You can create pipelines to streamline collecting, transforming, and routing telemetry and security data while standardizing data formats to optimize observability and security data management. The new pipeline feature of CloudWatch connects data from a catalogue of data sources, so that you can add and configure pipeline processors from a library to parse, enrich, and standardize data.</p><p><img class="aligncenter wp-image-102253 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-cloudwatch-log-management-4-pipeline-list-1.jpg" alt="" width="2064" height="466" /></p><p>In the <strong>Pipeline</strong> tab, choose <strong>Add pipeline</strong>. It shows you the pipeline configuration wizard. 
This wizard guides you through five steps where you can choose the data source and other source details such as log source types, configure destination, configure up to 19 processors to perform an action on your data (such as filtering, transforming, or enriching), and finally review and deploy the pipeline.</p><p><img class="aligncenter wp-image-101618 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-cloudwatch-log-management-4-pipeline-wizards.jpg" alt="" width="2560" height="1022" /></p><p>You also have the option to create pipelines through the new <strong>Ingestion</strong> experience in CloudWatch. To learn more about how to set up and manage the pipelines, visit <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-pipelines.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Pipelines</a> in the Amazon CloudWatch Logs User Guide.</p><p><strong>3. Enhanced analytics and querying based on data sources</strong></p><p>You can enhance analytics with support for Facets and querying based on data sources. Facets enable interactive exploration and drill-down into logs and their values are automatically extracted based on the selected time period.</p><p>Choose the <strong>Facets</strong> tab in the <strong>Log Insights</strong> under the <strong>Logs</strong> menu in the left navigation pane. You can view available facets and values that appear in the panel. Choose one or more facets and values to interactively explore your data. 
I choose facets for a VPC Flow Logs log group and action, use the AI query generator to list the five most frequent patterns in my VPC Flow Logs, and get the resulting patterns.</p><p><img class="aligncenter size-full wp-image-100830 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-cloudwatch-log-management-5-log-insights.png" alt="" width="2854" height="2287" /></p><p>You can save your query with the selected facets and values that you have specified. When you next choose your saved query, the logs to be queried have the pre-specified facets and values. To learn more about facet management, visit <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Facets.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Facets</a> in the CloudWatch Logs User Guide.</p><p>As I previously noted, you can integrate data sources into S3 Tables and query them together. For example, using the query editor in Athena, you can run a query that correlates network traffic with AWS API activity from a specific IP range (<code>174.163.137.*</code>) by joining VPC Flow Logs with CloudTrail logs based on matching source IP addresses.</p><p><img class="aligncenter size-full wp-image-101642 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/22/2025-cloudwatch-log-management-5-log-insights-athena.png" alt="" width="2514" height="2234" /></p><p>This type of integrated search is particularly valuable for security monitoring, incident investigation, and suspicious behavior detection. 
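</p><p>To make the join concrete, here is an illustrative Python version of the same correlation over made-up records. The field names and records below are simplified assumptions of my own, not the actual OCSF or table schemas.</p>

```python
# Correlate flow-log source IPs in a suspicious range with CloudTrail API
# calls from the same IPs. All records and field names are made up.
flow_logs = [
    {"srcaddr": "174.163.137.8", "dstport": 443},
    {"srcaddr": "10.0.0.5", "dstport": 22},
]
cloudtrail_events = [
    {"sourceIPAddress": "174.163.137.8", "eventName": "CreateUser"},
    {"sourceIPAddress": "192.0.2.10", "eventName": "DescribeInstances"},
]

def correlate(flows, events, prefix="174.163.137."):
    """Return (ip, API call) pairs where a matching source IP appears in both logs."""
    flow_ips = {f["srcaddr"] for f in flows if f["srcaddr"].startswith(prefix)}
    return [(e["sourceIPAddress"], e["eventName"])
            for e in events if e["sourceIPAddress"] in flow_ips]
```

<p>Here the only overlap is <code>174.163.137.8</code>, which both opened network connections and called <code>CreateUser</code>, exactly the kind of pairing worth investigating.</p><p>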
You can view if an IP that’s making network connections is also performing sensitive AWS operations such as creating users, modifying security groups, or accessing data.</p><p>To learn more, visit <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/s3-tables-integration.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">S3 Tables integration with CloudWatch</a> in the CloudWatch Logs User Guide.</p><p><strong class="c7">Now available</strong><br />New log management features of Amazon CloudWatch are available today in all AWS Regions except the AWS GovCloud (US) Regions and China Regions. For Regional availability and future roadmap, visit the <a class="c-link" href="https://builder.aws.com/capabilities/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el" target="_blank" rel="noopener noreferrer" data-stringify-link="https://builder.aws.com/capabilities/" data-sk="tooltip_parent">AWS Capabilities by Region</a>. There are no upfront commitments or minimum fees, and you pay for the usage of existing CloudWatch Logs for data ingestion, storage, and queries. To learn more, visit the <a href="https://aws.amazon.com/cloudwatch/pricing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">CloudWatch pricing page</a>.</p><p>Give it a try in the <a href="https://console.aws.amazon.com/cloudwatch/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">CloudWatch console</a>. 
To learn more, visit the <a href="https://aws.amazon.com/cloudwatch/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">CloudWatch product page</a> and send feedback to <a href="https://repost.aws/tags/TAK9UOZOiFRI2NrXQ-VpOPfQ/amazon-cloudwatch-logs?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for CloudWatch Logs</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="eb53a97d-aac8-4a55-8aa1-e7db452eb516" data-title="Amazon CloudWatch introduces unified data management and analytics for operations, security, and compliance" data-url="https://aws.amazon.com/blogs/aws/amazon-cloudwatch-introduces-unified-data-management-and-analytics-for-operations-security-and-compliance/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-cloudwatch-introduces-unified-data-management-and-analytics-for-operations-security-and-compliance/"/>
    <updated>2025-12-02T17:07:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/new-and-enhanced-aws-support-plans-add-ai-capabilities-to-expert-guidance/</id>
    <title><![CDATA[New and enhanced AWS Support plans add AI capabilities to expert guidance]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing a fundamental shift in how <a href="https://aws.amazon.com/premiumsupport/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Support</a> helps customers move from reactive problem-solving to proactive issue prevention. This evolution introduces new Support plans that combine AI-powered capabilities with <a href="https://aws.amazon.com/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Web Services (AWS)</a> expertise. The new and enhanced plans help you identify and address potential issues before they impact your business operations, helping you to operate and optimize your cloud workloads more effectively.</p><p>The portfolio includes three plans designed to match different operational needs. Each plan offers distinct capabilities, with higher tiers including all the capabilities of lower tiers plus additional features and enhanced service levels. Let’s have a look at them.</p><p><strong>New and enhanced AWS Support paid plans<br /></strong> <strong>Business Support+</strong> transforms the developer, startup, and small business experience by providing intelligent assistance powered by AI. You can choose to engage directly with AWS experts or start with AI-powered contextual recommendations that seamlessly transition to AWS experts when needed. AWS experts respond within 30 minutes for critical cases (twice as fast as before), maintaining previous context and saving you from having to repeat yourself.</p><p>With a low-cost monthly subscription, this plan delivers advanced operational capabilities through a combination of AI-powered tools and AWS expertise. 
The plan provides personalized recommendations to help optimize your workloads based on your specific environment, while maintaining seamless access to AWS experts for technical support when needed.</p><p><strong>Enterprise Support</strong> builds on our established support model; this tier accelerates innovation and cloud operations success through intelligent operations and AI-powered trusted human guidance. Your designated technical account manager (TAM) combines deep AWS knowledge with data-driven insights from your environment to help identify optimization opportunities and potential risks before they impact your operations. The plan also offers access to AWS Security Incident Response at no additional fee, a comprehensive service that centralizes tracking, storage, and management of security events while providing automated monitoring and investigation capabilities to strengthen your security posture.</p><p>Through AI-powered assistance and continuous monitoring of your AWS environment, this tier helps you achieve new levels of scale in your operations. With up to 15-minute response times for production-critical issues and support engineers who receive personalized context delivered by AI agents, this tier enables faster and more personalized resolution while maintaining operational excellence. Additionally, you get access to interactive programs and hands-on workshops to foster continuous technical growth.</p><p><strong>Unified Operations Support</strong> delivers our highest level of context-aware Support through an expanded team of AWS experts. Your core team, comprising a Technical Account Manager, a Domain Engineer, and a designated Senior Billing and Account Specialist, is complemented by on-demand experts in migration, incident management, and security. 
These designated experts understand your unique environment and operational history, providing guidance through your preferred collaboration channels while combining their architectural knowledge with AI-powered insights.</p><p>Through comprehensive around-the-clock monitoring and AI-powered automation, this tier strengthens your mission-critical operations with proactive risk identification and contextual guidance. When critical incidents occur, you receive 5-minute response times with technical recommendations provided by Support engineers who understand your workloads. The team conducts systematic application reviews, helps validate operational readiness, and supports business-critical events, which means you can focus on innovation while maintaining the highest levels of operational excellence.</p><p><strong>Transforming your cloud operations</strong><br />AWS Support is evolving to help you build, operate, and optimize your cloud infrastructure more effectively. We maintain context of your account’s support history, configuration, and previous cases, so our AI-powered capabilities and AWS experts can deliver more relevant and effective solutions tailored to your specific environment.</p><p>Support plan capabilities will continuously evolve to add comprehensive visibility into your infrastructure, delivering actionable insights across performance, security, and cost dimensions with clear evaluation of business impact and cost benefits. This combination of AI-powered tools and AWS expertise represents a fundamental shift from reactive to proactive operations, helping you prevent issues before they impact your business.</p><p>Subscribers of AWS Developer Support, AWS Business Support (classic), and AWS Enterprise On-Ramp Support plans can continue to receive their current level of support through January 1, 2027. 
You can transition to one of the new and enhanced plans at any time before then by visiting the AWS Management Console or by reaching out to your AWS account team. Customers subscribed to AWS Enterprise Support can begin using the new features of this plan at any time.</p><p><strong>Things to know<br /></strong> Business Support+, Enterprise Support, and Unified Operations are available in all commercial AWS Regions. Existing customers can continue their current plans or explore the new offerings for enhanced performance and efficiency.</p><p>Business Support+ starts at $29 per month, a 71% savings over the previous Business Support monthly minimum. Enterprise Support starts at $5,000 per month, a 67% savings over the previous Enterprise Support minimum price. Unified Operations, designed for organizations with mission-critical workloads and including a designated team of AWS experts, starts at $50,000 a month. All new Support plans use pricing tiers, which reward higher usage with lower marginal prices for Support.</p><p>For critical cases, AWS Support provides different target response times across the plans. 
Business Support+ offers a 30-minute response time, Enterprise Support responds within 15 minutes, and Unified Operations Support delivers the fastest response time at 5 minutes.</p><p>To learn more about AWS Support plans and features, visit the <a href="https://aws.amazon.com/premiumsupport?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Support page</a> or sign in to the <a href="https://console.aws.amazon.com?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Management Console</a>.</p><p>For hands-on guidance with AWS Support features, schedule a consultation with your account team.</p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="2a8d7168-c5f9-4e0a-9126-c39396c66d86" data-title="New and enhanced AWS Support plans add AI capabilities to expert guidance" data-url="https://aws.amazon.com/blogs/aws/new-and-enhanced-aws-support-plans-add-ai-capabilities-to-expert-guidance/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/new-and-enhanced-aws-support-plans-add-ai-capabilities-to-expert-guidance/"/>
    <updated>2025-12-02T17:07:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-opensearch-service-improves-vector-database-performance-and-cost-with-gpu-acceleration-and-auto-optimization/</id>
    <title><![CDATA[Amazon OpenSearch Service improves vector database performance and cost with GPU acceleration and auto-optimization]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today we’re announcing serverless GPU acceleration and auto-optimization for vector indexes in <a href="https://aws.amazon.com/opensearch-service/">Amazon OpenSearch Service</a> that help you build large-scale vector databases faster at lower cost and automatically optimize vector indexes for optimal trade-offs between search quality, speed, and cost.</p><p>Here are the new capabilities introduced today:</p><ul><li><strong>GPU acceleration</strong> – You can build vector databases up to 10 times faster at a quarter of the indexing cost when compared to non-GPU acceleration, and you can create billion-scale vector databases in under an hour. With significant gains in cost savings and speed, you get an advantage in time-to-market, innovation velocity, and adoption of vector search at scale.</li>
<li><strong>Auto-optimization</strong> – You can find the best balance between search latency, quality, and memory requirements for your vector field without needing vector expertise. This optimization delivers better cost savings and recall rates than default index configurations, without the weeks of effort that manual index tuning can take.</li>
</ul><p data-pm-slice="1 1 []">With these capabilities, you can build vector databases faster and more cost-effectively on OpenSearch Service to power generative AI applications, product catalog and knowledge base search, and more. You can enable GPU acceleration and auto-optimization when you create a new OpenSearch domain or collection, or when you update an existing one.</p><p>Let’s go through how it works!</p><p><strong class="c6">GPU acceleration for vector indexes</strong><br />When you enable GPU acceleration on your OpenSearch Service domain or Serverless collection, OpenSearch Service automatically detects opportunities to accelerate your vector indexing workloads and uses GPUs to build the vector data structures in your domain or collection.</p><p>You don’t need to provision GPU instances, manage their usage, or pay for idle time. OpenSearch Service securely isolates your accelerated workloads to your domain’s or collection’s <a href="https://aws.amazon.com/vpc">Amazon Virtual Private Cloud (Amazon VPC)</a> within your account. 
You pay only for useful processing through OpenSearch Compute Units (OCU) – Vector Acceleration pricing.</p><p>To enable GPU acceleration, go to the <a href="https://console.aws.amazon.com/aos/home">OpenSearch Service console</a> and choose <strong>Enable GPU Acceleration</strong> in the <strong>Advanced features</strong> section when you create or update your OpenSearch Service domain or Serverless collection.</p><p><img class="aligncenter size-full wp-image-100952 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/14/2025-opensearch-gpu-accel-auto-tune-1-gpu-setting.jpg" alt="" width="2238" height="960" /></p><p>You can use the following <a href="https://aws.amazon.com/cli">AWS Command Line Interface (AWS CLI)</a> command to enable GPU acceleration for an existing OpenSearch Service domain.</p><pre class="lang-bash">$ aws opensearch update-domain-config \
    --domain-name my-domain \
    --aiml-options '{"ServerlessVectorAcceleration": {"Enabled": true}}'
</pre><p>You can create a vector index optimized for GPU processing. This example index stores 768-dimensional vectors for text embeddings and enables <code>index.knn.remote_index_build.enabled</code> for accelerated index builds.</p><pre class="lang-json">PUT my-vector-index
{
    "settings": {
        "index.knn": true,
        "index.knn.remote_index_build.enabled": true
    },
    "mappings": {
        "properties": {
            "vector_field": {
                "type": "knn_vector",
                "dimension": 768
            },
            "text": {
                "type": "text"
            }
        }
    }
}</pre><p>Now you can add vector data and optimize your index with standard OpenSearch Service operations such as the bulk API. GPU acceleration is automatically applied to indexing and force-merge operations.</p><pre class="lang-json">POST my-vector-index/_bulk
{"index": {"_id": "1"}}
{"vector_field": [0.1, 0.2, 0.3, ...], "text": "Sample document 1"}
{"index": {"_id": "2"}}
{"vector_field": [0.4, 0.5, 0.6, ...], "text": "Sample document 2"}</pre><p>We ran index build benchmarks and observed speed gains from GPU acceleration ranging from 6.4 to 13.8 times. Stay tuned for more benchmarks and further details in upcoming posts.</p><p><img class="aligncenter size-full wp-image-101751 c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/2025-opensearchservice-gpu-acceralation-benchmark.png" alt="" width="1000" height="600" /></p><p>To learn more, visit <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/gpu-acceleration-vector-index.html">GPU acceleration for vector indexing</a> in the Amazon OpenSearch Service Developer Guide.</p><p><strong class="c6">Auto-optimizing vector databases</strong><br />You can use the new vector ingestion feature to ingest documents from <a href="https://aws.amazon.com/s3">Amazon Simple Storage Service (Amazon S3)</a>, generate vector embeddings, optimize indexes automatically, and build large-scale vector indexes in minutes. During ingestion, auto-optimization generates recommendations based on the vector fields and indexes of your OpenSearch Service domain or Serverless collection. 
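Auto-optimization trades search quality (recall) against latency and memory. As a rough illustration of the recall metric it targets, here is a minimal sketch (plain Python, not OpenSearch internals): recall@k is the fraction of the true top-k nearest neighbors that an approximate index actually returns.

```python
# Recall@k: how many of the exact top-k neighbors the approximate
# index returned. Illustrative sketch only, not OpenSearch code.
def recall_at_k(approx_ids, exact_ids, k):
    return len(set(approx_ids[:k]) & set(exact_ids[:k])) / k

exact = ["d1", "d2", "d3", "d4"]    # ground-truth nearest neighbors
approx = ["d1", "d3", "d4", "d9"]   # what the ANN index returned
print(recall_at_k(approx, exact, 4))  # 0.75 (missed d2)
```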
You can choose one of these recommendations to quickly ingest and index your vector dataset instead of manually configuring these mappings.</p><p>To get started, choose <strong>Vector ingestion</strong> under the <strong>Ingestion</strong> menu in the left navigation pane of the <a href="https://console.aws.amazon.com/aos/home">OpenSearch Service console</a>.</p><p><img class="aligncenter size-full wp-image-101470 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/2025-opensearch-service-auto-optimize-1.png" alt="" width="2342" height="1286" /></p><p>You can create a new vector ingestion job with the following steps:</p><ul><li><strong>Prepare dataset</strong> – Prepare your documents in Parquet format in an S3 bucket and choose a destination domain or collection.</li>
<li><strong>Configure index and automate optimizations</strong> – Auto-optimize your vector fields or manually configure them.</li>
<li><strong>Ingest and accelerate indexing</strong> – Use OpenSearch ingestion pipelines to load data from Amazon S3 into OpenSearch Service. Build large vector indexes up to 10 times faster at a quarter of the cost.</li>
</ul><p>In <strong>Step 2</strong>, configure your vector index with an auto-optimized vector field. Auto-optimize is currently limited to one vector field. You can add further index mappings after the auto-optimization job completes.</p><p><img class="aligncenter wp-image-101471 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/2025-opensearch-service-auto-optimize-3.png" alt="" width="2266" height="1679" /></p><p>Your vector field optimization settings depend on your use case. For example, if you need high search quality (recall rate) and don’t need the fastest responses, choose <strong>Modest</strong> for the <strong>Latency requirements (p90)</strong> and greater than or equal to <strong>0.9</strong> for the <strong>Acceptable search quality (recall)</strong>. When you create a job, it starts to ingest vector data and auto-optimize the vector index. The processing time depends on the vector dimensionality.</p><p>To learn more, visit <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-auto-optimize.html">Auto-optimize vector index</a> in the OpenSearch Service Developer Guide.</p><p><strong class="c6">Now available</strong><br />GPU acceleration in Amazon OpenSearch Service is now available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Europe (Ireland) Regions. Auto-optimization in OpenSearch Service is now available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland) Regions.</p><p>OpenSearch Service charges separately for the OCU – Vector Acceleration used to index your vector databases. 
For more information, visit the <a href="https://aws.amazon.com/opensearch-service/pricing/">OpenSearch Service pricing page</a>.</p><p>Give it a try and send feedback to <a href="https://repost.aws/tags/TA6VFzFFY6QQa_KlHRKR-WsA/amazon-opensearch-service">AWS re:Post for Amazon OpenSearch Service</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channyun">Channy</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-opensearch-service-improves-vector-database-performance-and-cost-with-gpu-acceleration-and-auto-optimization/"/>
    <updated>2025-12-02T17:06:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-s3-vectors-now-generally-available-with-increased-scale-and-performance/</id>
    <title><![CDATA[Amazon S3 Vectors now generally available with increased scale and performance]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, I’m excited to announce that <a href="https://aws.amazon.com/s3/features/vectors/">Amazon S3 Vectors</a> is now generally available with significantly increased scale and production-grade performance capabilities. S3 Vectors is the first cloud object storage with native support to store and query vector data. It can help you reduce the total cost of storing and querying vectors by up to 90% compared with specialized vector database solutions.</p><p>Since <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-s3-vectors-first-cloud-storage-with-native-vector-support-at-scale/">we announced the preview of S3 Vectors in July</a>, I’ve been impressed by how quickly you adopted this new capability to store and query vector data. In just over four months, you created over 250,000 vector indexes, ingested more than 40 billion vectors, and performed over 1 billion queries (as of November 28th).</p><p>You can now store and search across up to 2 billion vectors in a single index (up to 20 trillion vectors in a vector bucket), a 40x increase from the 50 million vectors per index supported during preview. This means that you can consolidate your entire vector dataset into one index, removing the need to shard across multiple smaller indexes or implement complex query federation logic.</p><p>Query performance has also been optimized. Infrequent queries continue to return results in under one second, while more frequent queries now see latencies around 100 ms or less, making S3 Vectors well suited for interactive applications such as conversational AI and multi-agent workflows. 
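As a quick sanity check on the scale numbers above (illustrative back-of-the-envelope arithmetic, not part of the announcement):

```python
# Back-of-the-envelope check of the quoted scale numbers.
preview_limit = 50_000_000          # vectors per index during preview
ga_limit = 2_000_000_000            # vectors per index at GA
bucket_limit = 20_000_000_000_000   # vectors per vector bucket at GA

print(ga_limit // preview_limit)    # 40  -> the "40x increase"
print(bucket_limit // ga_limit)     # 10000 full-size indexes per bucket
```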
You can also retrieve up to 100 search results per query, up from 30 previously, providing more comprehensive context for retrieval augmented generation (RAG) applications.</p><p>The write performance has also improved substantially, with support for up to 1,000 PUT transactions per second when streaming single-vector updates into your indexes, delivering significantly higher write throughput for small batch sizes. This higher throughput supports workloads where new data must be immediately searchable, helping you ingest small data corpora quickly or handle many concurrent sources writing simultaneously to the same index.</p><p>The fully serverless architecture removes infrastructure overhead—there’s no infrastructure to set up or resources to provision. You pay for what you use as you store and query vectors. This AI-ready storage provides you with quick access to any amount of vector data to support your complete AI development lifecycle, from initial experimentation and prototyping through to large-scale production deployments. S3 Vectors now provides the scale and performance needed for production workloads across AI agents, inference, semantic search, and RAG applications.</p><p>Two key integrations that were launched in preview are now generally available. <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-vectors-bedrock-kb.html">You can use S3 Vectors as a vector storage engine for Amazon Bedrock Knowledge Base</a>. In particular, you can use it to build RAG applications with production-grade scale and performance. 
Moreover, <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-vectors-opensearch.html">S3 Vectors integration with Amazon OpenSearch is now generally available</a>, so that you can use S3 Vectors as your vector storage layer while using OpenSearch for search and analytics capabilities.</p><p>You can now use S3 Vectors in 14 AWS Regions, expanding from five AWS Regions during the preview.</p><p><strong>Let’s see how it works<br /></strong> In this post, I demonstrate how to use S3 Vectors through the AWS Console and CLI.</p><p>First, I create an S3 Vector bucket and an index.</p><pre class="lang-sh">echo "Creating S3 Vector bucket..."
aws s3vectors create-vector-bucket \
    --vector-bucket-name "$BUCKET_NAME"
echo "Creating vector index..."
aws s3vectors create-index \
    --vector-bucket-name "$BUCKET_NAME" \
    --index-name "$INDEX_NAME" \
    --data-type "float32" \
    --dimension "$DIMENSIONS" \
    --distance-metric "$DISTANCE_METRIC" \
--metadata-configuration "nonFilterableMetadataKeys=AMAZON_BEDROCK_TEXT,AMAZON_BEDROCK_METADATA"</pre><p>The dimension must match the output dimension of the embedding model used to compute the vectors. The distance metric tells the algorithm how to compute the distance between vectors. S3 Vectors supports <a href="https://en.wikipedia.org/wiki/Cosine_similarity">cosine</a> and <a href="https://en.wikipedia.org/wiki/Euclidean_distance">Euclidean</a> distances.</p><p>I can also use the console to create the bucket. We’ve added the capability to configure encryption parameters at creation time. By default, indexes use bucket-level encryption, but I can override it at the index level with a custom <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> key.</p><p>I can also add tags for the vector bucket and vector index. Tags on the vector index help with access control and cost allocation.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-11-18_13-59-12.png"><img class="aligncenter size-large wp-image-101282" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-11-18_13-59-12-1024x613.png" alt="S3 Vector console - create" width="1024" height="613" /></a></p><p>And I can now manage <strong>Properties</strong> and <strong>Permissions</strong> directly in the console.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-11-18_14-00-11.png"><img class="aligncenter size-large wp-image-101281" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-11-18_14-00-11-1024x645.png" alt="S3 Vector console - properties" width="1024" height="645" /></a></p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-11-18_14-05-11.png"><img class="aligncenter size-large wp-image-101280" 
src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-11-18_14-05-11-1024x584.png" alt="S3 Vector console - create" width="1024" height="584" /></a></p><p>Similarly, I define <strong>Non-filterable metadata</strong> and I configure <strong>Encryption</strong> parameters for the vector index.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-11-18_14-06-58.png"><img class="aligncenter size-large wp-image-101283" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-11-18_14-06-58-1024x649.png" alt="S3 Vector console - create index" width="1024" height="649" /></a></p><p>Next, I create and store the embeddings (vectors). For this demo, I ingest my constant companion: the AWS Style Guide. This is an 800-page document that describes how to write posts, technical documentation, and articles at AWS.</p><p>I use <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html">Amazon Bedrock Knowledge Bases</a> to ingest the PDF document stored in a general purpose S3 bucket. Amazon Bedrock Knowledge Bases reads the document and splits it into pieces called chunks. Then, it computes the embeddings for each chunk with the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html">Amazon Titan Text Embeddings</a> model and stores the vectors and their metadata in my newly created vector bucket. The detailed steps for that process are out of scope for this post, but you can read <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-vectors-bedrock-kb.html">the instructions in the documentation</a>.</p><p>You can store up to 50 metadata keys per vector, with up to 10 marked as non-filterable. You can use the filterable metadata keys to filter query results based on specific attributes. 
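Filterable metadata works together with similarity ranking. Here is a plain-Python sketch of the idea (illustrative only; S3 Vectors performs this server-side, and the field names are made up):

```python
import math

# Illustrative sketch of filtered vector search: apply the metadata
# filter first, then rank survivors by cosine distance (1 - similarity).
def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def query(vectors, query_vec, metadata_filter, top_k=3):
    candidates = [
        v for v in vectors
        if all(v["metadata"].get(k) == val for k, val in metadata_filter.items())
    ]
    candidates.sort(key=lambda v: cosine_distance(v["vector"], query_vec))
    return candidates[:top_k]

docs = [
    {"key": "a", "vector": [1.0, 0.0], "metadata": {"lang": "en"}},
    {"key": "b", "vector": [0.9, 0.1], "metadata": {"lang": "fr"}},
    {"key": "c", "vector": [0.0, 1.0], "metadata": {"lang": "en"}},
]
print([d["key"] for d in query(docs, [1.0, 0.0], {"lang": "en"})])  # ['a', 'c']
```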
Therefore, you can combine vector similarity search with metadata conditions to narrow down results. You can store larger contextual information as non-filterable metadata. Amazon Bedrock Knowledge Bases computes and stores the vectors. It also adds large metadata (the chunk of the original text). I exclude this metadata from the searchable index.</p><p>There are other methods to ingest your vectors. You can try the <a href="https://github.com/awslabs/s3vectors-embed-cli">S3 Vectors Embed CLI</a>, a command line tool that helps you generate embeddings using Amazon Bedrock and store them in S3 Vectors through direct commands. You can also use S3 Vectors <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-vectors-opensearch.html">as a vector storage engine for OpenSearch</a>.</p><p>Now I’m ready to query my vector index. Let’s imagine I wonder how to write “open source”. Is it “open-source”, with a hyphen, or “open source” without a hyphen? Should I use uppercase or not? I want to search the relevant sections of the AWS Style Guide related to “open source.”</p><pre class="lang-sh"># 1. Create embedding request
echo '{"inputText":"Should I write open source or open-source"}' | base64 | tr -d '\n' &gt; body_encoded.txt
# 2. Compute the embeddings with Amazon Titan Embed model
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-embed-text-v2:0 \
  --body "$(cat body_encoded.txt)" \
  embedding.json
# 3. Search the S3 Vectors index for similar chunks
vector_array=$(cat embedding.json | jq '.embedding') &amp;&amp; \
aws s3vectors query-vectors \
  --index-arn "$S3_VECTOR_INDEX_ARN" \
  --query-vector "{\"float32\": $vector_array}" \
  --top-k 3 \
  --return-metadata \
  --return-distance | jq -r '.vectors[] | "Distance: \(.distance) | Source: \(.metadata."x-amz-bedrock-kb-source-uri" | split("/")[-1]) | Text: \(.metadata.AMAZON_BEDROCK_TEXT[0:100])..."'</pre><p>The first result shows this JSON:</p><pre class="lang-json">{
    "key": "348e0113-4521-4982-aecd-0ee786fa4d1d",
    "metadata": {
        "x-amz-bedrock-kb-data-source-id": "0SZY6GYPVS",
        "x-amz-bedrock-kb-source-uri": "s3://sst-aws-docs/awsstyleguide.pdf",
        "AMAZON_BEDROCK_METADATA": "{\"createDate\":\"2025-10-21T07:49:38Z\",\"modifiedDate\":\"2025-10-23T17:41:58Z\",\"source\":{\"sourceLocation\":\"s3://sst-aws-docs/awsstyleguide.pdf\"",
        "AMAZON_BEDROCK_TEXT": "[redacted] open source (adj., n.) Two words. Use open source as an adjective (for example, open source software), or as a noun (for example, the code throughout this tutorial is open source). Don't use open-source, opensource, or OpenSource. [redacted]",
        "x-amz-bedrock-kb-document-page-number": 98.0
    },
    "distance": 0.63120436668396
}</pre><p>It finds the relevant section in the AWS Style Guide. I must write “open source” without a hyphen. It even retrieved the page number in the original document to help me cross-check the suggestion with the relevant paragraph in the source document.</p><p><strong>One more thing<br /></strong> S3 Vectors has also expanded its integration capabilities. You can now use <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a> to deploy and manage your vector resources, <a href="https://aws.amazon.com/privatelink/">AWS PrivateLink</a> for private network connectivity, and resource tagging for cost allocation and access control.</p><p><strong>Pricing and availability<br /></strong> S3 Vectors is now available in 14 AWS Regions, adding Asia Pacific (Mumbai, Seoul, Singapore, Tokyo), Canada (Central), and Europe (Ireland, London, Paris, Stockholm) to the existing five Regions from preview (US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt)).</p><p>Amazon S3 Vectors pricing is based on three dimensions. <strong>PUT pricing</strong> is calculated based on the logical GB of vectors you upload, where each vector includes its logical vector data, metadata, and key. <strong>Storage costs</strong> are determined by the total logical storage across your indexes. <strong>Query charges</strong> include a per-API-call charge plus a $/TB charge based on your index size (excluding non-filterable metadata). As your index scales beyond 100,000 vectors, you benefit from lower $/TB pricing. <a href="https://aws.amazon.com/s3/pricing/">As usual, the Amazon S3 pricing page has the details</a>.</p><p>To get started with S3 Vectors, visit the <a href="https://console.aws.amazon.com/s3/vector-buckets">Amazon S3 console</a>. You can create vector indexes, start storing your embeddings, and begin building scalable AI applications. 
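To make the PUT and storage dimensions above concrete, here is a rough logical-size estimate for one float32 vector (illustrative arithmetic only; the metadata and key sizes are made-up examples, and actual billing details are on the pricing page):

```python
# Rough logical size of one vector: float32 data + metadata + key.
# Illustrative arithmetic only; sizes below are example values.
def logical_vector_bytes(dimension, metadata_bytes, key_bytes):
    return dimension * 4 + metadata_bytes + key_bytes  # float32 = 4 bytes

one_vector = logical_vector_bytes(dimension=768, metadata_bytes=512, key_bytes=36)
print(one_vector)                     # 3620 bytes per vector
print(one_vector * 10_000_000 / 1e9)  # 36.2 logical GB for 10 million vectors
```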
For more information, check out the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-vectors.html">Amazon S3 User Guide</a> or the <a href="https://docs.aws.amazon.com/cli/latest/reference/s3vectors/">AWS CLI Command Reference</a>.</p><p>I look forward to seeing what you build with these new capabilities. Please share your feedback through <a href="https://repost.aws/">AWS re:Post</a> or your usual <a href="https://aws.amazon.com/contact-us/">AWS Support contacts</a>.</p><p><a href="https://linktr.ee/sebsto">— seb</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-s3-vectors-now-generally-available-with-increased-scale-and-performance/"/>
    <updated>2025-12-02T17:06:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-bedrock-adds-fully-managed-open-weight-models/</id>
    <title><![CDATA[Amazon Bedrock adds 18 fully managed open weight models, including the new Mistral Large 3 and Ministral 3 models]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing the general availability of an additional 18 fully managed open weight models in <a href="https://aws.amazon.com/bedrock/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock</a> from Google, Moonshot AI, MiniMax AI, <a href="https://aws.amazon.com/bedrock/mistral/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Mistral AI</a>, NVIDIA, <a href="https://aws.amazon.com/bedrock/openai/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">OpenAI</a>, and <a href="https://aws.amazon.com/bedrock/qwen/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Qwen</a>, including the new Mistral Large 3 and Ministral 3 3B, 8B, and 14B models.</p><p>With this launch, Amazon Bedrock now provides nearly 100 serverless models, offering a broad and deep range of models from leading AI companies, so customers can choose the precise capabilities that best serve their unique needs. By closely monitoring both customer needs and technological advancements, we regularly expand <a href="https://aws.amazon.com/bedrock/model-choice/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">our curated selection of models</a> to include promising new models alongside established industry favorites.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/Amazon-Bedrock-Model-Provider-Portfolio-reInvent-2025.png"><img class="alignnone wp-image-102511 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/Amazon-Bedrock-Model-Provider-Portfolio-reInvent-2025.png" alt="" width="1370" height="572" /></a></p><p>This ongoing expansion of high-performing and differentiated model offerings helps customers stay at the forefront of AI innovation. 
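Because all of these models sit behind the same request shape, switching among them is essentially a one-parameter change. A minimal sketch (the structure follows the Bedrock Converse API as used with the boto3 `bedrock-runtime` client; the model IDs below are made-up placeholders, not real identifiers):

```python
# Sketch of model switching with a unified request shape, as with the
# Bedrock Converse API. Model IDs here are illustrative placeholders.
def build_converse_request(model_id, prompt):
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512},
    }

# Swapping models changes only modelId; messages stay identical.
for model in ["mistral.example-large-3", "qwen.example-3-next-80b"]:
    req = build_converse_request(model, "Summarize our Q3 report.")
    print(req["modelId"])
```

In a real application, the returned dictionary's fields would be passed as keyword arguments to the boto3 `converse` call.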
You can access these models on Amazon Bedrock through a unified API, and you can evaluate, switch, and adopt new models without rewriting applications or changing infrastructure.</p><p><strong class="c6">New Mistral AI models</strong><br />These four Mistral AI models are now available first on Amazon Bedrock, each optimized for different performance and cost requirements:</p><ul><li><strong>Mistral Large 3</strong> – This open weight model is optimized for long-context understanding, multimodal tasks, and instruction reliability. It excels in long document understanding, agentic and tool use workflows, enterprise knowledge work, coding assistance, advanced workloads such as math and coding tasks, multilingual analysis and processing, and multimodal reasoning with vision.</li>
<li><strong>Ministral 3 3B</strong> – The smallest in the Ministral 3 family is edge-optimized for single GPU deployment with strong language and vision capabilities. It shows robust performance in image captioning, text classification, real-time translation, data extraction, short content generation, and lightweight real-time applications on edge or low-resource devices.</li>
<li><strong>Ministral 3 8B</strong> – The best-in-class Ministral 3 model for text and vision is edge-optimized for single GPU deployment with high performance and minimal footprint. This model is ideal for chat interfaces in constrained environments, image and document description and understanding, specialized agentic use cases, and balanced performance for local or embedded systems.</li>
<li><strong>Ministral 3 14B</strong> – The most capable Ministral 3 model delivers state-of-the-art text and vision performance optimized for single GPU deployment. You can use it for advanced local agentic use cases and private AI deployments where advanced capabilities meet practical hardware constraints.</li>
</ul><p><strong class="c6">More open weight model options</strong><br />You can use these open weight models for a wide range of use cases across industries:</p><table class="c12"><tbody><tr class="c8"><td class="c7"><strong>Model provider</strong></td>
<td class="c7"><strong>Model name</strong></td>
<td class="c7"><strong>Description</strong></td>
<td class="c7"><strong>Use cases</strong></td>
</tr><tr class="c10"><td class="c9" rowspan="3"><strong>Google</strong></td>
<td class="c9"><a href="https://huggingface.co/google/gemma-3-4b-it">Gemma 3 4B</a></td>
<td class="c9">Efficient text and image model that runs locally on laptops. Multilingual support for on-device AI applications.</td>
<td class="c9">On-device AI for mobile and edge applications, privacy-sensitive local inference, multilingual chat assistants, image captioning and description, and lightweight content generation.</td>
</tr><tr class="c10"><td class="c9"><a href="https://huggingface.co/google/gemma-3-12b-it">Gemma 3 12B</a></td>
<td class="c9">Balanced text and image model for workstations. Multi-language understanding with local deployment for privacy-sensitive applications.</td>
<td class="c9">Workstation-based AI applications; local deployment for enterprises; multilingual document processing, image analysis and Q&amp;A; and privacy-compliant AI assistants.</td>
</tr><tr class="c10"><td class="c9"><a href="https://huggingface.co/collections/google/gemma-3-release">Gemma 3 27B</a></td>
<td class="c9">Powerful text and image model for enterprise applications. Multi-language support with local deployment for privacy and control.</td>
<td class="c9">Enterprise local deployment, high-performance multimodal applications, advanced image understanding, multilingual customer service, and data-sensitive AI workflows.</td>
</tr><tr class="c10"><td class="c9"><strong>Moonshot AI</strong></td>
<td class="c9"><a href="https://huggingface.co/moonshotai/Kimi-K2-Thinking">Kimi K2 Thinking</a></td>
<td class="c9">Deep reasoning model that thinks while using tools. Handles research, coding and complex workflows requiring hundreds of sequential actions.</td>
<td class="c9">Complex coding projects requiring planning, multistep workflows, data analysis and computation, and long-form content creation with research.</td>
</tr><tr class="c10"><td class="c11"><strong>MiniMax AI</strong></td>
<td class="c9"><a href="https://huggingface.co/MiniMaxAI/MiniMax-M2">MiniMax M2</a></td>
<td class="c9">Built for coding agents and automation. Excels at multi-file edits, terminal operations and executing long tool-calling chains efficiently.</td>
<td class="c9">Coding agents and integrated development environment (IDE) integration, multi-file code editing, terminal automation and DevOps, long-chain tool orchestration, and agentic software development.</td>
</tr><tr class="c10"><td class="c9" rowspan="3"><strong>Mistral AI</strong></td>
<td class="c9"><a href="https://huggingface.co/mistralai/Magistral-Small-2509">Magistral Small 1.2</a></td>
<td class="c9">Excels at math, coding, multilingual tasks, and multimodal reasoning with vision capabilities for efficient local deployment.</td>
<td class="c9">Math and coding tasks, multilingual analysis and processing, and multimodal reasoning with vision.</td>
</tr><tr class="c10"><td class="c9"><a href="https://huggingface.co/mistralai/Voxtral-Mini-3B-2507">Voxtral Mini 1.0</a></td>
<td class="c9">Advanced audio understanding model with transcription, multilingual support, Q&amp;A, summarization, and function-calling.</td>
<td class="c9">Voice-controlled applications, fast speech-to-text conversion, and offline voice assistants.</td>
</tr><tr class="c10"><td class="c9"><a href="https://huggingface.co/mistralai/Voxtral-Small-24B-2507">Voxtral Small 1.0</a></td>
<td class="c9">Features state-of-the-art audio input with best-in-class text performance; excels at speech transcription, translation, and understanding.</td>
<td class="c9">Enterprise speech transcription, multilingual customer service, and audio content summarization.</td>
</tr><tr class="c10"><td class="c9" rowspan="2"><strong>NVIDIA</strong></td>
<td class="c9"><a href="https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2">NVIDIA Nemotron Nano 2 9B</a></td>
<td class="c9">High-efficiency LLM with a hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks.</td>
<td class="c9">Reasoning, tool calling, math, coding, and instruction following.</td>
</tr><tr class="c10"><td class="c9"><a href="https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16">NVIDIA Nemotron Nano 2 VL 12B</a></td>
<td class="c9">Advanced multimodal reasoning model for video understanding and document intelligence, powering Retrieval-Augmented Generation (RAG) and multimodal agentic applications.</td>
<td class="c9">Multi-image and video understanding, visual Q&amp;A, and summarization.</td>
</tr><tr class="c10"><td class="c9" rowspan="2"><strong>OpenAI</strong></td>
<td class="c9"><a href="https://huggingface.co/openai/gpt-oss-safeguard-20b">gpt-oss-safeguard-20b</a></td>
<td class="c9">Content safety model that applies your custom policies. Classifies harmful content with explanations for trust and safety workflows.</td>
<td class="c9">Content moderation and safety classification, custom policy enforcement, user-generated content filtering, trust and safety workflows, and automated content triage.</td>
</tr><tr class="c10"><td class="c9"><a href="https://huggingface.co/openai/gpt-oss-safeguard-120b">gpt-oss-safeguard-120b</a></td>
<td class="c9">Larger content safety model for complex moderation. Applies custom policies with detailed reasoning for enterprise trust and safety teams.</td>
<td class="c9">Enterprise content moderation at scale, complex policy interpretation, multilayered safety classification, regulatory compliance checking, high-stakes content review.</td>
</tr><tr class="c10"><td class="c9" rowspan="2"><strong>Qwen</strong></td>
<td class="c9"><a href="https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct">Qwen3-Next-80B-A3B</a></td>
<td class="c9">Fast inference with hybrid attention for ultra-long documents. Optimized for RAG pipelines, tool use &amp; agentic workflows with quick responses.</td>
<td class="c9">RAG pipelines with long documents, agentic workflows with tool calling, code generation and software development, multi-turn conversations with extended context, multilingual content generation.</td>
</tr><tr class="c10"><td class="c9"><a href="https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Instruct">Qwen3-VL-235B-A22B</a></td>
<td class="c9">Understands images and video. Extracts text from documents, converts screenshots to working code, and automates clicking through interfaces.</td>
<td class="c9">Extracting text from images and PDFs, converting UI designs or screenshots to working code, automating clicks and navigation in applications, video analysis and understanding, reading charts and diagrams.</td>
</tr></tbody></table><p>When implementing publicly available models, give careful consideration to data privacy requirements in your production environments, check for bias in output, and monitor your results for data security, <a href="https://aws.amazon.com/ai/responsible-ai/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">responsible AI</a>, and <a href="https://aws.amazon.com/bedrock/evaluations/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">model evaluation</a> considerations.</p><p>You can access the <a href="https://aws.amazon.com/bedrock/security-compliance/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">enterprise-grade security features</a> of Amazon Bedrock and implement safeguards customized to your application requirements and responsible AI policies with <a href="https://aws.amazon.com/bedrock/guardrails/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock Guardrails</a>. You can also evaluate and compare models to identify the optimal models for your use cases by using <a href="https://aws.amazon.com/blogs/aws/amazon-bedrock-model-evaluation-is-now-generally-available/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock model evaluation tools</a>.</p><p>To get started, you can quickly test these models with a few prompts in the playground of the <a href="https://console.aws.amazon.com/bedrock/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock console</a> or use any of the <a href="https://aws.amazon.com/tools/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS SDKs</a> to access the Bedrock <a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">InvokeModel</a> and <a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Converse</a> APIs.
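If you use the AWS SDK for Python (boto3), a Converse call might look like the following sketch. The model ID is a placeholder, not a real identifier; look up the exact ID for your chosen model and Region in the Amazon Bedrock documentation.

```python
# Sketch of calling a Bedrock model via the Converse API with boto3.
# "example-model-id" is a placeholder; substitute the real model ID.

def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build keyword arguments for a bedrock-runtime Converse call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
    }

def converse(model_id: str, prompt: str) -> str:
    import boto3  # requires AWS credentials with Amazon Bedrock access

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(model_id, prompt))
    # The Converse API returns the assistant reply under output.message.
    return response["output"]["message"]["content"][0]["text"]
```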
You can also use these models with any agentic framework that supports Amazon Bedrock and deploy the agents using <a href="https://aws.amazon.com/bedrock/agentcore/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock AgentCore</a> and <a href="https://strandsagents.com/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Strands Agents</a>. To learn more, visit <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/service_code_examples.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Code examples for Amazon Bedrock using AWS SDKs</a> in the Amazon Bedrock User Guide.</p><p><strong class="c6">Now available</strong><br />Check the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">full Region list</a> for availability and future updates of new models or search your model name in the <a href="https://aws.amazon.com/cloudformation/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS CloudFormation</a> resources tab of <a href="https://builder.aws.com/build/capabilities/explore?tab=cfn-resources&amp;trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Capabilities by Region</a>. 
To learn more, check out the <a href="https://aws.amazon.com/bedrock/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock product page</a> and the <a href="https://aws.amazon.com/bedrock/pricing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock pricing page</a>.</p><p>Give these models a try in the <a href="https://console.aws.amazon.com/bedrock?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock console</a> today and send feedback to <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag/amazon-bedrock?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for Amazon Bedrock</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="fceb2d09-2d64-4331-a754-db9527a41ecf" data-title="Amazon Bedrock adds 18 fully managed open weight models, including the new Mistral Large 3 and Ministral 3 models" data-url="https://aws.amazon.com/blogs/aws/amazon-bedrock-adds-fully-managed-open-weight-models/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-bedrock-adds-fully-managed-open-weight-models/"/>
    <updated>2025-12-02T17:05:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-amazon-ec2-x8aedz-instances-powered-by-5th-gen-amd-epyc-processors-for-memory-intensive-workloads/</id>
    <title><![CDATA[Introducing Amazon EC2 X8aedz instances powered by 5th Gen AMD EPYC processors for memory-intensive workloads]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the availability of new memory-optimized, high-frequency <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> X8aedz instances powered by a 5th Gen AMD EPYC processor. These instances offer the highest CPU frequency in the cloud, at 5 GHz. They deliver up to two times higher compute performance and up to 31% better price performance compared to previous-generation X2iezn instances.</p><p>X8aedz instances are ideal for electronic design automation (EDA) workloads, such as physical layout and physical verification jobs, and relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis. The high memory-to-vCPU ratio of 32:1 makes these instances particularly effective for applications with vCPU-based licensing models.</p><p>Let me explain the instance type naming: The “a” suffix indicates an AMD processor, “e” denotes extended memory in the memory-optimized instance family, “d” represents local NVMe-based SSDs physically connected to the host server, and “z” indicates high-frequency processors.</p><p><strong class="c6">X8aedz instances</strong><br />X8aedz instances are available in eight sizes ranging from 2–96 vCPUs with 64–3,072 GiB of memory, including two bare metal sizes.
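As a quick arithmetic check, the 32:1 memory-to-vCPU ratio holds for every size in the spec table that follows; here is a small Python sanity check using those published numbers:

```python
# Sanity check: every X8aedz size keeps the 32:1 memory (GiB) to vCPU
# ratio. (vCPUs, memory) pairs are taken from the spec table in this post;
# the metal sizes share the 12xlarge and 24xlarge values.
SIZES = {
    "x8aedz.large": (2, 64),
    "x8aedz.xlarge": (4, 128),
    "x8aedz.3xlarge": (12, 384),
    "x8aedz.6xlarge": (24, 768),
    "x8aedz.12xlarge": (48, 1536),
    "x8aedz.24xlarge": (96, 3072),
}

def memory_per_vcpu(vcpus: int, memory_gib: int) -> float:
    return memory_gib / vcpus

assert all(memory_per_vcpu(v, m) == 32 for v, m in SIZES.values())
```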
X8aedz instances feature up to 75 Gbps of network bandwidth with support for the <a href="https://aws.amazon.com/hpc/efa/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Elastic Fabric Adapter (EFA)</a>, up to 60 Gbps of throughput to the <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a>, and up to 8 TB of local NVMe SSD storage.</p><p>Here are the specs for X8aedz instances:</p><table class="c10"><tbody><tr class="c8"><td class="c7"><strong>Instance name</strong></td>
<td class="c7"><strong>vCPUs</strong></td>
<td class="c7"><strong>Memory<br /></strong> <strong>(GiB)</strong></td>
<td class="c7"><strong>NVMe SSD storage (GB)</strong></td>
<td class="c7"><strong>Network bandwidth (Gbps)</strong></td>
<td class="c7"><strong>EBS bandwidth (Gbps)</strong></td>
</tr><tr class="c9"><td class="c7"><strong>x8aedz.large</strong></td>
<td class="c7">2</td>
<td class="c7">64</td>
<td class="c7">158</td>
<td class="c7">Up to 18.75</td>
<td class="c7">Up to 15</td>
</tr><tr class="c9"><td class="c7"><strong>x8aedz.xlarge</strong></td>
<td class="c7">4</td>
<td class="c7">128</td>
<td class="c7">316</td>
<td class="c7">Up to 18.75</td>
<td class="c7">Up to 15</td>
</tr><tr class="c9"><td class="c7"><strong>x8aedz.3xlarge</strong></td>
<td class="c7">12</td>
<td class="c7">384</td>
<td class="c7">950</td>
<td class="c7">Up to 18.75</td>
<td class="c7">Up to 15</td>
</tr><tr class="c9"><td class="c7"><strong>x8aedz.6xlarge</strong></td>
<td class="c7">24</td>
<td class="c7">768</td>
<td class="c7">1,900</td>
<td class="c7">18.75</td>
<td class="c7">15</td>
</tr><tr class="c9"><td class="c7"><strong>x8aedz.12xlarge</strong></td>
<td class="c7">48</td>
<td class="c7">1,536</td>
<td class="c7">3,800</td>
<td class="c7">37.5</td>
<td class="c7">30</td>
</tr><tr class="c9"><td class="c7"><strong>x8aedz.24xlarge</strong></td>
<td class="c7">96</td>
<td class="c7">3,072</td>
<td class="c7">7,600</td>
<td class="c7">75</td>
<td class="c7">60</td>
</tr><tr class="c9"><td class="c7"><strong>x8aedz.metal-12xl</strong></td>
<td class="c7">48</td>
<td class="c7">1,536</td>
<td class="c7">3,800</td>
<td class="c7">37.5</td>
<td class="c7">30</td>
</tr><tr class="c9"><td class="c7"><strong>x8aedz.metal-24xl</strong></td>
<td class="c7">96</td>
<td class="c7">3,072</td>
<td class="c7">7,600</td>
<td class="c7">75</td>
<td class="c7">60</td>
</tr></tbody></table><p>With the 60 Gbps Amazon EBS bandwidth and up to 8 TB of local NVMe SSD storage, you can achieve faster database response times and reduced latency for EDA operations, ultimately accelerating time-to-market for chip designs. These instances also support the instance bandwidth configuration feature that offers flexibility in allocating resources between network and EBS bandwidth. You can scale network or EBS bandwidth by 25% and improve database (read and write) performance, query processing, and logging speeds.</p><p>X8aedz instances use sixth-generation <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro</a> cards, which offload CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing performance and security for your workloads.</p><p><strong class="c6">Now available</strong><br />Amazon EC2 X8aedz instances are now available in US West (Oregon) and Asia Pacific (Tokyo) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Regions</a>, and additional Regions will be coming soon. 
For Regional availability and future roadmap, search the instance type in the <a href="https://aws.amazon.com/cloudformation/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS CloudFormation</a> resources tab of the <a href="https://builder.aws.com/build/capabilities/explore?tab=cfn-resources&amp;trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Capabilities by Region</a>.</p><p>You can purchase these instances as <a href="https://aws.amazon.com/ec2/pricing/on-demand/?trk=cf96f8ec-de40-4ee0-8b64-3f7cf7660da2&amp;sc_channel=el">On-Demand</a>, <a href="https://aws.amazon.com/savingsplans/?trk=cc9e0036-98c5-4fa8-8df0-5281f75284ca&amp;sc_channel=el">Savings Plan</a>, <a href="https://aws.amazon.com/ec2/spot/pricing/?trk=307341f6-3463-47d5-ba81-0957847a9b73&amp;sc_channel=el">Spot Instances</a>, and <a href="https://aws.amazon.com/ec2/pricing/dedicated-instances/">Dedicated Instances</a>. To learn more, visit the <a href="https://aws.amazon.com/ec2/pricing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon EC2 Pricing page</a>.</p><p>Give X8aedz instances a try in the <a href="https://console.aws.amazon.com/ec2/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon EC2 console</a>. 
To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/x8aedz/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon EC2 X8aedz instances page</a> and send feedback to <a href="https://repost.aws/tags/TAO-wqN9fYRoyrpdULLa5y7g/amazon-ec-2?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for EC2</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="7d5f4cf5-af86-4dc9-909a-2a4e3400c611" data-title="Introducing Amazon EC2 X8aedz instances powered by 5th Gen AMD EPYC processors for memory-intensive workloads" data-url="https://aws.amazon.com/blogs/aws/introducing-amazon-ec2-x8aedz-instances-powered-by-5th-gen-amd-epyc-processors-for-memory-intensive-workloads/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-amazon-ec2-x8aedz-instances-powered-by-5th-gen-amd-epyc-processors-for-memory-intensive-workloads/"/>
    <updated>2025-12-02T17:05:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-devops-agent-helps-you-accelerate-incident-response-and-improve-system-reliability-preview/</id>
    <title><![CDATA[AWS DevOps Agent helps you accelerate incident response and improve system reliability (preview)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the public preview of AWS DevOps Agent, a <a href="https://aws.amazon.com/ai/frontier-agents">frontier agent</a> that helps you respond to incidents, identify root causes, and prevent future issues through systematic analysis of past incidents and operational patterns.</p><p>Frontier agents represent a new class of AI agents that are autonomous, massively scalable, and work for hours or days without constant intervention.</p><p>When production incidents occur, on-call engineers face significant pressure to quickly identify root causes while managing stakeholder communications. They must analyze data across multiple monitoring tools, review recent deployments, and coordinate response teams. After service restoration, teams often lack bandwidth to transform incident learnings into systematic improvements.</p><p>AWS DevOps Agent is your always-on, autonomous on-call engineer. When issues arise, it automatically correlates data across your operational toolchain, from metrics and logs to recent code deployments in GitHub or GitLab. It identifies probable root causes and recommends targeted mitigations, helping reduce mean time to resolution. The agent also manages incident coordination, using Slack channels for stakeholder updates and maintaining detailed investigation timelines.</p><p>To get started, you connect AWS DevOps Agent to your existing tools through the <a href="https://console.aws.amazon.com">AWS Management Console</a>. The agent works with popular services such as <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a>, <a href="https://www.datadoghq.com/">Datadog</a>, <a href="https://www.dynatrace.com/">Dynatrace</a>, <a href="https://newrelic.com/">New Relic</a>, and <a href="https://www.splunk.com/">Splunk</a> for observability data, while integrating with GitHub Actions and GitLab CI/CD to track deployments and their impact on your cloud resources. 
Through the bring your own (BYO) <a href="https://modelcontextprotocol.io/docs/getting-started/intro">Model Context Protocol (MCP)</a> server capability, you can also integrate additional tools, such as your organization’s custom tools, specialized platforms, or open source observability solutions like <a href="https://grafana.com/">Grafana</a> and <a href="https://prometheus.io/">Prometheus</a>, into your investigations.</p><p>The agent acts as a virtual team member and can be configured to automatically respond to incidents from your ticketing systems. It includes built-in support for <a href="https://www.servicenow.com/">ServiceNow</a>, and through configurable <a href="https://en.wikipedia.org/wiki/Webhook">webhooks</a>, can respond to events from other incident management tools like <a href="https://www.pagerduty.com/">PagerDuty</a>. As investigations progress, the agent updates tickets and relevant Slack channels with its findings. All of this is powered by an intelligent application topology the agent builds—a comprehensive map of your system components and their interactions, including deployment history that helps identify potential deployment-related causes during investigations.</p><p><strong>Let me show you how it works<br /></strong> To demonstrate, I deployed a straightforward <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> function that intentionally generates errors when invoked, as part of an <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a> stack.</p><p><strong>Step 1: Create an Agent Space</strong></p><p>An Agent Space defines the scope of what AWS DevOps Agent can access as it performs tasks.</p><p>You can organize Agent Spaces based on your operational model. Some teams align an Agent Space with a single application, others create one per on-call team managing multiple services, and some organizations use a centralized approach.
For this demonstration, I’ll show you how to create an Agent Space for a single application. This setup helps isolate investigations and resources for that specific application, making it easier to track and analyze incidents within its context.</p><p>In the AWS DevOps Agent section of the <a href="https://console.aws.amazon.com">AWS Management Console</a>, I select <strong>Create Agent Space</strong>, enter a name for this space, and create the <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> roles it uses to introspect AWS resources in my AWS account or in other accounts.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-15-01.png"><img class="aligncenter size-large wp-image-101546" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-15-01-1024x570.png" alt="AWS DevOps Agent - Create an Agent Space" width="1024" height="570" /></a>For this demo, I choose to enable the AWS DevOps Agent web app (more about this later); this can also be done at a later stage.</p><p>When ready, I choose <strong>Create</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-15-07.png"><img class="aligncenter size-large wp-image-101547" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-15-07-1024x472.png" alt="AWS DevOps Agent - Enable Web App" width="1024" height="472" /></a>After it has been created, I choose the <strong>Topology</strong> tab.</p><p>This view shows the key resources, entities, and relationships AWS DevOps Agent has selected as a foundation for performing its tasks efficiently. It doesn’t represent everything AWS DevOps Agent can access or see, only what the Agent considers most relevant right now. By default, the Topology includes the AWS resources that are contained in my account.
As your agent completes more tasks, it will discover and add new resources to this list.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-19-12.png"><img class="aligncenter size-large wp-image-101548" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-19-12-1024x650.png" alt="AWS DevOps Agent - Topology" width="1024" height="650" /></a></p><p><strong>Step 2: Configure the AWS DevOps Agent web app for the operators</strong></p><p>The AWS DevOps Agent web app provides a web interface for on-call engineers to manually trigger investigations, view investigation details including relevant topology elements, steer investigations, and ask questions about an investigation.</p><p>I can access the web app directly from my Agent Space in the AWS console by choosing the <strong>Operator access</strong> link. Alternatively, I can use <a href="https://aws.amazon.com/iam/identity-center/">AWS IAM Identity Center</a> to configure user access for my team. IAM Identity Center lets me manage users and groups directly or connect to an identity provider (IdP), providing a centralized way to control who can access the AWS DevOps Agent web app.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-11-28_19-52-39.png"><img class="aligncenter size-large wp-image-102290" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-11-28_19-52-39-1024x545.png" alt="AWS DevOps Agent - web app access" width="1024" height="545" /></a></p><p>At this stage, I have an Agent Space all set up to focus investigations and resources for this specific application, and I’ve enabled the DevOps team to initiate investigations using the web app.</p><p>Now that the one-time setup for this application is done, I start invoking the faulty Lambda function. It generates errors at each invocation.
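The post doesn’t show the faulty function’s code; a hypothetical stand-in that fails on every invocation could be as simple as:

```python
# Hypothetical stand-in for the demo's faulty Lambda function (the post
# doesn't show its code). Every invocation raises, so the function's
# CloudWatch Errors metric climbs and the associated alarm fires.
def lambda_handler(event, context):
    raise RuntimeError("Intentional test exception for the DevOps Agent demo")
```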
The CloudWatch alarm associated with the Lambda errors count transitions to the <strong>ALARM</strong> state. In real life, you might receive an alert from external services, such as ServiceNow. You can configure AWS DevOps Agent to automatically start investigations when receiving such alerts.</p><p>For this demo, I manually start the investigation by selecting <strong>Start Investigation</strong>.</p><p>You can also choose from several preconfigured starting points to quickly begin your investigation: <strong>Latest alarm</strong> to investigate your most recent triggered alarm and analyze the underlying metrics and logs to determine the root cause, <strong>High CPU usage</strong> to investigate high CPU utilization metrics across your compute resources and identify which processes or services are consuming excessive resources, or <strong>Error rate spike</strong> to investigate the recent increase in application error rates by analyzing metrics and application logs and identifying the source of failures.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-11-28_19-55-05.png"><img class="aligncenter size-large wp-image-102291" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-11-28_19-55-05-1024x528.png" alt="AWS DevOps Agent - web app" width="1024" height="528" /></a></p><p>I enter some information, such as <strong>Investigation details</strong>, <strong>Investigation starting point</strong>, the <strong>Date and time of the incident</strong>, and the <strong>AWS Account ID for the incident</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-39-07-v3.png"><img class="aligncenter size-large wp-image-101554" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-39-07-v3-601x1024.png" alt="- web app - start investigation" width="601" height="1024" /></a></p><p>In the AWS DevOps Agent web app,
you can watch the investigation unfold in real time. The agent identifies the application stack. It correlates metrics from CloudWatch, examines logs from CloudWatch Logs or external sources, such as Splunk, reviews recent code changes from GitHub, and analyzes traces from <a href="https://aws.amazon.com/x-ray/">AWS X-Ray</a>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-45-49.png"><img class="aligncenter size-large wp-image-101555" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-45-49-1024x900.png" alt="- web app - application stack" width="1024" height="900" /></a></p><p>It identifies the error patterns and provides a detailed investigation summary. In the context of this demo, the investigation reveals that these are intentional test exceptions, shows the timeline of function invocations leading to the alarm, and even suggests monitoring improvements for error handling.</p><p>The agent uses a dedicated incident channel in Slack, notifies on-call teams if needed, and provides real-time status updates to stakeholders. 
Through the investigation chat interface, you can interact directly with the agent by asking clarifying questions such as “which logs did you analyze?” or steering the investigation by providing additional context, such as “focus on these specific log groups and rerun your analysis.” If you need expert assistance, you can create an AWS Support case with a single click, automatically populating it with the agent’s findings, and engage with AWS Support experts directly through the investigation chat window.</p><p>For this demo, the AWS DevOps Agent correctly identified manual activities in the Lambda console to invoke a function that intentionally triggers errors.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-50-57.png"><img class="aligncenter size-large wp-image-101556" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_08-50-57-1024x697.png" alt="- web app - root cause" width="1024" height="697" /></a></p><p>Beyond incident response, AWS DevOps Agent analyzes my recent incidents to identify high-impact improvements that prevent future issues.</p><p>During active incidents, the agent offers immediate mitigation plans through its incident mitigations tab to help restore service quickly. Mitigation plans consist of specs that provide detailed implementation guidance for developers and agentic development tools like <a href="https://kiro.dev/">Kiro</a>.</p><p>For longer-term resilience, it identifies potential enhancements by examining gaps in observability, infrastructure configurations, and deployment pipelines.
My straightforward demo that triggered intentional errors was not enough to generate relevant recommendations though.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_09-08-36.png"><img class="aligncenter size-large wp-image-101560" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-11-21_09-08-36-1024x390.png" alt="AWS DevOps Agent - web app - recommendations" width="1024" height="390" /></a></p><p>For example, it might detect that a critical service lacks multi-AZ deployment and comprehensive monitoring. The agent then creates detailed recommendations with implementation guidance, considering factors like operational impact and implementation complexity. In an upcoming quick follow-up release, the agent will expand its analysis to include code bugs and testing coverage improvements.</p><p><strong>Availability<br /></strong> You can try AWS DevOps Agent today in the US East (N. Virginia) Region. Although the agent itself runs in US East (N. Virginia) (<code>us-east-1</code>), it can monitor applications deployed in any Region, across multiple AWS accounts.</p><p>During the preview period, you can use AWS DevOps Agent at no charge, but there will be a limit on the number of agent task hours per month.</p><p>As someone who has spent countless nights debugging production issues, I’m particularly excited about how AWS DevOps Agent combines deep operational insights with practical, actionable recommendations. The service helps teams move from reactive firefighting to proactive system improvement.</p><p>To learn more and sign up for the preview, visit <a href="https://aws.amazon.com/devops-agent">AWS DevOps Agent</a>. 
I look forward to hearing how AWS DevOps Agent helps improve your operational efficiency.</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="f7efd344-339b-4e8a-a079-68482fd8aceb" data-title="AWS DevOps Agent helps you accelerate incident response and improve system reliability (preview)" data-url="https://aws.amazon.com/blogs/aws/aws-devops-agent-helps-you-accelerate-incident-response-and-improve-system-reliability-preview/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-devops-agent-helps-you-accelerate-incident-response-and-improve-system-reliability-preview/"/>
    <updated>2025-12-02T17:05:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/accelerate-ai-development-using-amazon-sagemaker-ai-with-serverless-mlflow/</id>
    <title><![CDATA[Accelerate AI development using Amazon SageMaker AI with serverless MLflow]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Since we <a href="https://aws.amazon.com/blogs/aws/manage-ml-and-generative-ai-experiments-using-amazon-sagemaker-with-mlflow/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">announced Amazon SageMaker AI with MLflow in June 2024</a>, our customers have been using MLflow tracking servers to manage their <a href="https://aws.amazon.com/ai/machine-learning/">machine learning (ML)</a> and AI experimentation workflows. Building on this foundation, we’re continuing to evolve the MLflow experience to make experimentation even more accessible.</p><p>Today, I’m excited to announce that <a href="https://aws.amazon.com/sagemaker/ai/experiments/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon SageMaker AI with MLflow</a> now includes a serverless capability that eliminates infrastructure management. This new MLflow capability transforms experiment tracking into an immediate, on-demand experience with automatic scaling that removes the need for capacity planning.</p><p>The shift to zero-infrastructure management fundamentally changes how teams approach AI experimentation—ideas can be tested immediately without infrastructure planning, enabling more iterative and exploratory development workflows.</p><p><strong>Getting started with Amazon SageMaker AI and MLflow<br /></strong> Let me walk you through creating your first serverless MLflow instance.</p><p>I navigate to the <a href="https://console.aws.amazon.com/sagemaker?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon SageMaker AI Studio console</a> and select the <strong>MLflow</strong> application.
The term <strong>MLflow Apps</strong> replaces the previous <strong>MLflow tracking servers</strong> terminology, reflecting the simplified, application-focused approach.</p><p><img class="aligncenter size-full wp-image-101811 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/news-2025-11-sagemaker-mlflow-rev1-2.png" alt="" width="1675" height="954" /></p><p>Here, I can see there’s already a default MLflow App created. This simplified MLflow experience makes it easier for me to start experimenting.</p><p>I choose <strong>Create MLflow App</strong> and enter a name. Here, an <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM) role</a> and an <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> bucket are already configured. I only need to modify them in <strong>Advanced settings</strong> if needed.<br /><img class="aligncenter size-full wp-image-100710 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/news-2025-11-sagemaker-mlflow-1.png" alt="" width="1580" height="754" /></p><p>Here’s where the first major improvement becomes apparent—the creation process completes in approximately 2 minutes. This immediate availability enables rapid experimentation, eliminating the wait time that previously interrupted workflows.</p><p><img class="aligncenter size-full wp-image-100711 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/news-2025-11-sagemaker-mlflow-2.png" alt="" width="1156" height="721" /></p><p>After it’s created, I receive an MLflow <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">Amazon Resource Name (ARN)</a> for connecting from notebooks. The simplified management means no server sizing decisions or capacity planning are required. 
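</p><p>With the App ARN in hand, connecting from a notebook takes only a few lines. Here’s a minimal sketch, assuming the <code>mlflow</code> and <code>sagemaker-mlflow</code> packages are installed; the ARN, experiment name, and logged values are hypothetical:</p><pre class="lang-python">def log_demo_run(mlflow_app_arn):
    """Log a single demo run to a SageMaker AI MLflow App."""
    # Imported lazily so this sketch can be loaded without the package.
    import mlflow

    # With the sagemaker-mlflow plugin, the App ARN works as a tracking URI.
    mlflow.set_tracking_uri(mlflow_app_arn)
    mlflow.set_experiment("demo-experiment")
    with mlflow.start_run() as run:
        mlflow.log_param("learning_rate", 0.01)  # hypothetical values
        mlflow.log_metric("accuracy", 0.93)
        return run.info.run_id

# Example (hypothetical account and App name):
# log_demo_run("arn:aws:sagemaker:us-east-1:123456789012:mlflow-tracking-server/my-mlflow-app")</pre><p>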
I no longer need to choose between different configurations or manage infrastructure capacity, which means I can focus entirely on experimentation. You can learn how to use MLflow SDK at <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/mlflow-track-experiments.html">Integrate MLflow with your environment in the Amazon SageMaker Developer Guide</a>.</p><p><img class="aligncenter size-full wp-image-101027 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/news-2025-11-sagemaker-mlflow-6.png" alt="" width="2424" height="917" /></p><p>With MLflow 3.4 support, I can now access new capabilities for <a href="https://aws.amazon.com/generative-ai/">generative AI</a> development. MLflow Tracing captures detailed execution paths, inputs, outputs, and metadata throughout the development lifecycle, enabling efficient debugging across distributed AI systems.</p><p><img class="aligncenter size-full wp-image-100713 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/news-2025-11-sagemaker-mlflow-4.png" alt="" width="1920" height="734" /></p><p>This new capability also introduces cross-domain access and cross-account access through <a href="https://aws.amazon.com/ram/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Resource Access Manager (AWS RAM)</a> share. This enhanced collaboration means that teams across different AWS domains and accounts can share MLflow instances securely, breaking down organizational silos.<img class="aligncenter size-full wp-image-100714 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/news-2025-11-sagemaker-mlflow-5.png" alt="" width="1177" height="770" /></p><p><strong>Better together: Pipelines integration<br /></strong> <a href="https://aws.amazon.com/sagemaker/ai/pipelines/">Amazon SageMaker Pipelines</a> is integrated with MLflow. 
SageMaker Pipelines is a serverless workflow orchestration service purpose-built for <a href="https://aws.amazon.com/sagemaker/ai/mlops/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">machine learning operations (MLOps) and large language model operations (LLMOps) automation</a>—the practices of deploying, monitoring, and managing ML and LLM models in production. You can easily build, execute, and monitor repeatable end-to-end AI workflows with an intuitive drag-and-drop UI or the Python SDK.</p><p><img class="aligncenter size-full wp-image-101647 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/23/news-2025-11-sagemaker-mlflow-rev1-0.jpg" alt="" width="1737" height="1006" /></p><p>From a pipeline, a default MLflow App will be created if one doesn’t already exist. The experiment name can be defined and metrics, parameters, and artifacts are logged to the MLflow App as defined in your code. SageMaker AI with MLflow is also integrated with familiar SageMaker AI model development capabilities like <a href="https://aws.amazon.com/sagemaker/ai/jumpstart/">SageMaker AI JumpStart</a> and <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html">Model Registry</a>, enabling end-to-end workflow automation from data preparation through model fine-tuning.</p><p><strong>Things to know<br /></strong> Here are key points to note:</p><ul><li><strong>Pricing</strong> – The new serverless MLflow capability is offered at no additional cost. Note there are service limits that apply.</li>
<li><strong>Availability</strong> – This capability is available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Paris, Stockholm), and South America (São Paulo).</li>
<li><strong>Automatic upgrades</strong> – MLflow in-place version upgrades happen automatically, providing access to the latest features without manual migration work or compatibility concerns. The service currently supports MLflow 3.4, including enhanced tracing features.</li>
<li><strong>Migration support</strong> – You can use the open source MLflow export-import tool available at <a href="https://github.com/mlflow/mlflow-export-import">mlflow-export-import</a> to help migrate from existing tracking servers, whether managed by SageMaker AI or self-hosted, to serverless MLflow (MLflow Apps).</li>
</ul><p>Get started with serverless MLflow by visiting <a href="https://aws.amazon.com/sagemaker/ai/studio/">Amazon SageMaker AI Studio</a> and creating your first MLflow App. Serverless MLflow is also supported in SageMaker Unified Studio for additional workflow flexibility.</p><p>Happy experimenting!<br />— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="adccea26-3df0-4984-a4d9-f770f5f22c43" data-title="Accelerate AI development using Amazon SageMaker AI with serverless MLflow" data-url="https://aws.amazon.com/blogs/aws/accelerate-ai-development-using-amazon-sagemaker-ai-with-serverless-mlflow/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/accelerate-ai-development-using-amazon-sagemaker-ai-with-serverless-mlflow/"/>
    <updated>2025-12-02T17:02:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-fsx-for-netapp-ontap-now-integrates-with-amazon-s3-for-seamless-data-access/</id>
    <title><![CDATA[Amazon FSx for NetApp ONTAP now integrates with Amazon S3 for seamless data access]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the ability to access your data in <a href="https://aws.amazon.com/fsx/netapp-ontap/?https://aws.amazon.com/fsx/netapp-ontap/">Amazon FSx for NetApp ONTAP</a> file systems using <a href="https://aws.amazon.com/pm/serv-s3/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a>. With this capability, you can use your enterprise file data to augment generative AI applications with <a href="https://aws.amazon.com/bedrock/knowledge-bases/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Bedrock Knowledge Bases for Retrieval Augmented Generation (RAG)</a>, train <a href="https://aws.amazon.com/ai/machine-learning/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">machine learning (ML)</a> models with <a href="https://aws.amazon.com/sagemaker/?https://aws.amazon.com/sagemaker/">Amazon SageMaker</a>, generate insights with Amazon S3-integrated third-party services, use comprehensive research capabilities in AI-powered business intelligence (BI) tools such as <a href="https://aws.amazon.com/quicksuite/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Quick Suite</a>, and run analyses using Amazon S3-based cloud-native applications, all while your file data continues to reside in your FSx for NetApp ONTAP file system.</p><p>Amazon FSx for NetApp ONTAP is the first fully managed NetApp ONTAP file system in the cloud, making it possible to migrate on-premises applications that rely on NetApp ONTAP or other <a href="https://aws.amazon.com/what-is/nas/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">network-attached storage (NAS)</a> appliances to AWS without having to change how you manage your data. 
FSx for NetApp ONTAP provides the popular capabilities, high performance, and data management APIs of ONTAP file systems with the added benefits of the AWS Cloud, such as simplified management, on-demand scaling, and seamless integration with other AWS services.</p><p>Over the years, AWS has developed a broad range of industry-leading AI, ML, and analytics services and applications that work with data in Amazon S3, which organizations use to innovate faster, discover new insights, and make even better data-driven decisions. However, some organizations want to use these services with their enterprise file data stored in NetApp ONTAP or other NAS appliances.</p><p><strong>How to get started<br /></strong> You can create and attach an S3 Access Point to your FSx for ONTAP file system using the <a href="https://console.aws.amazon.com/fsx/?https://console.aws.amazon.com/fsx/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon FSx console</a>, the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, or the <a href="https://aws.amazon.com/developer/tools/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS SDK</a>.</p><p>I have an existing FSx for ONTAP file system, <code>demo-create-s3access</code>, which I created by following the steps in the <a href="https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/creating-file-systems.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Creating file systems in the FSx for ONTAP documentation</a>. 
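</p><p>If you prefer the SDK route, the underlying operation is <code>CreateAndAttachS3AccessPoint</code>. The helper below only builds a request payload as a sketch; the ONTAP-specific field names are assumptions modeled on the FSx API for other file system types, so verify them against the current AWS SDK reference before relying on them:</p><pre class="lang-python">def build_s3_access_point_request(name, volume_id, uid=0, gid=0):
    """Sketch of a CreateAndAttachS3AccessPoint request for an ONTAP volume.

    The field names below are assumptions; check the boto3 FSx reference.
    """
    return {
        "Name": name,
        "Type": "ONTAP",  # assumed type discriminator
        "OntapConfiguration": {  # assumed configuration block
            "VolumeId": volume_id,
            # Maps to the console's "file system user identity" choice
            "FileSystemIdentity": {
                "Type": "POSIX",
                "PosixUser": {"Uid": uid, "Gid": gid},
            },
        },
    }

# Usage with boto3 (not executed here; IDs are hypothetical):
# boto3.client("fsx").create_and_attach_s3_access_point(
#     **build_s3_access_point_request("my-s3-accesspoint", "fsvol-0123456789abcdef0"))</pre><p>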
Using the Amazon FSx console, I now choose the file system ID <code>fs-0c45b011a7f071d70</code> to access the full details of the file system.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/s3access1.png"><img class="aligncenter size-large wp-image-101213" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/s3access1-1024x241.png" alt="" width="1024" height="241" /></a></p><p>I’ll attach the access point to the volume of the file system. I choose the volume <code>vol1</code> and then select <strong>Create S3 Access Point</strong> from the <strong>Actions</strong> dropdown menu.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/s3access2.png"><img class="aligncenter size-large wp-image-101215" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/s3access2-1024x422.png" alt="" width="1024" height="422" /></a><br />I enter details such as the <strong>access point name</strong>, <a href="https://docs.aws.amazon.com/efs/latest/ug/enforce-identity-access-points.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">the type of <strong>file system user identity</strong></a>, and the <strong>network configuration</strong>, then choose <strong>Create S3 Access Point</strong> to finalize the process.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/s3access3.png"><img class="aligncenter size-full wp-image-101216" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/s3access3.png" alt="" width="889" height="946" /></a><br />After it’s created, the access point <code>my-s3-accesspoint</code> is ready to allow access to the file data stored in my file system <code>demo-create-s3access</code> from Amazon S3. 
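</p><p>Because an access point alias is accepted anywhere S3 APIs accept a bucket name, existing S3 tooling works unchanged. Here’s a small sketch using boto3 (the alias below is hypothetical and would come from the console; boto3 is imported lazily inside the function so the sketch loads without it):</p><pre class="lang-python">def list_via_access_point(alias, prefix=""):
    """List object keys in an FSx volume through its S3 access point alias."""
    import boto3  # lazy import; requires AWS credentials at call time

    s3 = boto3.client("s3")
    keys = []
    # The alias is passed wherever a bucket name would normally go.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=alias, Prefix=prefix):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

# Example with a hypothetical alias:
# list_via_access_point("my-s3-accesspoint-abc123example-ext-s3alias")</pre><p>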
<a href="https://aws.amazon.com/s3/features/access-points/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon S3 Access Points</a> are S3 endpoints that can be attached to Amazon FSx volumes and used to perform Amazon S3 object operations.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/s3access5-3.png"><img class="aligncenter size-large wp-image-102059" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/s3access5-3-1024x378.png" alt="" width="1024" height="378" /></a><br />Using the access point <code>my-s3-accesspoint</code>, I can now bring proprietary data stored in the file system <code>demo-create-s3access</code> to applications that work with Amazon S3, while my file data continues to reside in the FSx for NetApp ONTAP file system and remains accessible through the file protocols.</p><p>For the walkthrough in this post, I’ll integrate with Quick Suite.</p><p><strong>Integrating decades of enterprise file data with the latest AI-powered BI tools on AWS<br /></strong> In the <a href="https://docs.aws.amazon.com/quicksuite/latest/userguide/signing-in.html">Quick Suite Console</a>, in the left navigation pane, I choose <strong>Connections</strong>, then select <strong>Integrations</strong>. Before you begin, make sure that you have the correct permissions to the Amazon S3 AWS resource. 
You can control the AWS resources that Quick Suite can access by <a href="https://docs.aws.amazon.com/quicksuite/latest/userguide/accessing-data-sources.html">following the Amazon Quick Suite user guide</a>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite1.png"><img class="aligncenter size-large wp-image-101484" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite1-1024x506.png" alt="" width="1024" height="506" /></a><br />After I’ve selected the <strong>Amazon S3 integration</strong>, I enter my Amazon S3 Access Point alias as the <strong>S3 bucket URL</strong>, leave the rest of the information as default, then choose <strong>Create and continue</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/qsuite2-1.png"><img class="aligncenter size-large wp-image-102058" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/qsuite2-1-1024x481.png" alt="" width="1024" height="481" /></a><br />I finalize the process by providing the <strong>Name</strong> of the knowledge base, the <strong>Description</strong>, then choose <strong>Create</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite3.png"><img class="aligncenter size-large wp-image-101489" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite3-1024x472.png" alt="" width="1024" height="472" /></a><br />After the knowledge base has been created, it’s automatically synchronized and available for interaction.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite4.png"><img class="aligncenter size-large wp-image-101490" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite4-1024x224.png" 
alt="" width="1024" height="224" /></a><br />I want to learn more about the <a href="https://aws.eu/">AWS European Sovereign Cloud</a>, so I’ve updated the file system (accessed through the S3 Access Point <code>my-s3-accesspoin-iyytkgz83djdjj7abn3u711supfgkuse1b-ext-s3alias</code>) with the AWS whitepaper on this topic. In the chat in Amazon Quick Suite, I ask the first question “<em>do we have any documentation on the europe sovereignty cloud?</em>”. To answer my question, <a href="https://docs.aws.amazon.com/quicksuite/latest/userguide/use-agents.html">the chat agent accesses and analyzes various types of data sources I have permission to use</a>, including uploaded files in my current conversation, spaces I have access to, knowledge bases from my integrations, and more.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite6.png"><img class="aligncenter size-full wp-image-101493" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite6.png" alt="" width="544" height="748" /></a></p><p>When I verify the source, I see that the document I uploaded to my file system is listed as one of the sources.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite7.png"><img class="aligncenter size-full wp-image-101494" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/qsuite7.png" alt="" width="552" height="466" /></a></p><p><strong>Other use cases of Amazon S3 Access Points for Amazon FSx for NetApp ONTAP</strong><br />Earlier, we looked at use cases such as connecting an organization’s proprietary file data to Amazon Quick Suite for advanced business intelligence. 
Additionally, Amazon S3 Access Points for Amazon FSx for NetApp ONTAP can be used to seamlessly integrate enterprise file data with comprehensive analytics services, such as <a href="https://aws.amazon.com/athena/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Athena for serverless SQL queries</a> or <a href="https://aws.amazon.com/glue/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Glue for ETL processing</a>, to name a few.</p><p>Amazon S3 Access Points for Amazon FSx for NetApp ONTAP are also suitable for data access from cloud-native <a href="https://aws.amazon.com/serverless/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">serverless</a> compute workloads and containerized microservices that require flexible access to shared enterprise datasets, such as configuration files, reference data, content libraries, model artifacts, and application assets.</p><p><strong>Now available</strong><br />You can get started today using the Amazon FSx console, AWS CLI, or AWS SDK to attach Amazon S3 Access Points to your Amazon FSx for NetApp ONTAP file systems. The feature is available in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Regions</a>: Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Canada (Central, Calgary), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Israel (Tel Aviv), Middle East (Bahrain, UAE), South America (São Paulo), US East (N. Virginia, Ohio), and US West (N. California, Oregon). You’re billed by Amazon S3 for the requests and data transfer costs through your S3 Access Point, in addition to your standard Amazon FSx charges. 
Learn more on the <a href="https://aws.amazon.com/fsx/netapp-ontap/pricing/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon FSx for NetApp ONTAP pricing page</a>.</p><p>PS: Writing a blog post at AWS is always a team effort, even when you see only one name under the post title. In this case, I want to thank <a href="https://www.linkedin.com/in/luke-miller-1a937a66/">Luke Miller</a>, for his expertise and generous help with technical guidance, which made this overview possible and comprehensive.</p><p>– <a href="https://linkedin.com/in/veliswa-boya">Veliswa Boya</a>.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="e0fc427f-eddd-4358-9b7f-22de78c37f37" data-title="Amazon FSx for NetApp ONTAP now integrates with Amazon S3 for seamless data access" data-url="https://aws.amazon.com/blogs/aws/amazon-fsx-for-netapp-ontap-now-integrates-with-amazon-s3-for-seamless-data-access/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-fsx-for-netapp-ontap-now-integrates-with-amazon-s3-for-seamless-data-access/"/>
    <updated>2025-12-02T16:59:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-amazon-nova-2-lite-a-fast-cost-effective-reasoning-model/</id>
    <title><![CDATA[Introducing Amazon Nova 2 Lite, a fast, cost-effective reasoning model]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re releasing <a href="https://aws.amazon.com/ai/generative-ai/nova/">Amazon Nova 2 Lite</a>, a fast, cost-effective reasoning model for everyday workloads. Available in <a href="https://aws.amazon.com/bedrock">Amazon Bedrock</a>, the model offers industry-leading price performance and helps enterprises and developers build capable, reliable, and efficient agentic AI applications. For organizations that need AI that truly understands their domain, Nova 2 Lite is the best model to <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-forge-build-your-own-frontier-models-using-nova">use with Nova Forge</a> to build their own frontier intelligence.</p><p>Nova 2 Lite supports extended thinking, including step-by-step reasoning and task decomposition, before providing a response or taking action. Extended thinking is off by default to deliver fast, cost-optimized responses, but when deeper analysis is needed, you can turn it on and choose from three thinking budget levels: low, medium, or high, giving you control over the speed, intelligence, and cost tradeoff.</p><p>Nova 2 Lite supports text, image, video, and document inputs and offers a one-million-token context window, enabling expanded reasoning and richer in-context learning. In addition, Nova 2 Lite can be customized for your specific business needs. The model also includes access to two built-in tools: web grounding and a code interpreter. Web grounding retrieves publicly available information with citations, while the code interpreter allows the model to run and evaluate code within the same workflow.</p><p>Amazon Nova 2 Lite demonstrates strong performance across diverse evaluation benchmarks. The model excels in core intelligence across multiple domains including instruction following, math, and video understanding with temporal reasoning. For agentic workflows, Nova 2 Lite shows reliable function calling for task automation and precise UI interaction capabilities. 
The model also demonstrates strong code generation and practical software engineering problem-solving abilities.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/nova-2-lite-bench-4.png"><img class="aligncenter size-full wp-image-102507" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/nova-2-lite-bench-4.png" alt="Amazon Nova 2 Lite benchmarks" width="1694" height="2672" /></a></p><p><strong>Nova 2 Lite is built to meet your company’s needs<br /></strong> Nova 2 Lite can be used for a broad range of your everyday AI tasks. It offers the best combination of price, performance, and speed. Early customers are using Nova 2 Lite for customer service chatbots, document processing, and business process automation.</p><p>Nova 2 Lite supports workloads across many different use cases:</p><ul><li>Business applications – Automate business process workflows, intelligent document processing (IDP), customer support, and web search to improve productivity and outcomes</li>
<li>Software engineering – Generate code, debug, refactor, and migrate systems to accelerate development and increase efficiency</li>
<li>Business intelligence and research – Use long-horizon reasoning and web grounding to analyze internal and external sources, uncover insights, and make informed decisions</li>
</ul><p>For specific requirements, Nova 2 Lite is also available for customization on both Amazon Bedrock and <a href="https://aws.amazon.com/sagemaker/ai">Amazon SageMaker AI</a>.</p><p><strong>Using Amazon Nova 2 Lite<br /></strong> In the <a href="https://console.aws.amazon.com/bedrock">Amazon Bedrock console</a>, you can use the <strong>Chat/Text playground</strong> to quickly test the new model with your prompts. To integrate the model into your applications, you can use any <a href="https://aws.amazon.com/tools/">AWS SDKs</a> with the Amazon Bedrock <a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html">InvokeModel</a> and <a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html">Converse</a> API. Here’s a sample invocation using the <a href="https://aws.amazon.com/sdk-for-python/">AWS SDK for Python (Boto3)</a>.</p><pre class="lang-python">import boto3
AWS_REGION="us-east-1"
MODEL_ID="global.amazon.nova-2-lite-v1:0"
MAX_REASONING_EFFORT="low" # low, medium, high
bedrock_runtime = boto3.client("bedrock-runtime", region_name=AWS_REGION)
# Enable extended thinking for complex problem-solving
response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[{
        "role": "user",
        "content": [{"text": "I need to optimize a logistics network with 5 warehouses, 12 distribution centers, and 200 retail locations. The goal is to minimize total transportation costs while ensuring no location is more than 50 miles from a distribution center. What approach should I take?"}]
    }],
    additionalModelRequestFields={
        "reasoningConfig": {
            "type": "enabled", # enabled, disabled (default)
            "maxReasoningEffort": MAX_REASONING_EFFORT
        }
    }
)
# The response will contain reasoning blocks followed by the final answer
for block in response["output"]["message"]["content"]:
    if "reasoningContent" in block:
        reasoning_text = block["reasoningContent"]["reasoningText"]["text"]
        print(f"Nova's thinking process:\n{reasoning_text}\n")
    elif "text" in block:
    print(f"Final recommendation:\n{block['text']}")</pre><p>You can also use the new model with agentic frameworks that support Amazon Bedrock and deploy the agents using <a href="https://aws.amazon.com/bedrock/agentcore/">Amazon Bedrock AgentCore</a>. In this way, you can build agents for a broad range of tasks. Here’s the sample code for an interactive multi-agent system using the <a href="https://strandsagents.com/">Strands Agents</a> SDK. The agents have access to multiple tools, including read and write file access and the ability to run shell commands.</p><pre class="lang-python">from strands import Agent
from strands.models import BedrockModel
from strands_tools import calculator, editor, file_read, file_write, shell, http_request, graph, swarm, use_agent, think
AWS_REGION="us-east-1"
MODEL_ID="global.amazon.nova-2-lite-v1:0"
MAX_REASONING_EFFORT="low" # low, medium, high
SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Follow the instructions from the user. "
    "To help you with your tasks, you can dynamically create specialized agents and orchestrate complex workflows."
)
bedrock_model = BedrockModel(
    region_name=AWS_REGION,
    model_id=MODEL_ID,
    additional_request_fields={
        "reasoningConfig": {
            "type": "enabled", # enabled, disabled (default)
            "maxReasoningEffort": MAX_REASONING_EFFORT
        }
    }
)
agent = Agent(
    model=bedrock_model,
    system_prompt=SYSTEM_PROMPT,
    tools=[calculator, editor, file_read, file_write, shell, http_request, graph, swarm, use_agent, think]
)
while True:
    try:
        prompt = input("\nEnter your question (or 'quit' to exit): ").strip()
        if prompt.lower() in ['quit', 'exit', 'q']:
            break
        if len(prompt) &gt; 0:
            agent(prompt)
    except KeyboardInterrupt:
        break
    except EOFError:
        break
print("\nGoodbye!")</pre><p><strong>Things to know<br /></strong> Amazon Nova 2 Lite is now available in Amazon Bedrock via global <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html">cross-Region inference</a> in multiple locations. For Regional availability and future roadmap, visit <a href="https://builder.aws.com/capabilities/">AWS Capabilities by Region</a>.</p><p>Nova 2 Lite includes built-in safety controls to promote <a href="https://aws.amazon.com/ai/responsible-ai/">responsible AI</a> use, with content moderation capabilities that help maintain appropriate outputs across a wide range of applications.</p><p>To understand the costs, see <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock pricing</a>. To learn more, visit the <a href="https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html">Amazon Nova User Guide</a>.</p><p>Start building with Nova 2 Lite today. To experiment with the new model, visit the <a href="https://nova.amazon.com/">Amazon Nova interactive website</a>. Try the model in the <a href="https://console.aws.amazon.com/bedrock">Amazon Bedrock console</a>, and <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag/amazon-bedrock?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">share your feedback on AWS re:Post</a>.</p><p>— <a href="https://x.com/danilop">Danilo</a></p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="7e29d5a7-24f2-4407-91e7-285fe33bd6f8" data-title="Introducing Amazon Nova 2 Lite, a fast, cost-effective reasoning model" data-url="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-2-lite-a-fast-cost-effective-reasoning-model/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-2-lite-a-fast-cost-effective-reasoning-model/"/>
    <updated>2025-12-02T16:59:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/new-aws-security-agent-secures-applications-proactively-from-design-to-deployment-preview/</id>
    <title><![CDATA[New AWS Security Agent secures applications proactively from design to deployment (preview)]]></title>
    <summary><![CDATA[<table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing <a href="https://aws.amazon.com/security-agent">AWS Security Agent</a> in preview, a frontier agent that proactively secures your applications throughout the development lifecycle. It conducts automated application security reviews tailored to your organizational requirements and delivers context-aware penetration testing on demand. By continuously validating application security from design to deployment, it helps prevent vulnerabilities early in development.</p><p>Static application security testing (SAST) tools examine code without runtime context, whereas dynamic application security testing (DAST) tools assess running applications without application-level context. Both types of tools are one-dimensional because they don’t understand your application context. They don’t understand how your application is designed, what security threats it faces, and where and how it runs. This forces security teams to manually review everything, creating delays. Penetration testing is even slower—you wait weeks for an external vendor or for your internal security team to find time. When every application requires a manual security review and penetration test, the backlog grows quickly. Applications wait weeks or months for security validation before they can launch. This creates a gap between the frequency of software releases and the frequency of security evaluations. As a result, security is not applied across the entire portfolio of applications, leaving customers exposed as teams knowingly ship vulnerable code to meet deadlines. Over 60 percent of organizations update web applications weekly or more often, while nearly 75 percent test web applications monthly or less often. A <a href="https://checkmarx.com/report-future-of-appsec-2025/">2025 report from Checkmarx</a> found that 81 percent of organizations knowingly deploy vulnerable code to meet delivery deadlines.</p><p>AWS Security Agent is context-aware—it understands your entire application. 
It understands your application design, your code, and your specific security requirements. It continuously and automatically scans for security violations and runs penetration tests on demand, without scheduling. The penetration testing agent creates a customized attack plan informed by the context it has learned from your security requirements, design documents, and source code, and dynamically adapts as it runs based on what it discovers, such as endpoints, status and error codes, and credentials. This surfaces deeper, more sophisticated vulnerabilities before production, helping ensure your application is secure at launch, without delays or surprises.</p><p>“SmugMug is excited to add AWS Security Agent to our automated security portfolio. AWS Security Agent transforms our security ROI by enabling pen test assessments that complete in hours rather than days, at a fraction of manual testing costs. We can now assess our services more frequently, dramatically decreasing the time to identify and address issues earlier in the software development lifecycle,” says Erik Giberti, Sr. Director of Product Engineering at SmugMug.</p><p><strong>Get started with AWS Security Agent</strong><br />AWS Security Agent provides design security review, code security review, and on-demand penetration testing capabilities. Design and code review check organizational security requirements that you define, and penetration testing learns application context from source code and specifications to identify vulnerabilities. To get started, navigate to the <a href="https://console.aws.amazon.com/securityagent/">AWS Security Agent console</a>. The console landing page provides an overview of how AWS Security Agent delivers continuous security assessment across your development lifecycle.</p><p>The <strong>Get started with AWS Security Agent</strong> panel on the right side of the landing page guides you through initial configuration. 
Choose <strong>Set up AWS Security Agent</strong> to create your first agent space and begin performing security reviews on your applications.</p><p><img class="aligncenter wp-image-102309 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/1211213423442380-0-1.png" alt="" width="1305" height="779" /></p><p>Provide an <strong>Agent space name</strong> to identify which agent you’re interacting with across different security assessments. An agent space is an organizational container that represents a distinct application or project you want to secure. Each agent space has its own testing scope, security configuration, and dedicated web application domain. We recommend creating one agent space per application or project to maintain clear boundaries and organized security assessments. You can optionally add a <strong>Description</strong> to provide context about the agent space’s purpose for other administrators.</p><p>When you create the first agent space in the AWS Management Console, AWS creates the Security Agent Web Application. The Security Agent Web Application is where users conduct design reviews and execute penetration tests within the boundaries established by administrators in the console. 
Users select which agent space to work in when conducting design reviews or executing penetration tests.</p><p>During the setup process, AWS Security Agent offers two options for managing user access to the Security Agent Web Application: <strong>Single Sign-On (SSO) with IAM Identity Center</strong>, which enables team-wide SSO access by integrating with <a href="https://aws.amazon.com/iam/identity-center/">AWS IAM Identity Center</a>, or <strong>IAM users</strong>, which allows only <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> users of this AWS account to access the Security Agent Web Application directly through the console and is best for quick setup or access without SSO configuration. When you choose the SSO option, AWS Security Agent creates an IAM Identity Center instance to provide centralized authentication and user management for AppSec team members who will access design reviews, code reviews, and penetration testing capabilities through the Security Agent Web Application.</p><p>The permissions configuration section helps you control how AWS Security Agent accesses other AWS services, APIs, and accounts. You can create a default IAM role that AWS Security Agent will use to access resources, or choose an existing role with appropriate permissions.</p><p>After completing the initial configuration, choose <strong>Set up AWS Security Agent</strong> to create the agent.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/1211213423442380-1.png"><img class="alignnone size-full wp-image-101346" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/1211213423442380-1.png" alt="" width="1924" height="898" /></a></p><p>After creating an agent space, the agent configuration page displays three capability cards: Design review, Code review, and Penetration testing. 
Security requirements aren’t needed to run penetration testing, but if you plan to use the design review or code review capabilities, you can configure which security requirements will guide those assessments. AWS Security Agent includes AWS managed requirements, and you can optionally define custom requirements tailored to your organization. You can also manage which team members have access to the agent.</p><p><strong>Security requirements</strong><br />AWS Security Agent enforces organizational security requirements that you define so that applications comply with your team’s policies and standards. Security requirements specify the controls and policies that your applications must follow during both design and code review phases.</p><p>To manage security requirements, navigate to <strong>Security requirements</strong> in the navigation pane. These requirements are shared across all agent spaces and apply to both design and code reviews.</p><p><strong>Managed security requirements</strong> are based on industry standards and best practices. These requirements are ready to use, maintained by AWS, and you can enable them instantly without configuration.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/1211213423442380-19.png"><img class="alignnone size-full wp-image-101913" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/1211213423442380-19.png" alt="" width="3032" height="1228" /></a></p><p>When creating a custom security requirement, you specify the control name and description that defines the policy. For example, you might create a requirement called <code>Network Segmentation Strategy Defined</code> that requires designs to define clear network segmentation separating workload components into logical layers based on data sensitivity. 
Or you might define <code>Short Session Timeouts for Privileged and PII Access</code> to mandate specific timeout durations for administrative and personally identifiable information (PII) access. Another example is <code>Customer-Managed Encryption Keys Required</code>, which requires designs to specify customer managed <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> keys rather than AWS managed keys for encrypting sensitive data at rest. AWS Security Agent evaluates designs and code against these enabled requirements, identifying policy violations.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/1211213423442380-security-requirements-1.png"><img class="alignnone size-full wp-image-101310" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/1211213423442380-security-requirements-1.png" alt="" width="1915" height="562" /></a></p><p><strong>Design security review</strong><br />The design review capability analyzes architectural documents and product specifications to identify security risks before code is written. AppSec teams upload design documents through the AWS Security Agent console or ingest them from S3 and other connected services. AWS Security Agent assesses compliance with organizational security requirements and provides remediation guidance.</p><p>Before conducting design reviews, confirm you’ve configured the security requirements that AWS Security Agent will check. You can get started with AWS managed security requirements or define custom requirements tailored to your organization, as described in the <strong>Security requirements</strong> section.</p><p>To get started with the <strong>Design review</strong>, choose <strong>Admin access</strong> under <strong>Web app access</strong> to access the web app interface. When logged in, choose <strong>Create design review</strong>. 
Enter a <strong>Design review name</strong> to identify the assessment—for example, when assessing a new feature design that extends your application—and upload up to five design files. Choose <strong>Start design review</strong> to begin the assessment against your enabled security requirements.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/1211213423442380-agent.png"><img class="alignnone size-full wp-image-101312" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/1211213423442380-agent.png" alt="" width="1924" height="805" /></a></p><p>After completing a design review, the design review detail page displays the review status, completion date, and files reviewed in the Details section. The <strong>Findings summary</strong> shows the count of findings across four compliance status categories:</p><ul><li><strong>Non-compliant</strong> – The design violates or inadequately addresses the security requirement.</li>
<li><strong>Insufficient data</strong> – The uploaded files don’t contain enough information to determine compliance.</li>
<li><strong>Compliant</strong> – The design meets the security requirement based on the uploaded documentation.</li>
<li><strong>Not applicable</strong> – The security requirement’s relevance criteria indicate it doesn’t apply to this system design.</li>
</ul><p>The <strong>Findings summary</strong> section helps you quickly assess which security requirements need attention. Non-compliant findings require updates to your design documents, while Insufficient data findings indicate gaps in the documentation where security teams should work with application teams to gather additional clarity before AWS Security Agent can complete the assessment.</p><p>The <strong>Files reviewed</strong> section displays all uploaded documents with options to search and download the original files.</p><p>The <strong>Review findings</strong> section lists each security requirement evaluated during the review along with its compliance status. In this example, the findings include <strong>Network Segmentation Strategy Defined</strong>, <strong>Customer-Managed Encryption Keys Required</strong>, and <strong>Short Session Timeouts for Privileged and PII Access</strong>. These are the custom security requirements defined earlier in the <strong>Security requirements</strong> section. You can search for specific security requirements or filter findings by compliance status to focus on items that require action.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/1211213423442380-ssample.png"><img class="alignnone size-full wp-image-101037" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/1211213423442380-ssample.png" alt="" width="1308" height="935" /></a></p><p>When you choose a specific finding, AWS Security Agent displays detailed justification explaining the compliance status and provides recommended remediation steps. This context-aware analysis helps you understand security concerns specific to your design rather than generic security guidance. For designs with noncompliant findings, you can update your documentation to address the security requirements and create a new design review to validate the improvements. 
You can also choose <strong>Clone this design review</strong> to create a new assessment based on the current configuration or choose <strong>Download report</strong> to export the complete findings for sharing with your team.</p><p>After validating that your application design meets organizational security requirements, the next step is enforcing those same requirements as developers write code.</p><p><strong>Code security review</strong><br />The code review capability analyzes pull requests in GitHub to identify security vulnerabilities and organizational policy violations. AWS Security Agent detects <a href="https://owasp.org/www-project-top-ten/">OWASP Top Ten</a> common vulnerabilities such as SQL injection, cross-site scripting, and inadequate input validation. It also enforces the same organizational security requirements used in design review, verifying that code complies with your team’s policies beyond common vulnerabilities.</p><p>When developers check in new code, AWS Security Agent verifies compliance with organizational security requirements that go beyond common vulnerabilities. For example, if your organization requires audit logs to be retained for no more than 90 days, AWS Security Agent identifies when code configures a 365-day retention period and comments on the pull request with the specific violation. This catches policy violations that traditional security tools miss because the code is technically functional and free of conventional vulnerabilities.</p><p>To enable code review, choose <strong>Enable code review</strong> on the agent configuration page and connect your GitHub repositories. 
You can enable code review for specific repositories or connect repositories without enabling code review if you want to use them for penetration testing context instead.</p><p>For detailed setup instructions, visit the <a href="https://docs.aws.amazon.com/securityagent/latest/userguide/enable-code-review.html">AWS Security Agent documentation</a>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/1211213423442380-code-review-3.png"><img class="alignnone size-full wp-image-101320" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/1211213423442380-code-review-3.png" alt="" width="1920" height="958" /></a></p><p><strong>On-demand penetration testing</strong><br />The on-demand penetration testing capability executes comprehensive security testing to discover and validate vulnerabilities through multistep attack scenarios. AWS Security Agent systematically discovers the application’s attack surface through reconnaissance and endpoint enumeration, then deploys specialized agents to execute security testing across 13 risk categories, including authentication, authorization, and injection attacks. When provided source code, API specifications, and business documentation, AWS Security Agent builds deeper context about the application’s architecture and business rules to generate more targeted test cases. It adapts testing based on application responses and adjusts attack strategies as it discovers new information during the assessment.</p><p>AWS Security Agent tests web applications and APIs against OWASP Top Ten vulnerability types, identifying exploitable issues that static analysis tools miss. For example, while dynamic application security testing (DAST) tools look for direct server-side template injection (SSTI) payloads, AWS Security Agent can combine SSTI attacks with error forcing and debug output analysis to execute more complex exploits. 
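</p><p>To make this vulnerability class concrete, here is a minimal, self-contained Python sketch of template-style injection (an illustrative analogue built on <code>str.format</code>, not a reproduction of the agent’s techniques): rendering attacker-controlled template text gives the attacker access to object attributes the developer never intended to expose.</p>

```python
# Toy illustration of the template-injection class of bugs (hypothetical
# app code, not from AWS Security Agent). A handler that renders
# USER-SUPPLIED text as a template lets user placeholders reach internal
# objects.

class AppConfig:
    SECRET_KEY = "s3cr3t-token"  # internal value, never meant for output

def render_greeting(template_text: str, cfg: AppConfig) -> str:
    # BUG: template_text comes from the user; format() resolves attribute
    # lookups such as {cfg.SECRET_KEY} against the objects passed in.
    return template_text.format(cfg=cfg, name="guest")

def render_safe(user_text: str) -> str:
    # Fix: treat user input as data, never as a template.
    return "Hello, {}!".format(user_text)

cfg = AppConfig()
print(render_greeting("Hello, {name}!", cfg))    # intended use
print(render_greeting("{cfg.SECRET_KEY}", cfg))  # attacker-supplied template leaks the secret
print(render_safe("{cfg.SECRET_KEY}"))           # placeholder stays inert
```

<p>Error forcing builds on the same idea: a probe such as <code>{cfg.missing}</code> raises an <code>AttributeError</code>, and a verbose error page then confirms to the tester which objects are reachable.</p><p>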
AppSec teams define their testing scope—target URLs, authentication details, threat models, and documentation—just as they would when briefing a human penetration tester. Using this understanding, AWS Security Agent develops application context and autonomously executes sophisticated attack chains to discover and validate vulnerabilities. This transforms penetration testing from a periodic bottleneck into a continuous security practice, reducing risk exposure.</p><p>To enable penetration testing, choose <strong>Enable penetration test</strong> on the agent configuration page. You can configure target domains, VPC settings for private endpoints, authentication credentials, and additional context sources such as GitHub repositories or S3 buckets. You must verify ownership of each domain before AWS Security Agent can run penetration testing.</p><p>After enabling the capability, create and run penetration tests through the AWS Security Agent Web Application. For detailed setup and configuration instructions, visit the <a href="https://docs.aws.amazon.com/securityagent/latest/userguide/perform-penetration-test.html">AWS Security Agent documentation</a>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/1211213423442380-18.png"><img class="alignnone size-full wp-image-101905" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/1211213423442380-18.png" alt="" width="2884" height="3633" /></a></p><p>After creating and running a penetration test, the detail page provides an overview of test execution and results. You can run new tests or modify the configuration from this page. The page displays information about the most recent execution, including start time, status, duration, and a summary of discovered vulnerabilities categorized by severity. 
You can also view a history of all previous test runs with their findings summaries.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/22/1211213423442380-4.png"><img class="alignnone size-full wp-image-101637" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/22/1211213423442380-4.png" alt="" width="2660" height="1807" /></a></p><p>For each run, the detail page provides three tabs. The <strong>Penetration test run overview</strong> tab displays high-level information about the execution, including duration and overall status. The <strong>Penetration test logs</strong> tab lists all tasks executed during the penetration test, providing visibility into how AWS Security Agent discovered vulnerabilities, including the security testing actions performed, application responses, and the reasoning behind each test. The <strong>Findings</strong> tab displays all discovered vulnerabilities with complete details, including descriptions, attack reasoning, steps to reproduce, impact, and remediation guidance.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/1211213423442380-12a.png"><img class="alignnone size-full wp-image-101875" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/1211213423442380-12a.png" alt="" width="3644" height="3004" /></a></p><p><strong>Join the preview</strong><br />To get started with AWS Security Agent, visit the AWS Security Agent console and create your first agent to begin automating design reviews, code reviews, and penetration testing across your development lifecycle. During the preview period, AWS Security Agent is free of charge.</p><p>AWS Security Agent is available in the US East (N. 
Virginia) Region.</p><p>To learn more, visit the AWS Security Agent <a href="https://aws.amazon.com/security-agent">product page</a> and <a href="https://docs.aws.amazon.com/securityagent/latest/userguide/what-is.html">technical documentation</a>.</p><a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/new-aws-security-agent-secures-applications-proactively-from-design-to-deployment-preview/"/>
    <updated>2025-12-02T16:58:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-security-hub-now-generally-available-with-near-real-time-analytics-and-risk-prioritization/</id>
    <title><![CDATA[AWS Security Hub now generally available with near real-time analytics and risk prioritization]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, <a href="https://aws.amazon.com/security-hub/">AWS Security Hub</a> is generally available, transforming how security teams identify and respond to critical security risks across their AWS environments. These new capabilities were first announced in <a href="https://aws.amazon.com/blogs/aws/unify-your-security-with-the-new-aws-security-hub-for-risk-prioritization-and-response-at-scale-preview/">preview</a> at <a href="https://reinforce.awsevents.com/">AWS re:Inforce 2025</a>. Security Hub prioritizes your critical security issues and unifies your security operations to help you respond at scale by correlating and enriching signals across multiple AWS security services. Security Hub provides near real-time risk analytics, trends, unified enablement, streamlined pricing, and automated correlation that transforms security signals into actionable insights.</p><p>Organizations deploying multiple security tools need to manually correlate signals across different consoles, creating operational overhead that can delay detection and response times. Security teams use various tools for threat detection, vulnerability management, security posture monitoring, and sensitive data discovery, but extracting value from the findings these tools generate requires significant manual effort to understand relationships and determine priority.</p><p>Security Hub addresses these challenges through built-in integration that unifies your cloud security operations. 
Available for individual accounts or for all accounts in an <a href="https://aws.amazon.com/organizations/">AWS Organizations</a> organization, Security Hub automatically aggregates and correlates signals from <a href="https://aws.amazon.com/guardduty/">Amazon GuardDuty</a>, <a href="https://aws.amazon.com/inspector/">Amazon Inspector</a>, <a href="https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html">AWS Security Hub Cloud Security Posture Management (AWS Security Hub CSPM)</a>, and <a href="https://aws.amazon.com/macie/">Amazon Macie</a>, organizing them by threats, exposures, resources, and security coverage. This unified approach reduces manual correlation work, helping you quickly identify critical issues, understand coverage gaps, and prioritize remediation based on severity and impact.</p><p><strong>What’s new in general availability</strong><br />Since the preview announcement, Security Hub has added several new features.</p><p><strong>Historical trends</strong><br />Security Hub includes a Trends feature in the <strong>Summary</strong> dashboard that provides up to 1 year of historical data for findings and resources across your organization. The Summary dashboard displays an overview of your exposures, threats, resources, and security coverage through customizable widgets that you can add, remove, and arrange based on your operational needs.</p><p>The dashboard includes a <strong>Trends overview</strong> widget that displays period-over-period analysis for day-over-day, week-over-week, and month-over-month comparisons, helping you track whether your security posture is improving or degrading. Trend widgets for <strong>Active threat findings</strong>, <strong>Active exposure findings</strong>, and <strong>Resource trends</strong> provide visualizations of average counts over selectable time periods including 5 days, 30 days, 90 days, 6 months, and 1 year. 
You can filter these visualizations by severity levels such as critical, high, medium, and low, and hover over specific points in time to review detailed counts.</p><p>The <strong>Summary</strong> dashboard also includes widgets that display current exposure summaries prioritized by severity, threat summaries showing malicious or suspicious activity, and resource inventories organized by type and associated findings.</p><p>The <strong>Security coverage</strong> widget helps you identify gaps in your security service deployment across your organization. This widget tracks which AWS accounts and Regions have security services enabled, helping you understand where you might lack visibility into threats, vulnerabilities, misconfigurations, or sensitive data. The widget displays account coverage across security capabilities including vulnerability management by Amazon Inspector, threat detection by GuardDuty, sensitive data discovery by Amazon Macie, and posture management by AWS Security Hub CSPM. Coverage percentages show which security checks passed or failed across your AWS accounts and Regions where Security Hub is enabled.</p><p>You can apply filters to widgets using shared filters that apply across all widgets, finding filters for exposure and threat data, or resource filters for inventory data. You can create and save filter sets using and/or operators to define specific criteria for your security analysis. Dashboard customizations, including saved filter sets and widget layouts, are saved automatically and persist across sessions.</p><p>If you configure cross-Region aggregation, the <strong>Summary</strong> dashboard includes findings from all linked Regions when viewing from your home Region. For delegated administrator accounts in AWS Organizations, data includes findings for both the administrator account and member accounts. Security Hub retains trends data for 1 year from the date findings are generated. 
After 1 year, trends data is automatically deleted.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/1211216165556962-5.png"><img class="alignnone size-full wp-image-101152" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/1211216165556962-5.png" alt="" width="1924" height="1892" /></a></p><p><strong>Near real-time risk analytics<br /></strong> Security Hub now calculates exposures in near real time and includes threat correlation from GuardDuty alongside existing vulnerability and misconfiguration analysis. When GuardDuty detects threats, Amazon Inspector identifies vulnerabilities, or AWS Security Hub CSPM discovers misconfigurations, Security Hub automatically correlates these findings and updates associated exposures. This advancement provides immediate feedback on your security posture, helping you quickly identify new exposures and verify that remediation actions have reduced risk as expected.</p><p>Security Hub correlates findings across AWS Security Hub CSPM, Amazon Inspector, Amazon Macie, Amazon GuardDuty, and other security services to identify exposures that could lead to security incidents. This correlation helps you understand when multiple security issues combine to create critical risk. Security Hub enriches security signals with context by analyzing resource associations, potential impact, and relationships between signals. For example, if Security Hub identifies an <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> bucket containing sensitive data with versioning disabled, Object Lock disabled, and MFA delete disabled, remediating any component triggers automatic recalculation of the exposure, helping you verify remediation effectiveness without waiting for scheduled assessments.</p><p>The <strong>Exposure</strong> page organizes findings by title and severity, helping you focus on critical issues first. 
The page includes an <strong>Overview</strong> section with a trends graph that displays the average count of exposure findings over the last 90 days, segmented by severity level. This visualization helps you track changes in your exposure posture over time and identify patterns in security risk.</p><p>Exposure findings are grouped by title with expandable rows showing the count of affected resources and overall severity. Each exposure title describes the potential security impact, such as “Potential Data Destruction: S3 bucket with versioning, Object Lock, and MFA delete disabled” or “Potential Remote Execution: EC2 instance is reachable from VPC and has software vulnerabilities.” You can filter exposures using saved filter sets or quick filters based on severity levels including critical, high, medium, and low. The interface also provides filtering by account ID, resource type, and accounts, helping you quickly narrow down exposures relevant to specific parts of your infrastructure.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/1211216165556962-6.png"><img class="alignnone size-full wp-image-101153" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/1211216165556962-6.png" alt="" width="1924" height="1430" /></a></p><p>Security Hub generates exposures as soon as findings are available. For example, when you deploy an <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> instance that is publicly accessible and Amazon Inspector detects a highly exploitable vulnerability while AWS Security Hub CSPM identifies the public accessibility configuration, Security Hub automatically correlates these findings to generate an exposure without waiting for a scheduled assessment. 
This near real-time correlation helps you identify critical risks in newly deployed resources and take action before they can be exploited.</p><p>When you select an exposure finding, the details page displays the exposure type, primary resource, Region, account, age, and creation time. The <strong>Overview</strong> section shows contributing traits that represent the security issues directly contributing to the exposure scenario. These traits are organized by categories such as <strong>Reachability</strong>, <strong>Vulnerability</strong>, <strong>Sensitive data</strong>, <strong>Misconfiguration</strong>, and <strong>Assumability</strong>.</p><p>The details page includes a <strong>Potential attack path</strong> tab that provides a visual graph showing how potential attackers could access and take control of your resources. This visualization displays the relationships between the primary resource (such as an EC2 instance), involved resources (such as VPC, subnet, network interface, security group, <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> instance profile, IAM role, IAM policy, and volumes), and contributing traits. The graph helps you understand the complete attack surface and identify which security controls need adjustment.</p><p>The <strong>Traits</strong> tab lists all security issues contributing to the exposure, and the <strong>Resources</strong> tab shows all affected resources. The <strong>Remediation</strong> section provides prioritized guidance with links to documentation, recommending which traits to address first to reduce risk most effectively. 
By using this comprehensive view, you can investigate specific exposures, understand the full context of security risks, and track remediation progress as your team addresses vulnerabilities, misconfigurations, and other security gaps across your environment.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/07/1211216165556962-4.png"><img class="alignnone size-full wp-image-100499" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/07/1211216165556962-4.png" alt="" width="1929" height="1162" /></a></p><p><strong>Expanded partner integrations</strong><br />Security Hub supports integration with Jira and ServiceNow for incident management workflows. When viewing a finding, you can create a ticket in your preferred system directly from the <a href="https://console.aws.amazon.com/securityhub/v2/">AWS Security Hub console</a> with finding details, severity, and recommended remediation steps automatically populated. You can also define <a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-v2-automation-rules.html">automation rules</a> in Security Hub that automatically create tickets in Atlassian’s Jira Service Management and ServiceNow based on criteria you specify, such as severity level, resource type, or finding type. This helps you route critical security issues to your incident response teams without manual intervention.</p><p>Security Hub findings are formatted in the Open Cybersecurity Schema Framework (OCSF), an open-source standard that enables security tools to share data seamlessly. Partners who have built OCSF-format integrations with Security Hub include Cribl, CrowdStrike, Databee, DataDog, Dynatrace, Expel, Graylog, Netskope, Securonix, SentinelOne, Splunk (a Cisco company), Sumo Logic, Tines, Upwind Security, Varonis, DTEX, and Zscaler. 
Additionally, service partners such as Accenture, Caylent, Deloitte, Optiv, PwC, and Wipro can help you adopt Security Hub and the OCSF schema.</p><p>Security Hub also supports automated response workflows through <a href="https://aws.amazon.com/eventbridge">Amazon EventBridge</a>. You can create <a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rules.html">EventBridge rules</a> that identify findings based on criteria you specify and route them to targets such as <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> functions or <a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html">AWS Systems Manager Automation runbooks</a> for processing and remediation. This helps you act on findings programmatically without manual intervention.</p><p><strong>Now available<br /></strong> If you currently use AWS Security Hub CSPM, Amazon GuardDuty, Amazon Inspector, or Amazon Macie, you can access these capabilities by navigating to the <a href="https://console.aws.amazon.com/securityhub/v2/">AWS Security Hub console</a>. If you’re a new customer, you can enable Security Hub through the <a href="https://console.aws.amazon.com/">AWS Management Console</a> and configure the security services appropriate for your workloads. Security Hub automatically consumes findings from enabled services, making the findings available in the unified console and creating correlated exposure findings based on the ingested security data.</p><p>For Regional availability, visit our <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/?p=ngi&amp;loc=4">AWS Services by Region</a> page. Near real-time exposure calculation and the Trends feature are included at no additional charge. Security Hub uses a streamlined, resource-based pricing model that consolidates charges across integrated AWS security services. 
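As a concrete sketch of the EventBridge routing described earlier, the following builds an event pattern that matches high-severity Security Hub findings. The detail-field paths are assumptions (the exact shape depends on the finding format your account emits), and the rule name and Lambda ARN in the comments are hypothetical:

```python
import json

# Hedged sketch: an EventBridge event pattern matching findings emitted by
# Security Hub. "aws.securityhub" is the standard Security Hub event source;
# the detail fields below are illustrative assumptions, not a confirmed schema.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["CRITICAL", "HIGH"]},
        }
    },
}

print(json.dumps(event_pattern, indent=2))

# To register a rule and route matches to a Lambda function, you would pass
# this pattern to EventBridge (not executed here; names/ARN are hypothetical):
#
#   import boto3
#   events = boto3.client("events")
#   events.put_rule(Name="security-hub-critical",
#                   EventPattern=json.dumps(event_pattern))
#   events.put_targets(Rule="security-hub-critical",
#                      Targets=[{"Id": "1", "Arn": lambda_function_arn}])
```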
The console includes a cost estimator to help you plan and forecast security investments across your AWS accounts and Regions before deployment. For detailed information about capabilities, supported integrations, and pricing, visit the AWS Security Hub <a href="https://aws.amazon.com/security-hub/">product page</a> and <a href="https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub-v2.html">technical documentation</a>.</p><a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-security-hub-now-generally-available-with-near-real-time-analytics-and-risk-prioritization/"/>
    <updated>2025-12-02T16:58:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-guardduty-adds-extended-threat-detection-for-amazon-ec2-and-amazon-ecs/</id>
    <title><![CDATA[Amazon GuardDuty adds Extended Threat Detection for Amazon EC2 and Amazon ECS]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing new enhancements to <a href="https://aws.amazon.com/guardduty/">Amazon GuardDuty</a> <a href="https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-extended-threat-detection.html">Extended Threat Detection</a> with the addition of two attack sequence findings for <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> instances and <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a> tasks. These new findings build on the existing Extended Threat Detection capabilities, which already detect sequences involving <a href="https://aws.amazon.com/iam/?nc2=type_a">AWS Identity and Access Management (IAM)</a> credential misuse, unusual <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> bucket activity, and <a href="https://aws.amazon.com/eks/?nc2=type_a">Amazon Elastic Kubernetes Service (Amazon EKS)</a> cluster compromise. By adding coverage for EC2 instance groups and ECS clusters, this launch expands sequence-level visibility to virtual machine and container environments that support the same application. Together, these capabilities provide a more consistent and unified way to detect multistage activity across diverse <a href="https://aws.amazon.com">Amazon Web Services (AWS)</a> workloads.</p><p>Modern cloud environments are dynamic and distributed, often running virtual machines, containers, and serverless workloads at scale. Security teams strive to maintain visibility across these environments and connect related activities that might indicate complex, multistage attack sequences. These sequences can involve multiple steps, such as establishing initial access, maintaining persistence, misusing credentials, or performing unexpected data access, that unfold over time and across different sources. 
GuardDuty Extended Threat Detection automatically links these signals using AI and <a href="https://aws.amazon.com/ai/machine-learning/">machine learning (ML)</a> models trained at AWS scale to build a complete picture of the activity and surface high-confidence insights to help customers prioritize response actions. By combining evidence from diverse sources, this analysis produces high-fidelity, unified findings that would otherwise be difficult to infer from individual events.</p><p><strong class="c6">How it works<br /></strong> Extended Threat Detection analyzes multiple types of security signals, including runtime activity, malware detections, <a href="https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html">VPC Flow Logs</a>, DNS queries, and <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a> events to identify patterns that represent a multistage attack across Amazon EC2 and Amazon ECS workloads. Detection works with the <a href="https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_data-sources.html">GuardDuty foundational plan</a>, and turning on <a href="https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring.html">Runtime Monitoring</a> for EC2 or ECS adds deeper process and network-level telemetry that strengthens signal analysis and increases the completeness of each attack sequence.</p><p>The new attack sequence findings combine runtime and other observed behaviors across the environment into a single critical-severity sequence. 
Each sequence includes an incident summary, a timeline of observed events, mapped MITRE ATT&amp;CK® tactics and techniques, and remediation guidance to help you understand how the activity unfolded and which resources were affected.</p><p>EC2 instances and ECS tasks are often created and replaced automatically through Auto Scaling groups, shared launch templates, <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html">Amazon Machine Images (AMIs)</a>, IAM instance profiles, or cluster-level deployments. Because these resources commonly operate as part of the same application, activity observed across them might originate from a single underlying compromise. The new EC2 and ECS findings analyze these shared attributes and consolidate related signals into one sequence when GuardDuty detects a pattern affecting the group.</p><p>When a sequence is detected, the <a href="https://console.aws.amazon.com/guardduty/home?#/summary">GuardDuty console</a> highlights any critical-severity sequence findings on the Summary page, with the affected EC2 instance group or ECS cluster already identified. 
Selecting a finding opens a consolidated view that shows how the resources are connected, which signals contributed to the sequence, and how the activity progressed over time, helping you quickly understand the scope of impact across virtual machine and container workloads.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/findings-1.png"><img class="aligncenter size-full wp-image-101802" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/findings-1.png" alt="" width="3012" height="1514" /></a></p><p>In addition to viewing sequences in the console, you can also see these findings in <a href="https://aws.amazon.com/security-hub/?nc2=type_a">AWS Security Hub</a>, where they appear on the new exposure dashboards alongside other GuardDuty findings to help you understand your overall security risk in one place. This detailed view establishes the context for interpreting how the analysis brings related signals together into a broader attack sequence.</p><p>Together, the analysis model and grouping logic give you a clearer, consolidated view of activity across virtual machine and container workloads, helping you focus on the events that matter instead of investigating numerous individual findings. By unifying related behaviors into a single sequence, Extended Threat Detection helps you assess the full context of an attack path and prioritize the most urgent remediation actions.</p><p><strong class="c6">Now available</strong><br />Amazon GuardDuty Extended Threat Detection with expanded coverage for EC2 instances and ECS tasks is now available in all <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a> where GuardDuty is offered. 
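If you prefer to work with the new sequence findings programmatically, the GuardDuty `ListFindings` API accepts a severity filter. A minimal sketch follows; the detector ID is a placeholder, and the severity threshold of 7 is an illustrative choice (GuardDuty's high-severity range):

```python
import json

# Illustrative sketch: a GuardDuty FindingCriteria filter matching findings
# with severity 7 and above. The structure mirrors the ListFindings API's
# FindingCriteria parameter.
finding_criteria = {
    "Criterion": {
        "severity": {"Gte": 7},
    }
}

print(json.dumps(finding_criteria))

# With boto3 (not executed here; the detector ID is a placeholder):
#
#   import boto3
#   guardduty = boto3.client("guardduty")
#   response = guardduty.list_findings(
#       DetectorId="YOUR_DETECTOR_ID",
#       FindingCriteria=finding_criteria,
#   )
```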
You can start using this capability today to detect coordinated, multistage activity across virtual machine and container workloads by combining signals from runtime activity, malware execution, and AWS API activity.</p><p>This expansion complements the existing Extended Threat Detection capabilities for Amazon EKS, providing unified visibility into coordinated, multistage activity across your AWS compute environment. To learn more, visit the <a href="https://aws.amazon.com/guardduty/">Amazon GuardDuty product page</a>.</p><p>–<a href="https://www.linkedin.com/in/zhengyubin714/">Betty</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-guardduty-adds-extended-threat-detection-for-amazon-ec2-and-amazon-ecs/"/>
    <updated>2025-12-02T16:57:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-amazon-nova-forge-build-your-own-frontier-models-using-nova/</id>
    <title><![CDATA[Introducing Amazon Nova Forge: Build your own frontier models using Nova]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Organizations are rapidly expanding their use of generative AI across all parts of the business. Applications requiring deep domain expertise or specific business context need models that truly understand their proprietary knowledge, workflows, and unique requirements.</p><p>While techniques like <a href="https://aws.amazon.com/what-is/prompt-engineering/">prompt engineering</a> and <a href="https://aws.amazon.com/what-is/retrieval-augmented-generation/">Retrieval Augmented Generation (RAG)</a> work well for many use cases, they have fundamental limitations when it comes to embedding specialized knowledge into a model’s core understanding. Supervised fine-tuning and reinforcement learning help in customizing the model, but they operate too late in the development lifecycle, layering modifications on top of models that are already fully trained and therefore difficult to steer toward specific domains of interest.</p><p>When organizations attempt deeper customization through <a href="https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune-hyperpod-cpt.html">Continued Pre-Training (CPT)</a> using only their proprietary data, they often encounter catastrophic forgetting, where models lose their foundational capabilities as they learn new content. At the same time, the data, compute, and cost needed for training a model from scratch remain a prohibitive barrier for most organizations.</p><p>Today, we’re introducing <a href="https://aws.amazon.com/ai/generative-ai/nova/">Amazon Nova Forge</a>, a new service to build your own frontier models using Nova. Nova Forge customers can start their development from early model checkpoints, blend their datasets with Amazon Nova-curated training data, and host their custom models securely on AWS. 
Nova Forge is the easiest and most cost-effective way to build your own frontier model.</p><p><strong>Use cases and applications</strong><br />Nova Forge is designed for organizations with access to proprietary or industry-specific data who want to build AI that truly understands their domain. This includes:</p><ul><li><strong>Manufacturing and automation</strong> – Building models that understand specialized processes, equipment data, and industry-specific workflows</li>
<li><strong>Research and development</strong> – Creating models trained on proprietary research data and domain-specific knowledge</li>
<li><strong>Content and media</strong> – Developing models that understand brand voice, content standards, and specific moderation requirements</li>
<li><strong>Specialized industries</strong> – Training models on industry-specific terminology, regulations, and best practices</li>
</ul><p>Depending on the specific use cases, Nova Forge can be used to add differentiated capabilities, enhance task-specific accuracy, reduce costs, and lower latency.</p><p><strong>How Nova Forge works</strong><br />Nova Forge addresses the limitations of current customization approaches by allowing you to start model development from early checkpoints across pre-training, mid-training, and post-training phases. You can blend your proprietary data with Amazon Nova-curated data throughout all training phases, running training using proven recipes on <a href="https://aws.amazon.com/sagemaker/ai">Amazon SageMaker AI</a> fully managed infrastructure. This data mixing approach significantly reduces catastrophic forgetting compared to training with raw data alone, helping preserve foundational skills—including core intelligence, general instruction following capabilities, and safety benefits—while incorporating your specialized knowledge.</p><p>Nova Forge provides the ability to use reward functions in your own environment for <a href="https://aws.amazon.com/what-is/reinforcement-learning/">reinforcement learning (RL)</a>. This allows the model to learn from feedback generated in environments that are representative of your use cases. Beyond single-step evaluations, you can also use your own orchestrator to manage multi-turn rollouts, enabling RL training for complex agent workflows and sequential decision-making tasks. Whether you’re using chemistry tools to score molecular designs, or robotics simulations that reward efficient task completion and penalize collisions, you can connect your proprietary environments directly.</p><p>You can also take advantage of the built-in responsible AI toolkit available in Nova Forge to configure the safety and content moderation settings of your model. 
You can adjust settings to meet your specific business needs in areas like safety, security, and handling of sensitive content.</p><p><strong>Getting started with Nova Forge</strong><br />Nova Forge integrates seamlessly with your existing AWS workflows. You can use the familiar tools and infrastructure in Amazon SageMaker AI to run your training, then import your custom Nova models as private models on <a href="https://aws.amazon.com/bedrock">Amazon Bedrock</a>. This gives you the same security, consistent APIs, and broader AWS integrations as any model in Amazon Bedrock.</p><p>In <a href="https://aws.amazon.com/sagemaker/ai/studio/">Amazon SageMaker Studio</a>, you can now build your frontier model with Amazon Nova.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/nova-forge-sagemaker-console.png"><img class="aligncenter size-full wp-image-102086" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/26/nova-forge-sagemaker-console.png" alt="Amazon Nova Forge in the SageMaker AI console" width="1317" height="1010" /></a></p><p>To start building the model, choose which checkpoint to use: pre-trained, mid-trained, or post-trained. You can also upload your dataset here or use existing datasets.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/30/nova-forge-checkpoint-1.png"><img class="aligncenter size-full wp-image-102391" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/30/nova-forge-checkpoint-1.png" alt="Amazon Nova Forge checkpoints" width="1784" height="1230" /></a></p><p>You can blend your training data by mixing in curated datasets provided by Nova. 
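The blending step just described can be pictured with a toy sketch. This is purely conceptual and not the Nova Forge API; the mixing ratio and dataset names are invented for illustration:

```python
import random

def blend(proprietary, curated, curated_fraction=0.3, seed=42):
    """Toy illustration of data mixing: build a training stream in which
    roughly `curated_fraction` of examples come from a general curated
    corpus, to help guard against catastrophic forgetting."""
    rng = random.Random(seed)
    # Number of curated examples needed so they make up curated_fraction
    # of the final stream.
    n_curated = round(len(proprietary) * curated_fraction / (1 - curated_fraction))
    mixed = list(proprietary) + rng.sample(curated, min(n_curated, len(curated)))
    rng.shuffle(mixed)
    return mixed

# Invented example datasets, for illustration only.
proprietary = [f"domain-doc-{i}" for i in range(7)]
curated = [f"general-doc-{i}" for i in range(100)]
stream = blend(proprietary, curated)
print(len(stream))  # 10: 7 proprietary + 3 curated examples
```

In Nova Forge itself, the equivalent choice is made in the console by selecting curated datasets by domain rather than writing code like this.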
These datasets, categorized by domain, can help your model to preserve general performance and prevent overfitting or catastrophic forgetting.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/01/nova-forge-data-mixing-1.png"><img class="aligncenter size-full wp-image-102428" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/01/nova-forge-data-mixing-1.png" alt="Amazon Nova Forge data mixing" width="1723" height="2654" /></a></p><p>Optionally, you can choose to use Reinforcement Fine-Tuning (RFT) to improve factual accuracy and reduce hallucinations in specific domains.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/30/nova-forge-rft-1.png"><img class="aligncenter size-full wp-image-102392" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/30/nova-forge-rft-1.png" alt="" width="1768" height="1328" /></a></p><p>When training completes, import the model into Amazon Bedrock and start using it in your applications.</p><p><strong>Things to know</strong><br /><a href="https://aws.amazon.com/ai/generative-ai/nova/">Amazon Nova Forge</a> is available in the US East (N. Virginia) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Region</a>. 
The program includes access to multiple Nova model checkpoints, proven training recipes to mix proprietary data with Amazon Nova-curated training data, and integration with Amazon SageMaker AI and Amazon Bedrock.</p><p>Learn more in the <a href="https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html">Amazon Nova User Guide</a> and explore Nova Forge from the <a href="https://console.aws.amazon.com/sagemaker/home">Amazon SageMaker AI console</a>.</p><p>Organizations interested in expert assistance can also reach out to our <a href="https://aws.amazon.com/ai/generative-ai/innovation-center/">Generative AI Innovation Center</a> for additional support with their model development initiatives.</p><p>— <a href="https://x.com/danilop">Danilo</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-forge-build-your-own-frontier-models-using-nova/"/>
    <updated>2025-12-02T16:53:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-amazon-nova-2-sonic-next-generation-speech-to-speech-model-for-conversational-ai/</id>
    <title><![CDATA[Introducing Amazon Nova 2 Sonic: Our new speech-to-speech model for conversational AI]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we are announcing the general availability of <a href="https://aws.amazon.com/ai/generative-ai/nova/speech/">Amazon Nova 2 Sonic</a>, a speech-to-speech foundation model that brings natural, real-time voice conversations to your applications. The model delivers industry-leading conversational quality and pricing, along with best-in-class speech understanding, for developers building voice applications.</p><p>Amazon has been a leader in voice-based technology for over a decade, and earlier this year, we <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-sonic-human-like-voice-conversations-for-generative-ai-applications/">introduced the first generation of Nova Sonic</a> to solve the fundamental challenge of creating truly fluid voice interactions—preserving the acoustic context to adapt the speech response not just to what the user said but to how they said it. With Nova 2 Sonic, we have built on that foundation by making the model more capable and more accessible: improving model intelligence and agentic capabilities, expanding language support, and adding a broad range of new features to provide more intuitive, human-like voice interactions.</p><p>Nova 2 Sonic delivers expressive masculine and feminine voices in each of the supported languages, with native expressivity, natural turn-taking, and seamless handling of user interruptions. 
Human preference evaluations show that listeners consistently favor Nova 2 Sonic output over other leading models for overall listening experience.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/nova-2-sonic-conversation-quality-1.png"><img class="aligncenter size-full wp-image-102502" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/nova-2-sonic-conversation-quality-1.png" alt="Amazon Nova 2 Sonic conversation quality" width="1614" height="2820" /></a></p><p>Nova 2 Sonic delivers strong intelligence and more reliable agentic behavior, supported by improvements across key evaluation benchmarks. The model outperforms other leading conversational AI models on <a href="https://huggingface.co/blog/big-bench-audio-release">Big Bench Audio</a>, an evaluation dataset for assessing reasoning capabilities with audio input. Its <a href="https://www.salesforce.com/blog/bfcl-audio-benchmark/">BFCL benchmark</a> score highlights more accurate and consistent function calling, while <a href="https://arxiv.org/abs/2501.10132">ComplexFuncBench</a> results reflect better handling of multi-step, constraint-heavy tasks. 
We used <a href="https://commonvoice.mozilla.org/">Common Voice</a> to demonstrate improved automatic speech recognition (ASR) accuracy, and <a href="https://arxiv.org/abs/2311.07911">Instruction-Following Evaluation (IFEval)</a> to show higher accuracy in following detailed, structured instructions.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/nova-2-sonic-table-4.png"><img class="aligncenter size-full wp-image-102499" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/nova-2-sonic-table-4.png" alt="Amazon Nova 2 Sonic benchmarks" width="2272" height="1002" /></a></p><p><strong>Improved speech understanding</strong><br />The underlying speech recognition capabilities have been significantly enhanced in Nova 2 Sonic. The model now handles alphanumeric inputs, short utterances, and 8 kHz telephony speech input with improved accuracy. It’s also more robust when dealing with different accents and background noise—critical for real-world deployment scenarios.</p><p><strong>Expanded global reach with polyglot voices</strong><br />One of the most significant updates in Nova 2 Sonic is expanded language support. Beyond the original English, French, Italian, German, and Spanish languages, Nova 2 Sonic now supports Portuguese and Hindi.</p><p>Beyond supporting multiple languages, Nova 2 Sonic introduces polyglot voices—individual voices that can switch between languages within the same conversation. The Tiffany voice, for example, can now speak all supported languages fluidly in a single interaction. This offers advanced <a href="https://en.wikipedia.org/wiki/Code-switching">code-switching</a> (the linguistic term for mixing languages within sentences) capabilities that handle mixed-language sentences naturally. 
For example, the model can respond in the user’s preferred language when the user switches languages from one turn to the next in the same conversational dialog.</p><p>For developers, this means you can build applications that serve global audiences without needing separate voice models for each language. A customer support application could handle a dialogue that starts in English and switches to Spanish mid-conversation, maintaining the same flow and voice characteristics throughout.</p><p><strong>Natural turn-taking</strong><br />Turn-taking has been enhanced with configurable voice activity detection sensitivity. Developers can set this to high, medium, or low depending on their use case. High sensitivity optimizes for the fastest response times, while low sensitivity gives users more time to complete their thoughts. This is useful, for example, for educational applications or to provide conversational AI for users with different communication preferences.</p><p><strong>Seamless crossmodal interactions</strong><br />With crossmodal support, users can switch between text and voice input within the same session. This is valuable for applications where users might want to speak some requests and type others—perhaps speaking a quick question but typing a complex address or technical specification.</p><p>The implementation maintains context across modalities, so a user could start a conversation by typing a question, receive a spoken response, then continue with voice input without losing the current thread. This creates more fluid, flexible interactions that adapt to how users actually want to communicate.</p><p>You can now use the crossmodal feature to prompt the model in text to enunciate a personalized welcome greeting at the beginning of the dialog (to speak first), or use text metadata representing keypad tones to navigate interactive voice response (IVR) applications. 
This is useful, for example, when Nova 2 Sonic makes an outbound call to book a reservation on behalf of the user or to leave a voicemail.</p><p><strong>Advanced multiagent capabilities</strong><br />Nova 2 Sonic introduces asynchronous tool calling that improves how speech-based conversational AI handles complex, multi-step tasks. When the model needs to call external tools or services, it doesn’t pause but continues to respond to new user input while tools run in the background.</p><p>Here’s how this works in practice: A user might ask “What’s the weather like?” and immediately follow up with “What is next on my task list?” Nova 2 Sonic processes both requests, responds immediately, and then provides the weather and task information as the respective tools return their results.</p><p>Just as we naturally handle multiple concurrent topics in a discussion, this capability supports sophisticated interactions that can manage multiple unrelated tasks while maintaining engagement and responsiveness.</p><p><strong>Enhanced telephony and platform integration</strong><br />Recognizing that many conversational AI applications need to work across different communication channels, Nova 2 Sonic now includes direct integration with leading telephony providers including <a href="https://aws.amazon.com/connect/">Amazon Connect</a>, <a href="https://www.vonage.com/">Vonage</a>, <a href="https://www.twilio.com/">Twilio</a>, and <a href="https://www.audiocodes.com/">AudioCodes</a>, and media platforms like <a href="https://livekit.io/">LiveKit</a> and <a href="https://www.pipecat.ai/">Pipecat</a>.</p><p>These integrations handle the complex technical requirements of phone-based interactions, such as audio codec optimization, session lifecycle management, bidirectional input/output event handling, and the acoustic challenges of telephony systems. 
For developers, this means you can deploy Nova 2 Sonic-powered applications directly into existing call center infrastructure or build new phone-based services without managing the underlying telephony complexity.</p><p><strong>Getting started with Nova 2 Sonic</strong><br />Nova 2 Sonic is available through <a href="https://aws.amazon.com/bedrock">Amazon Bedrock</a> using the model ID <code>amazon.nova-2-sonic-v1:0</code>. If you’re already using Nova Sonic in your applications, updating to the new version is straightforward—simply update the model ID in your existing code, and your application will immediately benefit from the enhancements that don’t require additional configuration.</p><p>The model uses the same bidirectional streaming API as the original Nova Sonic, so your existing integration patterns and event handling code will continue to work. New features like crossmodal input and configurable turn-taking are available through additional parameters and events that you can adopt incrementally.</p><p>To get started with the code examples for multiple programming languages, see the <a href="https://github.com/aws-samples/amazon-nova-samples/tree/main/speech-to-speech">Amazon Nova Sonic Speech-to-Speech Model Samples</a>.</p><p><strong>Things to know</strong><br /><a href="https://aws.amazon.com/ai/generative-ai/nova/speech/">Amazon Nova 2 Sonic</a> is available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Stockholm) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a>. For Regional availability and future roadmap, visit <a href="https://builder.aws.com/capabilities/">AWS Capabilities by Region</a>.</p><p>Nova 2 Sonic maintains the industry-leading price performance and low latency of the original Nova Sonic. 
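Because the streaming API is unchanged, moving to Nova 2 Sonic can be as small as swapping one constant. A minimal sketch follows; the helper function is hypothetical, and the original Nova Sonic model ID shown is an assumption (only `amazon.nova-2-sonic-v1:0` comes from this announcement):

```python
# The only change needed for existing Nova Sonic integrations is the model ID.
OLD_MODEL_ID = "amazon.nova-sonic-v1:0"    # assumption: first-generation ID
NEW_MODEL_ID = "amazon.nova-2-sonic-v1:0"  # from the announcement

def select_model_id(use_nova_2: bool = True) -> str:
    """Hypothetical helper: pick the model ID passed to the Bedrock
    bidirectional streaming call; all other integration code stays the same."""
    return NEW_MODEL_ID if use_nova_2 else OLD_MODEL_ID

print(select_model_id())
```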
Pricing information is available on the <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock pricing page</a>.</p><p>The model supports the same robust security and compliance features as other Amazon Bedrock models, including encryption in transit and at rest, <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html">VPC endpoints</a>, and integration with <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> for fine-grained access control.</p><p>Nova 2 Sonic includes built-in safety controls to promote <a href="https://aws.amazon.com/ai/responsible-ai/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">responsible AI</a> use, with content moderation capabilities that help maintain appropriate outputs across a wide range of applications.</p><p>To learn more about Amazon Nova 2 Sonic and start building, check out the <a href="https://docs.aws.amazon.com/nova/latest/userguide/speech.html">Nova Sonic section of the Amazon Nova User Guide</a> for detailed implementation guidance.</p><p>— <a href="https://x.com/danilop">Danilo</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-2-sonic-next-generation-speech-to-speech-model-for-conversational-ai/"/>
    <updated>2025-12-02T16:53:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/build-reliable-ai-agents-for-ui-workflow-automation-with-amazon-nova-act-now-generally-available/</id>
    <title><![CDATA[Build reliable AI agents for UI workflow automation with Amazon Nova Act, now generally available]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p><a href="https://aws.amazon.com/blogs/machine-learning/amazon-nova-act-sdk-preview-path-to-production-for-browser-automation-agents/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Earlier this year</a>, we released a research preview of Nova Act, demonstrating the potential of AI agents to interact with user interfaces and automate complex workflows. Developers experimented with Nova Act and told us they wanted to bring these automation agents to production.</p><p>But bringing agents to production required much more than just model access. Developers were spending significant time orchestrating workflows, refining prompts, choosing the right tools, and stitching together disparate components to achieve reliable automation. The challenge wasn’t just intelligence—it was reliability, integration, and speed to production. So we built a fully integrated solution for production-ready browser automation.</p><p>Today, we’re announcing the general availability of <a href="https://console.aws.amazon.com/nova-act/home?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Nova Act</a>, a new <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> service that helps developers build, deploy, and manage fleets of reliable AI agents for automating production UI workflows. Nova Act delivers over 90% task reliability at scale while offering the fastest time to value and ease of implementation compared to other AI frameworks.</p><p>Here’s a quick look at the Nova Act console.</p><p><img class="aligncenter size-full wp-image-101928 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/2025-news-nova-act-5-1.png" alt="" width="1920" height="968" /></p><p>Nova Act addresses the challenge of building reliable browser automation at enterprise scale. Powered by a custom Amazon Nova 2 Lite model, Nova Act excels at driving browsers, calling APIs, and escalating to humans when needed.
The service has core capabilities for web quality assurance (QA) testing, data entry, data extraction, and checkout flows.</p><p>Most models today are trained in isolation, separate from the orchestrator and actuators that execute tasks, which reduces reliability. Nova Act approaches this differently by using reinforcement learning while the agents run inside custom synthetic environments (“web gyms”) that simulate real-world UIs. This vertical integration across the model, orchestrator, tools, and SDK, all trained together, unlocks higher completion rates at scale. The result is an agentic system that doesn’t merely work occasionally but is reliable at scale, with reasoning and adaptability to handle changes.</p><p><img class="aligncenter wp-image-102486 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/12/02/2025-news-nova-act-rev-2-2.png" alt="" width="4082" height="2027" /></p><p><strong>Getting started with Nova Act<br /></strong> Nova Act provides an integrated developer experience that takes you from prototype to production in hours. Let me walk you through the journey.</p><p><strong>Start in the playground<br /></strong> We begin by visiting <a href="https://nova.amazon.com/act?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">nova.amazon.com/act</a> to access <strong>Nova Act Playground</strong>. There, we can quickly experiment and see Nova Act in action.</p><p><img class="aligncenter size-full wp-image-101895 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/2025-news-nova-act-1.png" alt="" width="1920" height="912" /></p><p>For these tests, we use <a href="https://nova.amazon.com/act/gym?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Nova Act Gym</a>, a simulated browser environment designed for testing Nova Act agents.
We’re using a <a href="https://nova.amazon.com/act/gym/next-dot?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">fictional travel booking website</a> for trips to terrestrial exoplanets.</p><p><img class="aligncenter wp-image-102337 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/2025-news-nova-act-rev-1-1.png" alt="" width="1192" height="1014" /></p><p>Here we can quickly prototype workflows using natural language commands without writing any code. We enter the URL to automate and describe the actions Nova Act needs to perform. We can add additional actions by choosing <strong>Add an action</strong>.</p><p><img class="aligncenter size-full wp-image-102158 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-3.png" alt="" width="1451" height="913" /></p><p>After defining the actions, we run the Nova Act agent in a live browser session. This way, we can validate that the automation approach works as expected.</p><p><img class="aligncenter wp-image-102342 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/2025-news-nova-act-rev-1-2.png" alt="" width="1449" height="913" /></p><p>After we validate the workflow, we can export it to continue development in an <a href="https://aws.amazon.com/what-is/ide/">integrated development environment (IDE)</a> such as <a href="https://code.visualstudio.com/">Visual Studio Code (VS Code)</a>, <a href="https://kiro.dev/">Kiro</a>, or <a href="https://cursor.com/en">Cursor</a>.</p><p><img class="aligncenter size-full wp-image-102160 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-4-1.png" alt="" width="1451" height="913" /></p><p><strong>Refine in the IDE<br /></strong> At this stage, we need to refine the automation in a supported IDE.
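Conceptually, the exported workflow is just the starting URL plus the ordered natural-language actions defined in the playground. A plain-Python stand-in sketches that shape; `run_action` is a hypothetical placeholder for the real SDK call, and none of these names come from the Nova Act API:

```python
# Conceptual stand-in for an exported workflow: each playground action becomes
# one sequential step. run_action is a hypothetical placeholder for the SDK
# call that would actually drive the browser.
from typing import Callable

def run_workflow(start_url: str, actions: list[str],
                 run_action: Callable[[str], None]) -> int:
    """Execute natural-language actions in order; return how many ran."""
    executed = 0
    for step in actions:
        run_action(step)  # in the real service this drives a browser session
        executed += 1
    return executed

log: list[str] = []
n = run_workflow(
    "https://nova.amazon.com/act/gym/next-dot",  # the demo site from this post
    ["Search for trips to a terrestrial exoplanet",
     "Sort results by price",
     "Select the first result"],
    log.append,
)
print(n)  # 3
```

The IDE refinement step then turns each of these steps into something you can test and debug individually.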
We use Kiro and install the <strong>Nova Act</strong> extension plugin.</p><p><img class="aligncenter size-full wp-image-101904 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/2025-news-nova-act-6-1.png" alt="" width="1920" height="1005" /></p><p>The extension provides a notebook-style builder mode where we can test and debug each step individually. The live browser views show exactly what the agent is doing, while execution logs reveal the model’s reasoning and actions. This makes it straightforward to refine the workflow and handle edge cases.</p><p><img class="aligncenter size-full wp-image-101907 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/2025-news-nova-act-7-1.png" alt="" width="1920" height="956" /></p><p>To learn how to use the Nova Act extension in your IDE, visit <a href="https://aws.amazon.com/blogs/aws/accelerate-ai-agent-development-with-the-nova-act-ide-extension/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Accelerate AI agent development with the Nova Act IDE extension in the AWS News Blog</a>. The Nova Act extension includes templates to help you get started quickly with common workflow patterns.</p><p><img class="aligncenter size-full wp-image-102163 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-7.png" alt="" width="1403" height="432" /></p><p>With this release, the Nova Act IDE extension introduces dedicated tabs for authentication, builder mode, deployment, and running workflows—bringing the complete development lifecycle into your IDE. 
While the extension provides the easiest path to production, developers can also use the Nova Act command line interface (CLI) or SDK directly for more advanced deployment configurations.</p><p><img class="aligncenter size-full wp-image-102164 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-5-1.png" alt="" width="1838" height="800" /></p><p><strong>Deploy to AWS<br /></strong> When the workflow is ready for production, we navigate to the <strong>Deploy</strong> tab to deploy directly to AWS. We enter the workflow definition name (which must match the name in the script), select the <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Region</a>, and optionally provide an existing <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> role <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">Amazon Resource Name (ARN)</a>. The extension packages the workflow into a Docker container, pushes it to <a href="https://aws.amazon.com/ecr/">Amazon Elastic Container Registry (Amazon ECR)</a>, creates the necessary IAM roles and <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> buckets, and deploys it to <a href="https://aws.amazon.com/bedrock/agentcore/">Amazon Bedrock AgentCore</a> Runtime.</p><p><img class="aligncenter size-full wp-image-102167 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-9.png" alt="" width="1444" height="971" /></p><p>After it’s deployed, we can monitor the workflow execution through the Nova Act console. We navigate to <strong>Workflow definitions</strong>. 
The console provides observability dashboards, and when workflows need human input, we configure custom dashboards with notifications for supervisors to intervene.</p><p><img class="aligncenter size-full wp-image-102169 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-10-2.png" alt="" width="1689" height="872" /></p><p>Then we select the workflow definition and scroll down to find the workflow run.</p><p><img class="aligncenter size-full wp-image-102171 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-11-1.png" alt="" width="3024" height="1676" /></p><p>Here, we can see more information about the workflow run.</p><p><img class="aligncenter size-full wp-image-102172 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-12.png" alt="" width="3022" height="1571" /></p><p>From here, we track the workflow progress and execution logs. Each step shows the agent’s reasoning, actions, and browser screenshots—the same level of visibility we had while developing in the IDE, now available for monitoring production executions at scale.</p><p><img class="aligncenter size-full wp-image-102173 c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/27/2025-news-nova-act-rev-13.png" alt="" width="2262" height="1571" /></p><p>This straightforward progression from experimentation to production eliminates the weeks typically spent stitching together disparate tools and orchestration logic.</p><p><strong>Better together: Nova Act and Strands Agents<br /></strong> As agent systems mature, the need for specialized agents to work together seamlessly becomes essential.
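To picture the coordination pattern at a high level, here is a plain-Python sketch of an orchestrator that routes sub-tasks to specialized agents. All names are hypothetical; this is not the Strands Agents or Nova Act API, only the general shape of delegating domain-specific steps to purpose-built specialists:

```python
# Conceptual sketch of multi-agent coordination: an orchestrator routes each
# sub-task to the specialist registered for its domain. All names are
# hypothetical; this is not the Strands Agents or Nova Act API.
from typing import Callable

def browser_agent(task: str) -> str:
    # Stand-in for a UI-automation specialist such as Nova Act.
    return f"browser handled: {task}"

def data_agent(task: str) -> str:
    # Stand-in for some other domain specialist.
    return f"data handled: {task}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "browser": browser_agent,
    "data": data_agent,
}

def orchestrate(plan: list[tuple[str, str]]) -> list[str]:
    """Run (domain, task) steps in order, delegating to each specialist."""
    return [SPECIALISTS[domain](task) for domain, task in plan]

results = orchestrate([
    ("browser", "submit the checkout form"),
    ("data", "record the confirmation number"),
])
print(results[0])  # browser handled: submit the checkout form
```

In a real deployment, the orchestration layer would also handle retries, shared state, and hand-offs between agents.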
Nova Act integrates naturally with the <a href="https://strandsagents.com/latest/">Strands Agents</a> framework, so you can build comprehensive multi-agent workflows without custom integration work. Strands provides the orchestration layer for coordinating agent systems across domains, while Nova Act delivers specialized reliability for browser-forward UI automation. This out-of-the-box compatibility reflects how modern agent architectures should work—purpose-built components that integrate to solve complex business problems.</p><p>Developers can use Strands to coordinate complex workflows where Nova Act handles the browser automation components as specialized tools, combining them with other agents. Teams can use this architecture to harness Nova Act’s purpose-built UI automation capabilities within broader agent ecosystems orchestrated by Strands.</p><p><strong>Things to know<br /></strong> Here are key points to note:</p><ul><li><strong>Integration</strong> – Works with Strands Agents framework for building complex multi-agent workflows across domains.</li>
<li><strong>Pricing</strong> – Visit the <a href="https://aws.amazon.com/nova/pricing/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Nova Act pricing page</a> for details.</li>
<li><strong>Nova Act and responsible AI</strong> – Nova Act includes built-in safety controls and content moderation capabilities to promote <a href="https://aws.amazon.com/ai/responsible-ai/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">responsible AI</a> use, incorporating advancements in reasoning, agentic safety, and robustness to adversarial attacks.</li>
<li><strong>Availability</strong> – Amazon Nova Act is now available in the US East (N. Virginia) AWS Region. For the latest Region availability, visit the <a href="https://builder.aws.com/build/capabilities/explore?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Capabilities by Region</a> page.</li>
</ul><p>Get started with Nova Act by visiting <a href="https://nova.amazon.com/act?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">nova.amazon.com/act</a> to obtain your API key and explore the playground.</p><p>Happy automating!<br />— <a>Danilo</a> &amp; <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="a1d172a4-9a14-4c6a-99ea-7e5161544eb2" data-title="Build reliable AI agents for UI workflow automation with Amazon Nova Act, now generally available" data-url="https://aws.amazon.com/blogs/aws/build-reliable-ai-agents-for-ui-workflow-automation-with-amazon-nova-act-now-generally-available/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/build-reliable-ai-agents-for-ui-workflow-automation-with-amazon-nova-act-now-generally-available/"/>
    <updated>2025-12-02T16:41:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-transform-for-mainframe-introduces-reimagine-capabilities-and-automated-testing-functionality/</id>
    <title><![CDATA[AWS Transform for mainframe introduces Reimagine capabilities and automated testing functionality]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>In May 2025, we <a href="https://aws.amazon.com/blogs/aws/accelerate-the-modernization-of-mainframe-and-vmware-workloads-with-aws-transform/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">launched AWS Transform for mainframe</a>, the first agentic AI service for modernizing mainframe workloads at scale. The AI-powered mainframe agent accelerates mainframe modernization by automating complex, resource-intensive tasks across every phase of modernization—from initial assessment to final deployment. You can streamline the migration of legacy mainframe applications, including COBOL, CICS, DB2, and VSAM, to modern cloud environments—cutting modernization timelines from years to months.</p><p>Today, we’re announcing enhanced capabilities in <a href="https://aws.amazon.com/transform/mainframe/">AWS Transform for mainframe</a> that include AI-powered analysis features, support for the Reimagine modernization pattern, and testing automation. These enhancements solve two critical challenges in mainframe modernization: the need to completely transform applications rather than merely move them to the cloud, and the extensive time and expertise required for testing.</p><ul><li><strong>Reimagining mainframe modernization</strong> – This is a new AI-driven approach that completely reimagines the customer’s application architecture using modern patterns or moving from batch processes to real-time functions. By combining the enhanced business logic extraction with new data lineage analysis and automated data dictionary generation from the legacy source code through <a href="https://aws.amazon.com/transform/">AWS Transform</a>, customers transform monolithic mainframe applications written in languages like COBOL into more modern architectural styles, like microservices.</li>
<li><strong>Automated testing</strong> – Customers can use new automated test plan generation, test data collection scripts, and test case automation scripts. AWS Transform for mainframe also provides functional testing tools for data migration, results validation, and terminal connectivity. These AI-powered capabilities work together to accelerate testing timelines and improve accuracy through automation.</li>
</ul><p>Let’s learn more about reimagining mainframe modernization and automated testing capabilities.</p><p><strong class="c6">How to reimagine mainframe modernization</strong><br />We recognize that mainframe modernization is not a one-size-fits-all proposition. Whereas tactical approaches focus on augmentation and maintaining existing systems, strategic modernization offers distinct paths: <strong>Replatform</strong>, <strong>Refactor</strong>, <strong>Replace</strong>, or the new <strong>Reimagine</strong>.</p><p>In the Reimagine pattern, AWS Transform AI-powered analysis combines mainframe system analysis with organizational knowledge to create detailed business and technical documentation and architecture recommendations. This helps preserve critical business logic while enabling modern cloud-native capabilities.</p><p>AWS Transform provides new advanced data analysis capabilities that are essential for successful mainframe modernization, including data lineage analysis and automated data dictionary generation. These features work together to define the structure and meaning of mainframe data, along with its usage and relationships. Customers gain complete visibility into their data landscape, enabling informed decision-making for modernization. Their technical teams can confidently redesign data architectures while preserving critical business logic and relationships.</p><p><img class="aligncenter size-full wp-image-101760" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/2025-aws-transform-mainframe-reimage-workflow.png" alt="" width="2560" height="1115" /></p><p>The Reimagine strategy follows the principle of human-in-the-loop validation, which means that AI-generated application specifications and code from tools such as AWS Transform and <a href="https://kiro.dev">Kiro</a> are continuously validated by domain experts.
This collaborative approach between AI capabilities and human judgment significantly reduces transformation risk while maintaining the speed advantages of AI-powered modernization.</p><p>The pathway has a three-phase methodology to transform legacy mainframe applications into cloud-native microservices:</p><p><img class="aligncenter wp-image-102250 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2025-aws-transform-mainframe-reimage-step-1.png" alt="" width="1091" height="590" /></p><ul><li><strong>Reverse engineering</strong> to extract business logic and rules from existing COBOL or job control language (JCL) code using AWS Transform for mainframe.</li>
<li><strong>Forward engineering</strong> to generate microservice specification, modernized source code, infrastructure as code (IaC), and modernized database.</li>
<li><strong>Deploy and test</strong> to deploy the generated microservices to <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> using IaC and to test the functionality of the modernized application.</li>
</ul><p>Although microservices architecture offers significant benefits for mainframe modernization, it’s crucial to understand that it’s not the best solution for every scenario. The choice of architectural patterns should be driven by the specific requirements and constraints of the system. The key is to select an architecture that aligns with both current needs and future aspirations, recognizing that architectural decisions can evolve over time as organizations mature their cloud-native capabilities.</p><p>The flexible approach supports both do-it-yourself and partner-led development, so you can use your preferred tools while maintaining the integrity of your business processes. You get the benefits of modern cloud architecture while preserving decades of business logic and reducing project risk.</p><p>To learn more about reimagining mainframe modernization, visit the <a href="https://app.storylane.io/share/5lf382pt46nu">interactive demo</a>.</p><p><strong class="c6">Automated testing in action</strong><br />The new automated testing feature supports the IBM z/OS mainframe batch application stack at launch, which helps organizations address a wider range of modernization scenarios while maintaining consistent processes and tooling.</p><p>Here are the new mainframe capabilities:</p><ul><li><strong>Plan test cases</strong> – Create test plans from mainframe code, business logic, and scheduler plans.</li>
<li><strong>Generate test data collection scripts</strong> – Create JCL scripts for data collection from your mainframe based on your test plan.</li>
<li><strong>Generate test automation scripts</strong> – Generate execution scripts to automate testing of modernized applications running in the target AWS environment.</li>
</ul><p>To get started with automated testing, you should set up a workspace, assign a specific role to each user, and invite them to onboard to your workspace. To learn more, visit <a href="https://docs.aws.amazon.com/transform/latest/userguide/getting-started.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Getting started with AWS Transform</a> in the AWS Transform User Guide.</p><p>Choose <strong>Create job</strong> in your workspace. You can see all types of supported transformation jobs. For this example, I select the <strong>Mainframe Modernization</strong> job to modernize mainframe applications.</p><p><img class="aligncenter wp-image-101398 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/2025-aws-transform-mainframe-0-create-job.jpg" alt="" width="2560" height="1067" /></p><p>After a new job is created, you can kick off modernization for test generation. This workflow is sequential, and it is where you answer the AI agent’s questions and provide the necessary input.
You can add your collaborators and specify the resource location where the codebase or documentation is located in your <a href="https://aws.amazon.com/s3/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a> bucket.</p><p>I use a sample application for a credit card management system as the mainframe banking case with the presentation (BMS screens), business logic (COBOL), and data (VSAM/DB2), including online transaction processing and batch jobs.</p><p><img class="aligncenter wp-image-101763 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/2025-aws-transform-mainframe-1-job-plan.jpg" alt="" width="2414" height="1430" /></p><p>After finishing the steps of analyzing code, extracting business logic, decomposing code, and planning migration waves, you can experience new automated testing capabilities such as planning test cases, generating test data collection scripts, and test automation scripts.</p><p><img class="aligncenter wp-image-101764 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/2025-aws-transform-mainframe-2-new-testing.jpg" alt="" width="2412" height="1428" /></p><p>The new testing workflow creates a test plan for your modernization project and generates test data collection scripts. You will have three planning steps:</p><ul><li><strong>Configure test plan inputs</strong> – You can link your test plan to your other job files. The test plan is generated based on analyzing the mainframe application code and can optionally provide more details using the extracted business logic, the technical documentation, the decomposition, and a scheduler plan.</li>
<li><strong>Define test plan scope</strong> – You can define the entry point, the specific program where the application’s execution flow begins. For example, the JCL for a batch job. In the test plan, each functional test case is designed to start the execution from a specific entry point.</li>
<li><strong>Refine test plan</strong> – A test plan is made up of sequential test cases. You can reorder them, add new ones, merge multiple cases, or split one into two on the test case detail page. Batch test cases are composed of a sequence of JCLs following the scheduler plan.</li>
</ul><p>The test data collection step prepares test data from mainframe applications for functional equivalence testing. This step actively generates JCL scripts that will help you gather test data from the sample application’s various data sources (such as VSAM files or DB2 databases) for use in testing the modernized application. The step is designed to create automated scripts that can extract test data from VSAM datasets, query DB2 tables for sample data, collect sequential data sets, and generate data collection workflows. After this step is completed, you’ll have comprehensive test data collection scripts ready to use.</p><p>To learn more about automated testing, visit <a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Modernization of mainframe applications</a> in the AWS Transform User Guide.</p><p><strong class="c6">Now available</strong><br />The new capabilities in AWS Transform for mainframe are available today in all AWS Regions where AWS Transform for mainframe is offered. For Regional availability and future roadmap, visit the <a href="https://builder.aws.com/capabilities/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Capabilities by Region</a> page. Currently, we offer our core features—including assessment and transformation—at no cost to AWS customers. To learn more, visit the <a href="https://aws.amazon.com/transform/pricing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Transform pricing page</a>.</p><p>Give it a try in the <a href="https://console.aws.amazon.com/transform/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Transform console</a>.
To learn more, visit the <a href="https://aws.amazon.com/transform/mainframe/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Transform for mainframe product page</a> and send feedback to <a href="https://repost.aws/tags/TAqR8fKf6YRWSKjeCt5C7cxA/aws-transform-for-mainframe?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for AWS Transform for mainframe</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="32d51d20-9121-475e-a011-94fb78eef679" data-title="AWS Transform for mainframe introduces Reimagine capabilities and automated testing functionality" data-url="https://aws.amazon.com/blogs/aws/aws-transform-for-mainframe-introduces-reimagine-capabilities-and-automated-testing-functionality/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-transform-for-mainframe-introduces-reimagine-capabilities-and-automated-testing-functionality/"/>
    <updated>2025-12-01T20:02:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-transform-announces-full-stack-windows-modernization-capabilities/</id>
    <title><![CDATA[AWS Transform announces full-stack Windows modernization capabilities]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Earlier this year in May, we announced the general availability of <a href="https://aws.amazon.com/transform/net">AWS Transform for .NET</a>, the first agentic AI service for modernizing .NET applications at scale. During the early adoption period of the service, we received valuable feedback indicating that, in addition to .NET application modernization, you would like to modernize SQL Server and legacy UI frameworks. Your applications typically follow a three-tier architecture—presentation tier, application tier, and database tier—and you need a comprehensive solution that can transform all of these tiers in a coordinated way.</p><p>Today, based on your feedback, we’re excited to announce <a href="https://aws.amazon.com/transform/windows">AWS Transform for full-stack Windows modernization</a>, to offload complex, tedious modernization work across the Windows application stack. You can now identify application and database dependencies and modernize them in an orchestrated way through a centralized experience.</p><p>AWS Transform accelerates full-stack Windows modernization by up to five times across application, UI, database, and deployment layers. Along with porting .NET Framework applications to cross-platform .NET, it migrates SQL Server databases to <a href="https://aws.amazon.com/rds/aurora/features/">Amazon Aurora PostgreSQL-Compatible Edition</a> with intelligent stored procedure conversion and dependent application code refactoring. For validation and testing, AWS Transform deploys applications to <a href="https://aws.amazon.com/ec2">Amazon Elastic Compute Cloud (Amazon EC2)</a> Linux or <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a>, and provides customizable <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a> templates and deployment configurations for production use. 
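To give a feel for what stored procedure conversion involves, here is a toy illustration of the class of T-SQL to PostgreSQL rewrites such a migration performs. The three rules shown are well-known equivalences (<code>GETDATE()</code> to <code>NOW()</code>, <code>ISNULL</code> to <code>COALESCE</code>, <code>TOP n</code> to <code>LIMIT n</code>); AWS Transform's actual conversion is far more sophisticated than this sketch:

```python
import re

# Toy illustration of the kind of mechanical T-SQL -> PostgreSQL rewrites a
# stored procedure conversion involves. AWS Transform's actual conversion is
# far more sophisticated; these three rules are just well-known equivalences.
REWRITES = [
    (re.compile(r"\bGETDATE\(\)", re.IGNORECASE), "NOW()"),
    (re.compile(r"\bISNULL\(", re.IGNORECASE), "COALESCE("),
    (re.compile(r"\bSELECT\s+TOP\s+(\d+)\s+(.*)", re.IGNORECASE | re.DOTALL),
     r"SELECT \2 LIMIT \1"),
]

def tsql_to_postgres(sql: str) -> str:
    """Apply each rewrite rule in order to a single statement."""
    for pattern, replacement in REWRITES:
        sql = pattern.sub(replacement, sql)
    return sql

print(tsql_to_postgres("SELECT TOP 1 name FROM users WHERE created < GETDATE()"))
# SELECT name FROM users WHERE created < NOW() LIMIT 1
```

The hard part, which the service automates, is doing this across thousands of procedures while also refactoring the application code that calls them.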
AWS Transform has also added capabilities to modernize ASP.NET Web Forms UI to Blazor.</p><p>There is much to explore, so in this post I’ll provide a first look at AWS Transform for full-stack Windows modernization capabilities across all layers.</p><p><strong>Create a full-stack Windows modernization transformation job</strong><br />AWS Transform connects to your source code repositories and database servers, analyzes application and database dependencies, creates modernization waves, and orchestrates full-stack transformations for each wave.</p><p>To get started with AWS Transform, I first complete the onboarding steps outlined in the <a href="https://docs.aws.amazon.com/transform/latest/userguide/getting-started.html">getting started with AWS Transform user guide</a>. After onboarding, I sign in to the AWS Transform console using my credentials and create a job for full-stack Windows modernization.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/1.-createjob-1-2.png"><img class="aligncenter wp-image-102369 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/1.-createjob-1-2.png" alt="Create a new job for Windows Modernization" width="2698" height="1428" /></a> <a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/1.-createjob-2-2.png"><img class="aligncenter wp-image-102371 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/1.-createjob-2-2.png" alt="Create a new job by choosing SQL Server Database Modernization" width="2750" height="1422" /></a></p><p>After creating the job, I complete the <a href="https://docs.aws.amazon.com/transform/latest/userguide/win-full-stack/sql-server-setup.html">prerequisites</a>.
Then, I configure the <a href="https://docs.aws.amazon.com/transform/latest/userguide/win-full-stack/sql-server-create-job.html">database connector</a> for AWS Transform to securely access SQL Server databases running on Amazon EC2 and <a href="https://aws.amazon.com/rds/">Amazon Relational Database Service (Amazon RDS)</a>. The connector can connect to multiple databases within the same SQL Server instance.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2.-DB-Connector.png"><img class="aligncenter wp-image-102223 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/2.-DB-Connector.png" alt="Create new database connector by adding connector name and AWS Account ID" width="1262" height="560" /></a></p><p>Next, I <a href="https://docs.aws.amazon.com/transform/latest/userguide/dotnet-creating-repo-connector.html">set up a connector</a> to connect to my source code repositories.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/3.-source-code-connector.png"><img class="aligncenter wp-image-102224 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/3.-source-code-connector.png" alt="Add a source code connector by adding Connection name, AWS Account ID and Code Connector Arn" width="1275" height="624" /></a></p><p>Furthermore, I have the option to choose if I would like AWS Transform to deploy the transformed applications. I choose <strong>Yes</strong> and provide the target AWS account ID and AWS Region for deploying the applications. 
The deployment option can be configured later as well.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/4.-deploy-apps.png"><img class="aligncenter wp-image-102225 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/4.-deploy-apps.png" alt="Choose if you would like to deploy transformed apps" width="2366" height="692" /></a></p><p>After the connectors are set up, AWS Transform connects to the resources and runs the validation to verify IAM roles, network settings, and related AWS resources.</p><p>After the successful validation, AWS Transform discovers databases and their associated source code repositories. It identifies dependencies between databases and applications to create waves for transforming related components together. Based on this analysis, AWS Transform creates a wave-based transformation plan.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/5.-start-assessment.png"><img class="aligncenter wp-image-102226 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/5.-start-assessment.png" alt="Start assessment for discovered database and source code repositories" width="1460" height="528" /></a></p><p><strong>Assessing database and dependent applications</strong><br />For the assessment, I review the databases and source code repositories discovered by AWS Transform and choose the appropriate branches for code repositories. 
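</p><p>Conceptually, the wave planning described above resembles taking connected components of a dependency graph: applications and databases that share a dependency belong in the same wave. Here is a minimal, hand-written sketch of that idea using hypothetical application and database names (my own illustration, not AWS Transform’s actual algorithm):</p><pre class="lang-python">from collections import defaultdict

# Hypothetical inventory of applications and the databases they use.
deps = {
    "inventory-api": ["InventoryDB"],
    "inventory-ui": ["InventoryDB"],
    "orders-web": ["OrdersDB"],
}

# Build an undirected graph linking each app to its databases, then take
# connected components: each component is one candidate "wave" of
# related pieces that should be transformed together.
graph = defaultdict(set)
for app, dbs in deps.items():
    for db in dbs:
        graph[app].add(db)
        graph[db].add(app)

seen, waves = set(), []
for node in list(graph):
    if node in seen:
        continue
    stack, wave = [node], set()
    while stack:
        current = stack.pop()
        if current in wave:
            continue
        wave.add(current)
        stack.extend(graph[current])
    seen |= wave
    waves.append(sorted(wave))

print(waves)
# [['InventoryDB', 'inventory-api', 'inventory-ui'], ['OrdersDB', 'orders-web']]</pre><p>Here the two inventory components and their database form one wave, while the orders pair forms a second; tightly coupled pieces move together.</p><p>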
AWS Transform scans these databases and source code repositories, then presents a list of databases along with their dependent .NET applications and transformation complexity.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/6.-start-wave-planning.png"><img class="aligncenter wp-image-102230 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/6.-start-wave-planning.png" alt="Start wave planning of assessed databases and dependent repositories" width="2372" height="936" /></a></p><p>I choose the target databases and repositories for modernization. AWS Transform analyzes these selections and generates a comprehensive <strong>SQL Modernization Assessment Report</strong> with a detailed wave plan. I download the report to review the proposed modernization plan. The report includes an executive summary, wave plan, dependencies between databases and code repositories, and complexity analysis.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/7.-SQL-Summary-report.png"><img class="aligncenter wp-image-102232 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/7.-SQL-Summary-report.png" alt="View SQL Modernization Assessment Report" width="2378" height="1530" /></a></p><p><strong>Wave transformation at scale</strong><br />The wave plan generated by AWS Transform consists of four steps for each wave. First, it converts the SQL Server schema to PostgreSQL. Second, it migrates the data. Third, it transforms the dependent .NET application code to make it PostgreSQL compatible. 
Finally, it deploys the application for testing.</p><p>Before converting the SQL Server schema, I can either create a new PostgreSQL database or choose an existing one as the target database.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/8.-choose-DB-1.png"><img class="aligncenter wp-image-102236 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/8.-choose-DB-1.png" alt="Choose or create target database" width="2384" height="1446" /></a></p><p>After I choose the source and target databases, AWS Transform generates conversion reports for my review. AWS Transform converts the SQL Server schema to PostgreSQL-compatible structures, including tables, indexes, constraints, and stored procedures.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/9.-download-conversion-report.png"><img class="aligncenter wp-image-102237 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/9.-download-conversion-report.png" alt="Download Schema conversion reports" width="2376" height="1506" /></a></p><p>For any schema objects that AWS Transform can’t automatically convert, I can address them manually in the <a href="https://aws.amazon.com/dms/">AWS Database Migration Service (AWS DMS)</a> console. Alternatively, I can fix them in my preferred SQL editor and update the target database instance.</p><p>After completing schema conversion, I can optionally proceed with data migration. AWS Transform uses AWS DMS to migrate data from my SQL Server instance to the PostgreSQL database instance. 
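</p><p>To give a flavor of the dialect gap this conversion bridges, here is a deliberately tiny, hand-written sketch that rewrites two common T-SQL idioms into their PostgreSQL equivalents (an illustration only, nothing like AWS Transform’s actual conversion engine):</p><pre class="lang-python">import re

# Two toy T-SQL to PostgreSQL rewrites (hypothetical example only).
def tsql_to_postgres(sql):
    # T-SQL GETDATE() becomes PostgreSQL NOW()
    sql = re.sub(r"\bGETDATE\(\)", "NOW()", sql, flags=re.IGNORECASE)
    # T-SQL "SELECT TOP n ..." becomes "SELECT ... LIMIT n"
    match = re.match(r"SELECT\s+TOP\s+(\d+)\s+(.*)", sql,
                     flags=re.IGNORECASE | re.DOTALL)
    if match:
        sql = "SELECT " + match.group(2).rstrip().rstrip(";") + " LIMIT " + match.group(1)
    return sql

print(tsql_to_postgres("SELECT TOP 10 id, GETDATE() FROM orders"))
# SELECT id, NOW() FROM orders LIMIT 10</pre><p>A real conversion also covers data types, identity columns, stored procedures, and much more, which is why reviewing the generated conversion reports matters.</p><p>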
I can choose to perform data migration later, after completing all transformations, or work with test data by loading it into my target database.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/10.-migrate-data.png"><img class="aligncenter wp-image-102239 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/10.-migrate-data.png" alt="Choose if you would like to migrate data" width="2380" height="1522" /></a></p><p>The next step is code transformation. I specify a target branch for AWS Transform to upload the transformed code artifacts. AWS Transform updates the codebase to make the application compatible with the converted PostgreSQL database.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/11.-target-branch.png"><img class="aligncenter wp-image-102240 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/11.-target-branch.png" alt="Specify target branch destination for transformed codebase" width="2378" height="1078" /></a></p><p>With this release, AWS Transform for full-stack Windows modernization supports only codebases in .NET 6 or later. For codebases in .NET Framework 3.1+, I first use AWS Transform for .NET to port them to cross-platform .NET. I’ll expand on this in a following section.</p><p>After the conversion is completed, I can view the source and target branches along with their code transformation status. 
I can also download and review the transformation report.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/12-download-code-report.png"><img class="aligncenter wp-image-102241 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/28/12-download-code-report.png" alt="Download transformation report" width="2376" height="1350" /></a></p><p><strong>Modernizing .NET Framework applications with UI layer</strong><br />One major feature we’re releasing today is the modernization of UI frameworks from ASP.NET Web Forms to Blazor. This complements existing support for modernizing model-view-controller (MVC) Razor views to ASP.NET Core Razor views.</p><p>As mentioned previously, if I have a .NET application in legacy .NET Framework, then I continue using <a href="https://aws.amazon.com/blogs/aws/aws-transform-for-net-the-first-agentic-ai-service-for-modernizing-net-applications-at-scale/">AWS Transform for .NET to port it to cross-platform .NET</a>. For legacy applications with UIs built on ASP.NET Web Forms, AWS Transform now modernizes the UI layer to Blazor along with porting the backend code.</p><p>AWS Transform for .NET converts ASP.NET Web Forms projects to Blazor on ASP.NET Core, facilitating the migration of ASP.NET websites to Linux. The UI modernization feature is enabled by default in AWS Transform for .NET on both the AWS Transform web console and Visual Studio extension.</p><p>During the modernization process, AWS Transform handles the conversion of ASPX pages, ASCX custom controls, and code-behind files, implementing them as server-side Blazor components rather than Blazor WebAssembly. The following project and file changes are made during the transformation:</p><table class="c7" border="1" cellpadding="4" style="width: 913px; border-spacing: 4px;"><tbody><tr><td class="c6"><strong>From</strong></td>
<td class="c6"><strong>To</strong></td>
<td><strong>Description</strong></td>
</tr><tr><td>*.aspx, *.ascx</td>
<td>*.razor</td>
<td>.aspx pages and .ascx custom controls become .razor files</td>
</tr><tr><td>Web.config</td>
<td>appsettings.json</td>
<td>Web.config settings become appsettings.json settings</td>
</tr><tr><td>Global.asax</td>
<td>Program.cs</td>
<td>Global.asax code becomes Program.cs code</td>
</tr><tr><td>*.master</td>
<td>*layout.razor</td>
<td>Master files become layout.razor files</td>
</tr></tbody></table><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/web-forms-to-blazor-project-changes.png"><img class="aligncenter wp-image-101702 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/web-forms-to-blazor-project-changes.png" alt="Image showcasing how the specific project files are transformed" width="1155" height="1440" /></a></p><p><strong>Other new features in AWS Transform for .NET</strong><br />Along with UI porting, AWS Transform for .NET has added support for more transformation capabilities and enhanced developer experience. These new features include the following:</p><ul><li><strong>Port to .NET 10 and .NET Standard</strong> – AWS Transform now supports porting to .NET 10, the latest Long-Term Support (LTS) release, which was released on November 11, 2025. It also supports porting class libraries to .NET Standard, a formal specification for a set of APIs that are common across all .NET implementations. Furthermore, AWS Transform is now available with AWS Toolkit for Visual Studio 2026.</li>
<li><strong>Editable transformation report</strong> – After the assessment is complete, you can now view and customize the transformation plan based on your specific requirements and preferences. For example, you can update package replacement details.</li>
<li><strong>Real-time transformation updates with estimated remaining time</strong> – Depending on the size and complexity of the codebase, AWS Transform can take some time to complete the porting. You can now track transformation updates in real time along with the estimated remaining time.</li>
<li><strong>Next steps markdown</strong> – After the transformation is complete, AWS Transform now generates a next steps markdown file with the remaining tasks to complete the porting. You can use this as a revised plan to repeat the transformation with AWS Transform or use AI code companions to complete the porting.</li>
</ul><p><strong>Things to know</strong><br />Here are some additional things to know:</p><ul><li><strong>AWS Regions</strong> – AWS Transform for full-stack Windows modernization is generally available today in the US East (N. Virginia) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">Region</a>. For Regional availability and future roadmap, see the <a href="https://builder.aws.com/capabilities/">AWS Capabilities by Region</a> page.</li>
<li><strong>Pricing</strong> – Currently, there is <a href="https://aws.amazon.com/transform/pricing/">no added charge</a> for Windows modernization features of AWS Transform. Any resources you create or continue to use in your AWS account using the output of AWS Transform are billed according to their standard pricing. For limits and quotas, refer to the <a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-limits.html">AWS Transform User Guide</a>.</li>
<li><strong>SQL Server versions supported</strong> – AWS Transform supports the transformation of SQL Server versions from 2008 R2 through 2022, including all editions (Express, Standard, and Enterprise). SQL Server must be hosted on Amazon RDS or Amazon EC2 in the same Region as AWS Transform.</li>
<li><strong>Entity Framework versions supported</strong> – AWS Transform supports the modernization of Entity Framework versions 6.3 through 6.5 and Entity Framework Core 1.0 through 8.0.</li>
<li><strong>Getting started</strong> – To get started, visit the AWS Transform for full-stack Windows modernization <a href="https://docs.aws.amazon.com/transform/latest/userguide/win-full-stack/windows-full-stack.html">User Guide</a>.</li>
</ul><p>– <a href="https://www.linkedin.com/in/kprasadrao/">Prasad</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-transform-announces-full-stack-windows-modernization-capabilities/"/>
    <updated>2025-12-01T20:01:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-aws-transform-custom-crush-tech-debt-with-ai-powered-code-modernization/</id>
    <title><![CDATA[Introducing AWS Transform custom: Crush tech debt with AI-powered code modernization]]></title>
    <summary><![CDATA[<table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Technical debt is one of the most persistent challenges facing enterprise development teams today. Studies show that organizations spend 20% of their IT budget on technical debt instead of advancing new capabilities. Whether it’s upgrading legacy frameworks, migrating to newer runtime versions, or refactoring outdated code patterns, these essential but repetitive tasks consume valuable developer time that could be spent on innovation.</p><p>Today, we’re excited to announce <a href="https://aws.amazon.com/transform/custom?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Transform custom</a>, a new agent that fundamentally changes how organizations approach modernization at scale. This intelligent agent combines pre-built transformations for Java, Node.js, and Python upgrades with the ability to define custom transformations. By learning specific transformation patterns and automating them across entire codebases, customers using AWS Transform custom have achieved up to 80% reduction in execution time in many cases, freeing developers to focus on innovation.</p><p>You can define transformations using your documentation, natural language descriptions, and code samples. The service then applies these specific patterns consistently across hundreds or thousands of repositories, improving its effectiveness through both explicit feedback and implicit signals like developers’ manual fixes within your transformation projects.</p><p>AWS Transform custom offers both CLI and web interfaces to suit different modernization needs. You can use the CLI to define transformations through natural language interactions and execute them on local codebases, either interactively or autonomously. You can also integrate it into code modernization pipelines or workflows, making it ideal for machine-driven automation. 
Meanwhile, the web interface provides comprehensive campaign management capabilities, helping teams track and coordinate transformation progress across multiple repositories at scale.</p><p><strong>Language and framework modernization</strong><br />AWS Transform supports runtime upgrades without the need to provide additional information, understanding not only the syntax changes required but also the subtle behavioral differences and optimization opportunities that come with newer versions. The same intelligent approach applies to Node.js, Python and Java runtime upgrades, and even extends to infrastructure-level transitions, such as migrating workloads from x86 processors to AWS Graviton.</p><p>It also navigates framework modernization with sophistication. When organizations need to update their Spring Boot applications to take advantage of newer features and security patches, AWS Transform custom doesn’t merely update version numbers but understands the cascading effects of dependency changes, configuration updates, and API modifications.</p><p>For teams facing more dramatic shifts, such as migrating from Angular to React, AWS Transform custom can learn the patterns of component translation, state management conversion, and routing logic transformation that make such migrations successful.</p><p><strong>Infrastructure and enterprise-scale transformations</strong><br />The challenge of keeping up with evolving APIs and SDKs becomes particularly acute in cloud-based environments where services are continuously improving. AWS Transform custom supports <a href="https://builder.aws.com/build/tools?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS SDK</a> updates across a broad spectrum of programming languages that enterprises use including Java, Python, and JavaScript. 
The service understands not only the mechanical aspects of API changes, but also recognizes best practices and optimization opportunities available in newer SDK versions.</p><p><a href="https://aws.amazon.com/what-is/iac/?nc1=h_ls?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Infrastructure as Code</a> transformations represent another critical capability, especially as organizations evaluate different tooling strategies. Whether you’re converting <a href="https://aws.amazon.com/cdk/??trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Cloud Development Kit (AWS CDK)</a> templates to Terraform for standardization purposes, or updating <a href="https://aws.amazon.com/cloudformation/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS CloudFormation</a> configurations to access new service features, AWS Transform custom understands the declarative nature of these tools and can maintain the intent and structure of your infrastructure definitions.</p><p>Beyond these common scenarios, AWS Transform custom excels at addressing the unique, organization-specific code patterns that accumulate over years of development. Every enterprise has its own architectural conventions, utility libraries, and coding standards that need to evolve over time. It can learn these custom patterns and help refactor them systematically so that institutional knowledge and best practices are applied consistently across the entire application portfolio.</p><p>AWS Transform custom is designed with enterprise development workflows in mind, enabling center of excellence teams and system integrators to define and execute organization-wide transformations while application developers focus on reviewing and integrating the transformed code. DevOps engineers can then configure integrations with existing continuous integration and continuous delivery (CI/CD) pipelines and source control systems. 
It also includes pre-built transformations for Java, Node.js and Python runtime updates which can be particularly useful for <a href="https://aws.amazon.com/lambda/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Lambda</a> functions, along with transformations for AWS SDK modernization to help teams get started immediately.</p><p><strong>Getting started<br /></strong> AWS Transform makes complex code transformations manageable through both pre-built and custom transformation capabilities. Let’s start by exploring how to use an existing transformation to address a common modernization challenge: upgrading AWS Lambda functions due to end-of-life (EOL) runtime support.</p><p>For this example, I’ll demonstrate migrating a Python 3.8 Lambda function to Python 3.13, as Python 3.8 reached EOL and is no longer receiving security updates. I’ll use the CLI for this demo, but I encourage you to also explore the web interface’s powerful campaign management capabilities.</p><p>First, I use the command <code>atx custom def list</code> to explore the available transformation definitions. You can also access this functionality through a conversational interface by typing only <code>atx</code> instead of issuing the command directly, if you prefer.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/Screenshot-2025-11-17-at-17.41.55.png"><img class="aligncenter size-full wp-image-101172" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/Screenshot-2025-11-17-at-17.41.55.png" alt="" width="644" height="30" /></a></p><p>This command displays all available transformations, including both AWS-managed defaults and any existing custom transformations created by users in my organization. AWS-managed transformations are identified by the AWS/ prefix, indicating they’re maintained and updated by AWS. 
In the results, I can see several options such as AWS/java-version-upgrade for Java runtime modernization, AWS/python-boto2-to-boto3-migration for updating Python AWS SDK usage, AWS/nodejs-version-upgrade for Node.js runtime updates.</p><p>For my Python 3.8 to 3.13 migration, I’ll use the AWS/python-version-upgrade transformation.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/marked-available-aws-transformations-1.png"><img class="aligncenter size-full wp-image-102336" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/29/marked-available-aws-transformations-1.png" alt="" width="3572" height="1722" /></a></p><p>You run a migration by using the <code>atx custom def exec</code> command.  Please consult <a href="https://docs.aws.amazon.com/transform/latest/userguide/custom.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">the documentation</a> for more details about the command and all its options. Here, I run it against my project repository specifying the transformation name. I also add pytest to run unit tests for validation. More importantly, I use the <code>additionalPlanContext</code> section in the  <code>--configuration</code> input to specify which Python version I want to upgrade to. For reference, here’s the command I have for my demo (I’ve used multiple lines and indented it here for clarity):</p><pre class="lang-bash">atx custom def exec 
-p /mnt/c/Users/vasudeve/Documents/Work/Projects/ATX/lambda/todoapilambda 
-n AWS/python-version-upgrade
-C "pytest" 
--configuration 
    "additionalPlanContext= The target Python version to upgrade to is Python 3.13" 
-x -t</pre><p>AWS Transform then starts the migration process. It analyzes my Lambda function code, identifies Python 3.8-specific patterns, and automatically applies the necessary changes for Python 3.13 compatibility. This includes updating syntax for deprecated features, modifying import statements, and adjusting any version-specific behaviors.</p><p>After execution, it provides a comprehensive summary including a report on dependencies updated in requirements.txt with Python 3.13-compatible package versions, instances of deprecated syntax replaced with current equivalents, updated runtime configuration notes for AWS Lambda deployment, suggested test cases to validate the migration, and more. It also provides a body of evidence that serves as proof of success.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/23/Screenshot-2025-11-23-at-18.11.46.png"><img class="aligncenter size-full wp-image-101663" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/23/Screenshot-2025-11-23-at-18.11.46.png" alt="" width="1650" height="622" /></a></p><p>The migrated code lives in a local branch so you can review and merge when satisfied. Alternatively, you can keep providing feedback and iterating until you’re happy that the migration is fully complete and meets your expectations.</p><p>This automated process turns what would typically require hours of manual work into a streamlined, consistent upgrade that maintains code quality while ensuring compatibility with the newer Python runtime.</p><p><strong>Creating a new custom transformation<br /></strong> While AWS-managed transformations handle common scenarios effectively, you can also create custom transformations tailored to your organization’s specific needs. 
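</p><p>Before moving on to custom transformations, here is a hand-written illustration (my own sketch, not actual AWS Transform output) of the kind of change such a runtime upgrade typically makes, for example replacing <code>typing.List</code> and <code>typing.Dict</code> annotations with the builtin generics available since Python 3.9, and using <code>str.removeprefix</code> instead of manual slicing:</p><pre class="lang-python"># Python 3.8 style, before the upgrade:
#   from typing import Dict, List
#   def tag_totals(rows: List[Dict[str, int]]) -> Dict[str, int]: ...
#   key = tag[len("tag:"):] if tag.startswith("tag:") else tag

# Python 3.9+ style, after the upgrade: builtin generics and str.removeprefix.
def tag_totals(rows: list[dict[str, int]]) -> dict[str, int]:
    totals: dict[str, int] = {}
    for row in rows:
        for tag, count in row.items():
            key = tag.removeprefix("tag:")  # replaces the slicing idiom above
            totals[key] = totals.get(key, 0) + count
    return totals

print(tag_totals([{"tag:a": 1}, {"tag:a": 2, "tag:b": 5}]))
# {'a': 3, 'b': 5}</pre><p>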
Let’s explore how to create a custom transformation to see how AWS Transform learns from your specific requirements.</p><p>I type <code>atx</code> to initialize the atx cli and start the process.</p><p>The first thing it asks me is if I want to use one of the existing transformations or create a new one. I choose to create a new one. Notice that from here on the whole conversation takes place using natural language, not commands. I typed <code>new one</code> but I could have typed <code>I want to create a new one</code> and it would’ve understood it exactly the same.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/13/creating-a-new-transformation.png"><img class="aligncenter size-full wp-image-99798" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/13/creating-a-new-transformation.png" alt="" width="1072" height="140" /></a></p><p>It then prompts me to provide more information about the kind of transformation I’d like to perform. For this demo, I’m going to migrate an Angular application, so I type <code>angular 16 to 19 application migration</code> which prompts the CLI to search for all transformations available for this type of migration. In my case, my team has already created and made available a few Angular migrations, so it shows me those. However, it warns me that none of them is an exact match to my specific request for migrating from Angular 16 to 19. 
It then asks if I’d like to select from one of the existing transformations listed or create a custom one.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/fixed-found-angular-migrations.png"><img class="aligncenter size-full wp-image-100748" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/fixed-found-angular-migrations.png" alt="" width="2488" height="562" /></a></p><p>I choose to create a custom one by continuing to use natural language and typing <code>create a new one</code> as a command. Again, this could be any variation of that statement provided that you indicate your intentions clearly. It follows by asking me a few questions including whether I have any useful documentation, example code or migration guides that I can provide to help customize the transformation plan.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/image-21-6.png"><img class="aligncenter size-full wp-image-101882" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/image-21-6.png" alt="" width="1410" height="405" /></a></p><p>For this demo, I’m only going to rely on AWS Transform to provide me with good defaults. I type <code>I don't have these details. Follow best practices.</code> and the CLI responds by telling me that it will create a comprehensive transformation definition for migrating Angular 16 to Angular 19.  Of course, I relied on the pre-trained data to generate results based on best practices. As usual, the recommendation is to provide as much information and relevant data as possible at this stage of the process for better results. However, you don’t need to have all the data upfront. 
You can keep providing data at any time as you iterate through the process of creating the custom transformation definition.</p><p>The transformation definition is generated as a markup file containing a summary and a comprehensive sequence of implementation steps grouped logically into phases such as premigration preparation, processing and partitioning, static dependency analysis, searching and applying specific transformation rules, and step-by-step migration and iterative validation.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/transformation-definition-created.png"><img class="aligncenter size-full wp-image-100752" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/transformation-definition-created.png" alt="" width="2476" height="1052" /></a></p><p>It’s interesting to see that AWS Transform opted for the best practice of incremental framework updates, creating steps that migrate the application first to 17, then 18, then 19, rather than jumping directly from 16 to 19, to minimize issues.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-incremental-framework-update-copy.png"><img class="aligncenter size-full wp-image-100754" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-incremental-framework-update-copy.png" alt="" width="1434" height="1090" /></a></p><p>Note that the plan includes various stages of testing and verification to confirm that each phase can be concluded with confidence. 
At the very end, it also includes a final validation stage listing exit criteria, a comprehensive set of tests against all aspects of the application, that will be used to accept the migration as successfully complete.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-final-verification-and-validation.png"><img class="aligncenter size-full wp-image-100756" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-final-verification-and-validation.png" alt="" width="1656" height="1074" /></a></p><p>After the transformation definition is created, AWS Transform asks me what I would like to do next. I can choose to review or modify the transformation definition, and I can iterate through this process as much as I need until I arrive at one that I’m satisfied with. I could also apply this transformation definition to an Angular codebase right away. However, first I want to make this transformation available to my team members as well as myself so we can all use it again in the future. So, I choose option 4 to publish this transformation to the registry.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-adding-transformation-to-the-registry.png"><img class="aligncenter size-full wp-image-100757" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-adding-transformation-to-the-registry.png" alt="" width="1032" height="352" /></a></p><p>This custom transformation needs a name and a description of its objective, which is displayed when users browse the registry. AWS Transform automatically extracts those from context for me and asks me if I would like to modify them before going ahead. 
I like the sensible default of “Angular-16-to-19-Migration”, and the objective is clearly stated, so I choose to accept the suggestions and publish it by answering with <code>yes, looks good</code>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/suggestions-for-transformation-name-and-objective-2.png"><img class="aligncenter size-full wp-image-100760" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/suggestions-for-transformation-name-and-objective-2.png" alt="" width="2488" height="452" /></a></p><p>Now that the transformation definition is created and published, I can use it and run it multiple times against any code repository. Let’s apply the transformation to a code repository with a project written in Angular 16. I now choose option 1 from the follow-up prompt and the CLI asks me for the path in my file system to the application that I want to migrate and, optionally, the build command that it should use.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-choosing-to-apply-transformation-from-the-menu.png"><img class="aligncenter size-full wp-image-100761" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-choosing-to-apply-transformation-from-the-menu.png" alt="" width="2490" height="878" /></a></p><p>After I provide that information, AWS Transform proceeds to analyze the code base and formulate a thorough step-by-step transformation plan based on the definition created earlier. After it’s done, it creates a JSON file containing the detailed migration plan specifically designed for applying our transformation definition to this code base. 
Similar to the process of creating the transformation definition, you can review and iterate through this plan as much as you need, providing it with feedback and adjusting it to any specific requirements you might have.</p><p>When I’m ready to accept the plan, I can use natural language to tell AWS Transform that we can start the migration process. I type <code>looks good, proceed</code> and watch the progress in my shell as it starts executing the plan and making the changes to my code base one step at a time.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/migration-has-started-1.png"><img class="aligncenter size-full wp-image-100763" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/migration-has-started-1.png" alt="" width="2006" height="964" /></a></p><p>The time it takes will vary depending on the complexity of the application. In my case, it took a few minutes to complete. After it has finished, it provides me with a transformation summary and the status of each one of the exit criteria that were included in the final verification phase of the plan alongside all the evidence to support the reported status. 
For example, the <strong>Application Build – Production</strong> criterion was listed as passed, and the evidence provided included the incremental Git commits, the time it took to complete the production build, the bundle size, the build output message, and details about all the output files created.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-transformation-is-finished.png"><img class="aligncenter size-full wp-image-100764" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/marked-transformation-is-finished.png" alt="" width="1377" height="508" /></a></p><p><strong>Conclusion<br /></strong> AWS Transform represents a fundamental shift in how organizations approach code modernization and technical debt. The service helps transform what was once a fragmented, team-by-team effort into a unified, intelligent capability that eliminates knowledge silos, keeping your best practices and institutional knowledge available as scalable assets across the entire organization. This helps accelerate modernization initiatives while freeing developers to spend more time on innovation and driving business value instead of on repetitive maintenance and modernization tasks.</p><p><strong>Things to know</strong></p><p><a href="https://aws.amazon.com/transform/custom?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Transform custom</a> is now generally available. 
Visit the <a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">get started guide</a> to start your first transformation campaign or check out <a href="https://docs.aws.amazon.com/transform/latest/userguide/custom.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">the documentation</a> to learn more about setting up custom transformation definitions.</p>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-aws-transform-custom-crush-tech-debt-with-ai-powered-code-modernization/"/>
    <updated>2025-12-01T20:00:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2025/</id>
    <title><![CDATA[Top announcements of AWS re:Invent 2025]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/30/Matt-stage-RIV2024.jpg"><img class="alignright size-full wp-image-102395" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/30/Matt-stage-RIV2024.jpg" alt="Matt Garman stands on stage at re:Invent 2024" width="2048" height="1024" /></a>We’re rounding up the most exciting and impactful announcements from <a href="https://reinvent.awsevents.com/">AWS re:Invent 2025</a>, which takes place November 30-December 4 in Las Vegas. This guide highlights the innovations that will help you build, scale, and transform your business in the cloud.</p><p>We’ll update this roundup throughout re:Invent with our curation of the major announcements from each keynote session and more. To see the complete list of all AWS launches, visit <a href="https://aws.amazon.com/new/">What’s New with AWS</a>.</p><p>(This post was updated Nov. 30, 2025.)</p><hr /><h3>Analytics</h3><p><a href="https://aws.amazon.com/blogs/aws/aws-clean-rooms-launches-privacy-enhancing-synthetic-dataset-generation-for-ml-model-training">AWS Clean Rooms launches privacy-enhancing dataset generation for ML model training</a><br />Train ML models on sensitive collaborative data by generating synthetic datasets that preserve statistical patterns while protecting individual privacy through configurable noise levels and protection against re-identification.</p><h3>Compute</h3><p><a href="https://aws.amazon.com/blogs/aws/introducing-aws-lambda-managed-instances-serverless-simplicity-with-ec2-flexibility">Introducing AWS Lambda Managed Instances: Serverless simplicity with EC2 flexibility</a><br />Run Lambda functions on EC2 compute while maintaining serverless simplicity—enabling access to specialized hardware and cost optimizations through EC2 pricing models, with AWS handling all infrastructure management.</p><h3>Containers</h3><p><a 
href="https://aws.amazon.com/blogs/aws/announcing-amazon-eks-capabilities-for-workload-orchestration-and-cloud-resource-management">Announcing Amazon EKS Capabilities for workload orchestration and cloud resource management</a><br />Streamline Kubernetes development with fully managed platform capabilities that handle workload orchestration and cloud resource management, eliminating infrastructure maintenance while providing enterprise-grade reliability and security.</p><h3>Networking &amp; Content Delivery</h3><p><a href="https://aws.amazon.com/blogs/aws/introducing-amazon-route-53-global-resolver-for-secure-anycast-dns-resolution-preview/">Introducing Amazon Route 53 Global Resolver for secure anycast DNS resolution (preview)</a><br />Simplify hybrid DNS management with a unified service that resolves public and private domains globally through secure, anycast-based resolution while reducing operational overhead and maintaining consistent security controls.</p><h3>Partner Network</h3><p><a href="https://aws.amazon.com/blogs/aws/aws-partner-central-now-available-in-aws-management-console">AWS Partner Central now available in AWS Management Console</a><br />Access Partner Central directly through the AWS Console to streamline your journey from customer to Partner—manage solutions, opportunities, and marketplace listings in one unified interface with enterprise-grade security.</p><h3>Security, Identity, &amp; Compliance</h3><p><a href="https://aws.amazon.com/blogs/aws/simplify-iam-policy-creation-with-iam-policy-autopilot-a-new-open-source-mcp-server-for-builders">Simplify IAM policy creation with IAM Policy Autopilot, a new open source MCP server for builders</a><br />Speed up AWS development with an open source tool that analyzes your code to generate valid IAM policies, providing AI coding assistants with up-to-date AWS service knowledge and reliable permission recommendations.</p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2025/"/>
    <updated>2025-12-01T03:17:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-amazon-route-53-global-resolver-for-secure-anycast-dns-resolution-preview/</id>
    <title><![CDATA[Introducing Amazon Route 53 Global Resolver for secure anycast DNS resolution (preview)]]></title>
    <summary><![CDATA[<table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing <a href="http://aws.amazon.com/route53/global-resolver">Amazon Route 53 Global Resolver</a>, a new Amazon Route 53 service, now in preview, that provides secure and reliable DNS resolution globally for queries from anywhere. You can use Global Resolver to resolve DNS queries to public domains on the internet and private domains associated with Route 53 private hosted zones. Route 53 Global Resolver offers network administrators a unified solution to resolve queries from authenticated clients and sources in on-premises data centers, branch offices, and remote locations through globally distributed anycast IP addresses. The service provides built-in security controls, including DNS traffic filtering, support for encrypted queries, and centralized logging, to help organizations reduce operational overhead while maintaining compliance with security requirements.</p><p>Organizations with hybrid deployments face operational complexity when managing DNS resolution across distributed environments. Resolving public internet domains and private application domains often requires maintaining split DNS infrastructure, which increases cost and administrative burden, especially when replicated across multiple locations. Network administrators must configure custom forwarding solutions, deploy Route 53 Resolver endpoints for private domain resolution, and implement separate security controls across different locations. Additionally, they must configure and maintain multi-Region failover strategies for Route 53 Resolver endpoints and provide consistent security policy enforcement across all Regions while testing failover scenarios.</p><p>Route 53 Global Resolver has key capabilities that address these challenges. The service resolves both public internet domains and Route 53 private hosted zones, eliminating the need for separate split-DNS forwarding. 
It provides DNS resolution through multiple protocols, including DNS over UDP (Do53), DNS-over-HTTPS (DoH), and DNS-over-TLS (DoT). Each deployment provides a single set of common IPv4 and IPv6 anycast IP addresses that route queries to the nearest AWS Region, reducing latency for distributed client populations.</p><p>Route 53 Global Resolver provides integrated security features equivalent to Route 53 Resolver DNS Firewall. Administrators can configure filtering rules using <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/gr-managed-domain-lists.html">AWS Managed Domain Lists</a>, which classify domains by DNS threat (malware, spam, phishing) or by web content that might not be safe for work (adult sites, gambling, social networking), or they can create custom domain lists by importing domains from a file. Advanced threat protection detects and blocks domain generation algorithm (DGA) patterns and DNS tunneling attempts. For encrypted DNS traffic, Route 53 Global Resolver supports the DoH and DoT protocols to protect queries from unauthorized access during transit.</p><p>Route 53 Global Resolver accepts traffic only from known clients, which must authenticate with the resolver. For Do53, DoT, and DoH connections, administrators can configure IP and CIDR allowlists. For DoH and DoT connections, token-based authentication provides granular access control with customizable expiration periods and revocation capabilities. Administrators can assign tokens to specific client groups or individual devices based on organizational requirements.</p><p>Route 53 Global Resolver supports DNSSEC validation to verify the authenticity and integrity of DNS responses from public nameservers. 
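Conceptually, the IP and CIDR allowlisting described above is a membership test of each query’s source address against the configured ranges. A minimal sketch of that check using Python’s standard ipaddress module (illustrative only, not the service’s implementation; the addresses are documentation examples):

```python
import ipaddress

def is_allowed(client_ip: str, allowed_cidrs: list[str]) -> bool:
    """Return True if client_ip falls inside any configured access-source CIDR."""
    ip = ipaddress.ip_address(client_ip)
    # An IPv4 address never matches an IPv6 network (and vice versa),
    # so mixed-family allowlists are safe to check in one pass.
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)

# Hypothetical office networks configured as access sources.
office_cidrs = ["203.0.113.0/24", "2001:db8::/32"]
print(is_allowed("203.0.113.45", office_cidrs))   # True  (allowed)
print(is_allowed("198.51.100.7", office_cidrs))   # False (rejected)
```

In the actual service this enforcement happens on the resolver side; the sketch only illustrates the semantics of an access-source allowlist.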
It also includes EDNS Client Subnet support, which forwards client subnet information to enable more accurate geographic-based DNS responses from content delivery networks.</p><p><strong>Getting started with Route 53 Global Resolver<br /></strong> This walkthrough shows how to configure Route 53 Global Resolver for an organization with offices on the US East and West coasts that needs to resolve both public domains and private applications hosted in Route 53 private hosted zones. To configure Route 53 Global Resolver, go to the AWS Management Console, choose <strong>Global resolvers</strong> from the navigation pane, and choose <strong>Create global resolver</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/07/1211186841717760-0a.png"><img class="alignnone size-full wp-image-100486" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/07/1211186841717760-0a.png" alt="" width="1924" height="840" /></a></p><p>In the <strong>Resolver details</strong> section, enter a <strong>Resolver name</strong> such as <code>corporate-dns-resolver</code>. Add an optional description like <code>DNS resolver for corporate offices and remote clients</code>. In the <strong>Regions</strong> section, choose the AWS Regions where you want the resolver to operate, such as US East (N. Virginia) and US West (Oregon). 
The anycast architecture routes DNS queries from your clients to the nearest selected Region.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-1a.png"><img class="alignnone size-full wp-image-100159" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-1a.png" alt="" width="1924" height="1212" /></a></p><p>After the resolver is created, the console displays the resolver details, including the anycast IPv4 and IPv6 addresses that you will use for DNS queries. You can proceed to create a DNS view by choosing <strong>Create DNS view</strong> to configure client authentication and DNS query resolution settings.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-2.png"><img class="alignnone size-full wp-image-100160" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-2.png" alt="" width="1924" height="916" /></a></p><p>In the <strong>Create DNS view</strong> section, enter a <strong>DNS view name</strong> such as <code>primary-view</code> and optionally add a <strong>Description</strong> like <code>DNS view for corporate offices</code>. A DNS view helps you create different logical groupings for your clients and sources, and determine the DNS resolution for those groups. This helps you maintain different DNS filtering rules and private hosted zone resolution policies for different clients in your organization.</p><p>For <strong>DNSSEC validation</strong>, choose <strong>Enable</strong> to verify the authenticity of DNS responses from public DNS servers. For <strong>Firewall rules fail open behavior</strong>, choose <strong>Disable</strong> to block DNS queries when firewall rules can’t be evaluated, which provides additional security. 
For <strong>EDNS client subnet</strong>, keep <strong>Enable</strong> selected to forward client location information to DNS servers, which allows content delivery networks to provide more accurate geographic responses. The DNS view might take a few minutes to become operational.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-3.png"><img class="alignnone size-full wp-image-100163" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-3.png" alt="" width="1924" height="1093" /></a></p><p>After the DNS view is created and operational, configure DNS Firewall rules to filter network traffic by choosing <strong>Create rule</strong>. In the <strong>Create DNS Firewall rules</strong> section, enter a <strong>Rule name</strong> such as <code>block-malware-domains</code> and optionally add a description. For <strong>Rule configuration type</strong>, you can choose <strong>Customer managed domain lists</strong>, <strong>AWS managed domain lists</strong>, or <strong>DNS Firewall Advanced protection</strong>.</p><p>For this walkthrough, choose <strong>AWS managed domain lists</strong>. In the <strong>Domain lists</strong> dropdown, choose one or more AWS managed lists such as <strong>Threat – Malware</strong> to block known malicious domains. You can leave <strong>Query type</strong> empty to apply the rule to all DNS query types; in this example, choose <strong>A</strong> to apply the rule only to IPv4 address queries. In the <strong>Rule action</strong> section, select <strong>Block</strong> to prevent DNS resolution for domains that match the selected lists. 
For <strong>Response to send for Block action</strong>, keep <strong>NODATA</strong> selected to indicate that the query was successful but no response is available, then choose <strong>Create rules</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-4c.png"><img class="alignnone size-full wp-image-100172" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-4c.png" alt="" width="1924" height="2126" /></a></p><p>The next step is to configure access sources to specify which IP addresses or CIDR blocks are allowed to send DNS queries to the resolver. Navigate to the <strong>Access sources</strong> tab in the <strong>DNS view</strong> and then choose <strong>Create access source</strong>.</p><p>In the <strong>Access source details</strong> section, enter a <strong>Rule name</strong> such as <code>office-networks</code> to identify the access source. In the <strong>CIDR block</strong> field, enter the IP address range for your offices to allow queries from that network. For <strong>Protocol</strong>, select <strong>Do53</strong> for standard DNS queries over UDP or choose <strong>DoH</strong> or <strong>DoT</strong> if you want to require encrypted DNS connections from clients. 
After configuring these settings, choose <strong>Create access source</strong> to allow the specified network to send DNS queries to the resolver.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-5a.png"><img class="alignnone size-full wp-image-100174" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-5a.png" alt="" width="1924" height="1119" /></a></p><p>Next, navigate to the <strong>Access tokens</strong> tab in the <strong>DNS view</strong> to create token-based authentication for clients and choose <strong>Create access token</strong>. In the <strong>Access token details</strong> section, enter a <strong>Token name</strong> such as <code>remote-clients-token</code>. For <strong>Token expiry</strong>, select an expiration period from the dropdown based on your security requirements, such as <strong>365 days</strong> for long-term client access, or choose a shorter duration like <strong>30 days</strong> or <strong>90 days</strong> for tighter access control. After configuring these settings, choose <strong>Create access token</strong> to generate the token, which clients can use to authenticate DoH and DoT connections to the resolver.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-6.png"><img class="alignnone size-full wp-image-100176" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-6.png" alt="" width="1924" height="797" /></a></p><p>After the access token is created, navigate to the <strong>Private hosted zones</strong> tab in the <strong>DNS view</strong> to associate Route 53 private hosted zones with the DNS view so that the resolver can resolve queries for your private application domains. 
Choose <strong>Associate private hosted zone</strong> and in the <strong>Private hosted zones</strong> section, select a private hosted zone from the list that you want the resolver to handle. After selecting the zone, choose <strong>Associate</strong> to enable the resolver to respond to DNS queries for these private domains from your configured access sources.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-7.png"><img class="alignnone size-full wp-image-100177" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/1211186841717760-7.png" alt="" width="1924" height="513" /></a></p><p>With the DNS view configured, firewall rules created, access sources and tokens defined, and private hosted zones associated, the Route 53 Global Resolver setup is complete and ready to handle DNS queries from your configured clients.</p><p>After creating your Route 53 Global Resolver, you need to configure your DNS clients to send queries to the resolver’s anycast IP addresses. The configuration method depends on the access control you configured in your DNS view:</p><ul><li><strong>For IP-based access sources (CIDR blocks)</strong> – Configure your source clients to point DNS traffic to the Route 53 Global Resolver anycast IP addresses provided in the resolver details. Global Resolver will only allow access from allowlisted IPs that you have specified in your access sources. You can also associate the access sources to different DNS views to provide more granular DNS resolution views for different sets of IPs.</li>
<li><strong>For access token–based authentication</strong> – Deploy the tokens on your clients to authenticate DoH and DoT connections with Route 53 Global Resolver. You must also configure your clients to point the DNS traffic to the Route 53 Global Resolver anycast IP addresses provided in the resolver details.</li>
</ul><p>For detailed configuration instructions for your specific operating system and protocol, refer to the <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/gr-platform-configuration-examples.html">technical documentation</a>.</p><p><strong>Additional things to know<br /></strong> We’re renaming the existing Route 53 Resolver to Route 53 VPC Resolver. This naming change clarifies the architectural distinction between the two services. VPC Resolver operates Regionally within your VPCs to provide DNS resolution for resources in your Amazon VPC environment. VPC Resolver continues to support inbound and outbound resolver endpoints for hybrid DNS architectures within specific AWS Regions.</p><p>Route 53 Global Resolver complements Route 53 VPC Resolver by providing internet-reachable, global and private DNS resolution for on-premises and remote clients without requiring VPC deployment or private connections.</p><p>Existing VPC Resolver configurations remain unchanged and continue to function as configured. The renaming affects the service name in the AWS Management Console and documentation, but API operation names remain unchanged. If your architecture requires DNS resolution for resources within your VPCs, continue using VPC Resolver.</p><p><strong>Join the preview<br /></strong> Route 53 Global Resolver reduces operational overhead by providing unified DNS resolution for public and private domains through a single managed service. The global anycast architecture improves reliability and reduces latency for distributed clients. 
Integrated security controls and centralized logging help organizations maintain consistent security policies across all locations while meeting compliance requirements.</p><p>To learn more about Amazon Route 53 Global Resolver, visit the <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/gr-what-is-global-resolver.html">Amazon Route 53 documentation</a>.</p><p>You can start using Route 53 Global Resolver through the AWS Management Console in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney) Regions.</p><a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-amazon-route-53-global-resolver-for-secure-anycast-dns-resolution-preview/"/>
    <updated>2025-12-01T02:56:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-clean-rooms-launches-privacy-enhancing-synthetic-dataset-generation-for-ml-model-training/</id>
    <title><![CDATA[AWS Clean Rooms launches privacy-enhancing synthetic dataset generation for ML model training]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing privacy-enhancing synthetic dataset generation for <a href="https://aws.amazon.com/clean-rooms/">AWS Clean Rooms</a>, a new capability that organizations and their partners can use to generate privacy-enhancing synthetic datasets from their collective data to train regression and classification <a href="https://aws.amazon.com/ai/machine-learning/">machine learning (ML)</a> models. You can use this feature to generate synthetic training datasets that preserve the statistical patterns of the original data, without the model having access to original records, opening new opportunities for model training that were previously not possible due to privacy concerns.</p><p>When building ML models, data scientists and analysts typically face a fundamental tension between data utility and privacy protection. Access to high-quality, granular data is essential for training accurate models that can recognize trends, personalize experiences, and drive business outcomes. However, using granular data such as user-level event data from multiple parties raises significant privacy concerns and compliance challenges. Organizations want to answer questions like, “What characteristics indicate a high-probability customer conversion?”, but training on the individual-level signals often conflicts with privacy policies and regulatory requirements.</p><p><strong>Privacy-enhancing synthetic dataset generation for custom ML<br /></strong> To address this challenge, we’re introducing privacy-enhancing synthetic dataset generation in <a href="https://aws.amazon.com/clean-rooms/ml/">AWS Clean Rooms ML</a>, which organizations can use to create synthetic versions of sensitive datasets that can be more securely used for ML model training. 
This capability uses advanced ML techniques to generate new datasets that maintain the statistical properties of the original data while de-identifying subjects from the original source data.</p><p>Traditional anonymization techniques such as <a href="https://en.wikipedia.org/wiki/Data_masking">masking</a> still carry the risk of re-identifying individuals in a dataset—knowing attributes about a person such as zip code and date of birth can be sufficient to identify them when combined with census data. Privacy-enhancing synthetic dataset generation addresses this risk through a fundamentally different approach. The system trains a model that learns the essential statistical patterns of the original dataset, then generates synthetic records by sampling values from the original dataset and using the model to predict the value of the target column. Rather than merely copying or perturbing the original data, the system uses a model capacity reduction technique to mitigate the risk that the model will memorize information about individuals in the training data. The resulting synthetic dataset has the same schema and statistical characteristics as the original data, making it suitable for training classification and regression models. This approach quantifiably reduces the risk of re-identification.</p><p>Organizations using this capability have control over the privacy parameters, including the amount of noise applied and the level of protection against membership <a href="https://en.wikipedia.org/wiki/Inference_attack">inference attacks</a>, where an adversary attempts to determine whether a specific individual’s data was included in the training set. After generating the synthetic dataset, AWS Clean Rooms provides detailed metrics to help customers and their compliance teams understand the quality of the synthetic dataset across two critical dimensions: fidelity to the original data and privacy preservation. 
The fidelity score uses <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">KL-divergence</a> to measure how similar the synthetic data is to the original dataset, and the privacy score quantifies how well the dataset is protected against membership inference attacks.</p><p><strong>Working with synthetic data in AWS Clean Rooms<br /></strong> Getting started with privacy-enhancing synthetic dataset generation follows the established AWS Clean Rooms ML custom models workflow, with new steps to specify privacy requirements and review quality metrics. Organizations begin by creating <a href="https://docs.aws.amazon.com/clean-rooms/latest/userguide/working-with-configured-tables.html">configured tables</a> with <a href="https://docs.aws.amazon.com/clean-rooms/latest/userguide/analysis-rules.html">analysis rules</a> using their preferred data sources, then join or create a collaboration with their partners and associate their tables with that collaboration.</p><p>The new capability introduces an enhanced analysis template where data owners not only define the SQL query that creates the dataset but also specify that the resulting dataset must be synthetic. Within this template, organizations classify columns to indicate which column the ML model will predict and which columns contain categorical versus numerical values. Critically, the template also includes privacy thresholds that the generated synthetic data must meet to be made available for training. These include an epsilon value that specifies how much noise must be present in the synthetic data to protect against <a href="https://en.wikipedia.org/wiki/Data_re-identification">re-identification</a>, and a minimum protection score against membership inference attacks. 
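Tightening these thresholds trades utility for privacy. As a simplified illustration of the fidelity dimension (a sketch of the general idea, not the exact computation AWS Clean Rooms performs), KL-divergence for a single categorical column is zero when the synthetic distribution matches the original exactly and grows as added noise pushes the distributions apart:

```python
import math

def kl_divergence(p: list[float], q: list[float]) -> float:
    """KL(P || Q) for two discrete distributions over the same categories."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

original  = [0.5, 0.3, 0.2]    # category frequencies in the source column
identical = [0.5, 0.3, 0.2]    # a perfectly faithful synthetic column
noisier   = [0.4, 0.35, 0.25]  # more noise applied, lower fidelity

print(kl_divergence(original, identical))  # 0.0
print(kl_divergence(original, noisier))    # small positive value
```

A lower divergence means higher fidelity, which is why increasing the noise (a smaller effective epsilon budget) tends to lower the fidelity score while raising the privacy protection.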
Setting these thresholds appropriately requires understanding your organization’s specific privacy and compliance requirements, and we recommend engaging with your legal and compliance teams during this process.</p><p>After all data owners review and approve the analysis template, a collaboration member creates a machine learning input channel that references the template. AWS Clean Rooms then begins the synthetic dataset generation process, which typically completes within a few hours depending on the size and complexity of the dataset. If the generated synthetic dataset meets the required privacy thresholds defined in the analysis template, a synthetic machine learning input channel becomes available along with detailed quality metrics. Data scientists can review the actual protection score achieved against a simulated membership inference attack.</p><p>Once satisfied with the quality metrics, organizations can proceed to train their ML models using the synthetic dataset within the AWS Clean Rooms collaboration. 
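To make the fidelity score concrete: KL-divergence compares the empirical value distribution of a synthetic column against the original one. This hand-rolled computation over a toy categorical column is purely illustrative and is not how the service computes its metric:</p>

```python
import math
from collections import Counter

def kl_divergence(original, synthetic, smoothing=1e-9):
    """KL(P_original || Q_synthetic) over one categorical column's values."""
    p_counts, q_counts = Counter(original), Counter(synthetic)
    p_total, q_total = len(original), len(synthetic)
    divergence = 0.0
    for value, count in p_counts.items():
        p = count / p_total
        q = q_counts[value] / q_total or smoothing  # avoid log(0) for missing values
        divergence += p * math.log(p / q)
    return divergence  # 0.0 for identical distributions; larger means less faithful

faithful = kl_divergence(["a", "a", "b", "b"], ["a", "b", "a", "b"])  # 0.0
skewed = kl_divergence(["a", "a", "b", "b"], ["a", "a", "a", "b"])    # about 0.14
```

<p>Lower divergence means the synthetic column preserved more of the original’s statistical signal, so organizations can train on it with confidence. 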
Depending on the use case, they can export the trained model weights or continue to run inference jobs within the collaboration itself.</p><p><strong>Let’s try it out<br /></strong> When creating a new AWS Clean Rooms collaboration, I can now set who pays for synthetic dataset generation.</p><p><img class="alignnone size-large wp-image-101337" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/image-27-1-1024x969.png" alt="" width="1024" height="969" /></p><p>After my Collaboration is configured, I can choose <strong>Require analysis template output to be synthetic</strong> when creating a new analysis template.</p><p><img class="alignnone size-large wp-image-101336" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/image-26-1-879x1024.png" alt="" width="879" height="1024" /></p><p>After my synthetic analysis template is ready, I can use it when running protected queries and view all the relevant ML input channel details.</p><p><img class="alignnone size-large wp-image-101335" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/image-25-1-1024x1008.png" alt="Clean Rooms Synthetic Data Console" width="1024" height="1008" /></p><p><strong>Now available<br /></strong> You can start using privacy-enhancing synthetic dataset generation through AWS Clean Rooms today. The feature is available in all commercial <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a> where AWS Clean Rooms is available. Learn more about it in the <a href="https://docs.aws.amazon.com/clean-rooms/latest/userguide/what-is.html">AWS Clean Rooms documentation</a>.</p><p>Privacy-enhancing synthetic dataset generation is billed separately based on usage. You pay only for the compute used to generate your synthetic dataset, charged as Synthetic Data Generation Units (SDGUs). 
The number of SDGUs varies based on the size and complexity of your original dataset. This fee can be configured as a payer setting, meaning any collaboration member can agree to pay the costs. For more information on pricing, refer to the <a href="https://aws.amazon.com/clean-rooms/pricing/">AWS Clean Rooms pricing page</a>.</p><p>The initial release supports training classification and regression models on tabular data. The synthetic datasets work with standard ML frameworks and can be integrated into existing model development pipelines without requiring changes to your workflows.</p><p>This capability represents a significant advancement in privacy-enhanced machine learning. Organizations can unlock the value of sensitive user-level data for model training while mitigating the risk that sensitive information about individual users could be leaked. Whether you’re optimizing advertising campaigns, personalizing insurance quotes, or enhancing fraud detection systems, privacy-enhancing synthetic dataset generation makes it possible to train more accurate models through data collaboration while respecting individual privacy.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="399d867d-04a4-4d32-9e89-024296e2bb9c" data-title="AWS Clean Rooms launches privacy-enhancing synthetic dataset generation for ML model training" data-url="https://aws.amazon.com/blogs/aws/aws-clean-rooms-launches-privacy-enhancing-synthetic-dataset-generation-for-ml-model-training/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-clean-rooms-launches-privacy-enhancing-synthetic-dataset-generation-for-ml-model-training/"/>
    <updated>2025-12-01T02:55:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-partner-central-now-available-in-aws-management-console/</id>
    <title><![CDATA[AWS Partner Central now available in AWS Management Console]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing that <a href="https://console.aws.amazon.com/partnercentral/home">AWS Partner Central</a> is now available directly in the <a href="https://console.aws.amazon.com">AWS Management Console</a>, creating a unified experience that transforms how you engage with AWS as both a customer and an AWS Partner.</p><p>As someone who has worked with countless AWS customers over the years, I’ve observed how organizations evolve in their AWS journey. Many of our most successful Partners began as AWS customers—first using our services to build their own infrastructure and solutions, then expanding to create offerings for others. Seeing this natural progression from customer to Partner, we recognized an opportunity to streamline these traditionally separate experiences into one unified journey.</p><p>As AWS evolved, so did the needs of our Partner community. Organizations today operate in multiple capacities: using AWS services for their own infrastructure while simultaneously building and delivering solutions for their customers. Modern businesses need streamlined workflows that support their growth from AWS customer to Partner to <a href="https://aws.amazon.com/marketplace/">AWS Marketplace</a> Seller, with enterprise-grade security features that match how they actually work with AWS today.</p><p><strong>A new unified console experience<br /></strong> The integration of AWS Partner Central into the console represents a fundamental shift in partnership accessibility. For existing AWS customers, becoming an AWS Partner is now as straightforward as accessing any other AWS service. The familiar console interface provides direct access to partnership opportunities, program benefits, and AWS Marketplace capabilities without needing separate logins or navigation between different systems.</p><p>Getting started as an AWS Partner now takes only a few clicks within your existing console environment.
You can discover partnership opportunities, understand program requirements, and begin your Partner journey without leaving the AWS interface you already know and trust.</p><p>The console integration creates an intuitive pathway for existing customers to transition into AWS Marketplace Sellers. You can now access AWS Marketplace Seller capabilities alongside your existing AWS services, managing both your infrastructure and AWS Marketplace business from a single interface. Private offer requests and negotiations can be managed directly within AWS Partner Central, and you can manage your AWS Marketplace listings alongside your other AWS activities through streamlined workflows.</p><p><strong>Becoming an AWS Partner<br /></strong> The unified console experience provides access to comprehensive partnership benefits designed to accelerate your business growth.</p><p><strong>Join</strong> the <a href="https://aws.amazon.com/partners/">AWS Partner Network (APN)</a> and complete your Partner and AWS Marketplace Seller requirements seamlessly within the same interface. Enroll in Partner Paths that align with your customer solutions to build, market, list, and sell in AWS Marketplace while growing alongside AWS. When you are established, use the Partner programs to <strong>differentiate</strong> your solution, list in AWS Marketplace to improve your go-to-market discoverability, and build AWS expertise through certifications to drive profitability by capturing new revenue streams. 
<strong>Scale</strong> your business by selling or reselling software and professional services in AWS Marketplace, helping you accelerate deals, boost revenue, and expand your customer reach to new geographies, industries, and segments.</p><p>Throughout your journey, you can continue using <a href="https://aws.amazon.com/q/">Amazon Q</a> in the console, which provides personalized guidance through AWS Partner Assistant.</p><p><strong>Let’s see the new Partner Central console<br /></strong> The new AWS Partner Central is accessible like any other AWS service from the console. Among many new capabilities, it provides four key sections that support Partner operations and business growth within the AWS Partner Network:</p><p><strong>1. It helps you sell your solutions</strong></p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-18-22.png"><img class="aligncenter size-large wp-image-100807" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-18-22-1024x532.png" alt="AWS Partner Central - Solutions" width="1024" height="532" /></a></p><p>You can create and publish solutions that address specific customer needs through AWS Marketplace. Solutions are made up of products such as software as a service (SaaS), Amazon Machine Images (AMI), containers, professional services, AI agents and tools, and more. The solutions management capability guides you through building offerings that include both products you own and those you are authorized to resell. 
You can craft compelling value propositions and descriptions that clearly communicate your solution benefits to potential buyers browsing AWS Marketplace.</p><p>I choose <strong>Create solution</strong> to start listing a new solution in the AWS Marketplace, as shown in the following figure.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-22-34.png"><img class="aligncenter size-large wp-image-100808" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-22-34-1024x805.png" alt="AWS Partner Central - Create solution" width="1024" height="805" /></a></p><p><strong>2. It helps you update and manage your Partner profile</strong></p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-30-32.png"><img class="aligncenter size-large wp-image-100809" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-30-32-1024x462.png" alt="AWS Partner Central - Manage profile" width="1024" height="462" /></a></p><p>Your Partner profile showcases your organization’s expertise and capabilities to the AWS community. You control how your business appears to potential customers and Partners by highlighting the industry segments you serve and describing your primary products or services. Profile visibility settings provide you with the option to choose whether your information is public or private.</p><p><strong>3. 
It helps you track opportunities</strong></p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-28-32.png"><img class="aligncenter size-large wp-image-100810" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-28-32-1024x577.png" alt="AWS Partner Central - Track Opportunities" width="1024" height="577" /></a></p><p>You can manage your pipeline of AWS customers, supporting joint collaborations with AWS on customer engagements. You monitor these prospects using clear status indicators: approved, rejected, draft, and pending approval. The opportunity dashboard shows stages, estimated AWS Monthly Recurring Revenue, and other key metrics that help you understand your pipeline. You can create more opportunities directly within the console and export data for your own reporting and analysis.</p><p><strong>4. It provides you with the ability to discover and connect with other Partners</strong></p><p>After becoming an AWS Partner, you get access to the AWS Partners network, where you can search for other Partners. You can connect with them to collaborate on sales opportunities and expand your customer outreach.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-36-39.png"><img class="aligncenter size-large wp-image-100811" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-36-39-1024x677.png" alt="AWS Partner Central - Discover and Search for partners" width="1024" height="677" /></a></p><p>You search through available Partners using filters for industry, location, Partner program type, and specialization. The centralized dashboard shows your active connections, pending requests, and connection history, so that you can manage business relationships and identify collaboration opportunities that can expand your reach. 
Like all other AWS services, these Partner connection capabilities are now available as APIs, which provide automation and integration into your existing workflows.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-36-48.png"><img class="aligncenter size-large wp-image-100812" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/2025-11-12_14-36-48-1024x462.png" alt="AWS Partner Central - Manage contact requests" width="1024" height="462" /></a></p><p>These capabilities work together within the new AWS Partner Central experience, accessible directly from the console, helping you transition from AWS customer to successful Partner with enterprise-grade security and streamlined workflows.</p><p><strong>The technical foundation: Migrating the identity system<br /></strong> This unified console experience is made possible by our migration to a modern identity system built on <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a>. We’ve transitioned from legacy identity infrastructure to IAM Identity Center, providing enterprise-grade security capabilities, including single sign-on and multi-factor authentication. With security as job zero, this migration lets new and existing Partners connect their own identity providers to AWS Partner Central. It provides seamless integration with existing enterprise authentication systems while removing the complexity of managing separate credentials across different services.</p><p><strong>One more thing<br /></strong> APIs are the core of what we do at AWS, and AWS Partner Central is no different. You can automate and streamline your co-sell workflows by connecting your business tools to AWS Partner Central.
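As a sketch of that kind of automation (a hypothetical script, assuming the partnercentral-selling client available in recent AWS SDKs; confirm the response field names against the API reference before relying on them), the snippet below pages through co-sell opportunities and tallies them by lifecycle stage:</p>

```python
from collections import Counter

def summarize_by_stage(opportunities):
    """Tally opportunity summaries by lifecycle stage (pure helper, easy to test)."""
    return Counter(o.get("LifeCycle", {}).get("Stage", "Unknown") for o in opportunities)

def fetch_opportunities(client, catalog="AWS"):
    """Page through ListOpportunities until NextToken runs out."""
    opportunities, token = [], None
    while True:
        kwargs = {"Catalog": catalog}
        if token:
            kwargs["NextToken"] = token
        page = client.list_opportunities(**kwargs)
        opportunities.extend(page.get("OpportunitySummaries", []))
        token = page.get("NextToken")
        if not token:
            return opportunities

# Against a real account linked to Partner Central, the client would be
# boto3.client("partnercentral-selling"); any object with a compatible
# list_opportunities method (for example, a test stub) works too.
```

<p>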
<a href="https://docs.aws.amazon.com/partner-central/latest/APIReference/welcome.html">The APIs offered by AWS Partner Central</a> help you accelerate APN benefits—from Account Management (Account API) and Solution Management (Solution API) to co-selling with Opportunity and Leads APIs, and Benefits APIs for faster benefit activation.</p><p>You can use these APIs to engage with AWS and grow your Partner business from your own CRM tools.</p><p><strong>Get started today<br /></strong> This integration between the console and AWS Partner Central reflects our commitment to reducing complexity and improving the Partner experience. We’re bringing AWS Partner Central into the console to create a more intuitive path for organizations to grow with AWS from initial customer adoption through to full partnership engagement and AWS Marketplace success.</p><p>Your journey from AWS customer to successful AWS Partner and AWS Marketplace Seller starts with a few clicks in your console. I encourage you to explore the new unified experience today and discover how AWS Partner Central in the console can accelerate your organization’s growth and success within the AWS community.</p><p>Ready to get started? Visit <a href="https://console.aws.amazon.com/partnercentral/home">AWS Partner Central</a> in your console to learn more about the <a href="https://aws.amazon.com/partners/">AWS Partner Network</a> and discover the partnership path that’s right for your organization.</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="505b77ce-aad9-4a3a-b0a3-a637cfe35657" data-title="AWS Partner Central now available in AWS Management Console" data-url="https://aws.amazon.com/blogs/aws/aws-partner-central-now-available-in-aws-management-console/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-partner-central-now-available-in-aws-management-console/"/>
    <updated>2025-12-01T02:55:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-aws-lambda-managed-instances-serverless-simplicity-with-ec2-flexibility/</id>
    <title><![CDATA[Introducing AWS Lambda Managed Instances: Serverless simplicity with EC2 flexibility]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing AWS Lambda Managed Instances, a new capability you can use to run <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> functions on your <a href="https://aws.amazon.com/ec2">Amazon Elastic Compute Cloud (Amazon EC2)</a> compute while maintaining serverless operational simplicity. This enhancement addresses a key customer need: accessing specialized compute options and optimizing costs for steady-state workloads without sacrificing the serverless development experience you know and love.</p><p>Although Lambda eliminates infrastructure management, some workloads require specialized hardware, such as specific CPU architectures, or cost optimizations from Amazon EC2 purchasing commitments. This tension forces many teams to manage infrastructure themselves, sacrificing the serverless benefits of Lambda only to access the compute options or pricing models they need. This often leads to a significant architectural shift and greater operational responsibility.</p><p><strong>Lambda Managed Instances<br /></strong> You can use Lambda Managed Instances to define how your Lambda functions run on EC2 instances. <a href="https://aws.amazon.com">Amazon Web Services (AWS)</a> handles setting up and managing these instances in your account. You get access to the latest generation of Amazon EC2 instances, and AWS handles all the operational complexity—instance lifecycle management, OS patching, load balancing, and auto scaling. This means you can select compute profiles optimized for your specific workload requirements, like high-bandwidth networking for data-intensive applications, without taking on the operational burden of managing Amazon EC2 infrastructure.</p><p>Each execution environment can process multiple requests rather than handling just one request at a time.
This can significantly reduce compute consumption, because your code can efficiently share resources across concurrent requests instead of spinning up separate execution environments for each invocation. Lambda Managed Instances provides access to Amazon EC2 commitment-based pricing models such as <a href="https://aws.amazon.com/savingsplans/compute-pricing/">Compute Savings Plans</a> and <a href="https://aws.amazon.com/ec2/pricing/reserved-instances/">Reserved Instances</a>, which can provide up to a 72% discount over <a href="https://aws.amazon.com/ec2/pricing/on-demand/">Amazon EC2 On-Demand pricing</a>. This offers significant cost savings for steady-state workloads while maintaining the familiar Lambda programming model.</p><p><strong>Let’s try it out<br /></strong> To take Lambda Managed Instances for a spin, I first need to create a <strong>Capacity provider</strong>. As shown in the following image, there is a new tab for creating these in the navigation pane under <strong>Additional resources</strong>.</p><p><img class="alignnone size-large wp-image-101357" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/Screenshot-2025-11-18-at-3.39.38%E2%80%AFPM-1024x552.png" alt="Lambda Managed Instances Console" width="1024" height="552" /></p><p>When creating a capacity provider, I specify the <a href="https://aws.amazon.com/vpc/">virtual private cloud (VPC)</a>, subnet configuration, and security groups. This configuration tells Lambda where to provision and manage the instances.</p><p><img class="alignnone size-large wp-image-101358" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/Screenshot-2025-11-18-at-3.40.14%E2%80%AFPM-908x1024.png" alt="" width="908" height="1024" /></p><p>I can also specify the EC2 instance types I’d like to include or exclude, or I can choose to include all instance types for high diversity.
Additionally, I can specify a few controls related to auto scaling, including the Maximum vCPU count, and if I want to use Auto scaling or use a CPU policy.</p><p><img class="alignnone wp-image-101909 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/Screenshot-2025-11-25-at-11.56.32%E2%80%AFAM-1024x931.png" alt="" width="1024" height="931" /></p><p>After I have my capacity provider configured, I can choose it through its <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">Amazon Resource Name (ARN)</a> when I go to create a new Lambda function. Here I can also select the memory allocation I want along with a memory-to-vCPU ratio.</p><p><img class="alignnone size-large wp-image-101360" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/Screenshot-2025-11-18-at-3.42.13%E2%80%AFPM-1024x780.png" alt="" width="1024" height="780" /></p><p><strong>Working with Lambda Managed Instances<br /></strong> Now that we’ve seen the basic setup, let’s explore how Lambda Managed Instances works in more detail. The feature organizes EC2 instances into capacity providers that you configure through the <a href="https://console.aws.amazon.com/lambda/">Lambda console</a>, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, or <a href="https://aws.amazon.com/what-is/iac/">infrastructure as code (IaC)</a> tools such as <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a>, <a href="https://aws.amazon.com/serverless/sam/">AWS Serverless Application Model (AWS SAM)</a>, <a href="https://aws.amazon.com/cdk/">AWS Cloud Development Kit (AWS CDK)</a> and <a href="https://developer.hashicorp.com/terraform">Terraform</a>. 
Each capacity provider defines the compute characteristics you need, including instance type, networking configuration, and scaling parameters.</p><p>When creating a capacity provider, you can choose from the latest generation of EC2 instances to match your workload requirements. For cost-optimized general-purpose compute, you could choose <a href="https://aws.amazon.com/ec2/graviton/">AWS Graviton4</a> based instances that deliver excellent price performance. If you’re not sure which instance type to select, AWS Lambda provides optimized defaults that balance performance and cost based on your function configuration.</p><p>After creating a capacity provider, you attach your Lambda functions to it through a straightforward configuration change. Before attaching a function, you should review your code for programming patterns that can cause issues in multiconcurrency environments, such as writing to or reading from file paths that aren’t unique per request or using shared memory spaces and variables across invocations.</p><p>Lambda automatically routes requests to preprovisioned execution environments on the instances, eliminating cold starts that can affect first-request latency. Each execution environment can handle multiple concurrent requests through the multiconcurrency feature, maximizing resource utilization across your functions. When additional capacity is needed during traffic increases, AWS automatically launches new instances within tens of seconds and adds them to your capacity provider. The capacity provider can absorb traffic spikes of up to 50% without needing to scale by default, but built-in circuit breakers protect your compute resources during extreme traffic surges by temporarily throttling requests with 429 status codes if the capacity provider reaches maximum provisioned capacity and additional capacity is still being spun up.</p><p>The operational and architectural model remains serverless throughout this process. 
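</p><p>The earlier code review for multiconcurrency safety is worth making concrete. In this hypothetical handler (all names are invented for the sketch), a fixed scratch path or an unguarded module-level counter would race once one execution environment serves several requests; per-request unique paths and a lock are the usual fixes:</p>

```python
import os
import tempfile
import threading

request_count = 0          # shared across concurrent requests in one environment
_lock = threading.Lock()   # guards the shared counter

def handler(event, context):
    global request_count
    # Unsafe under multiconcurrency would be a fixed path such as /tmp/scratch.bin;
    # mkstemp gives each request its own unique scratch file instead.
    fd, scratch = tempfile.mkstemp(prefix="req-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(event.get("payload", b""))
        with _lock:  # shared mutable state must be synchronized (or avoided)
            request_count += 1
            seen = request_count
        return {"scratch": scratch, "requests_seen": seen}
    finally:
        os.remove(scratch)
```

<p>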
AWS handles instance provisioning, OS patching, security updates, load balancing across instances, and automatic scaling based on demand. AWS automatically applies security patches and bug fixes to operating system and runtime components, often without disrupting running applications. Additionally, instances have a maximum 14-day lifetime to align with industry security and compliance standards. You don’t need to write automatic scaling policies, configure load balancers, or manage instance lifecycle yourself, and your function code, event source integrations, <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (AWS IAM)</a> permissions, and <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a> monitoring remain unchanged.</p><p><strong>Now available<br /></strong> You can start using Lambda Managed Instances today through the Lambda console, AWS CLI, or AWS SDKs. The feature is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. For Regional availability and future roadmap, visit the <a href="https://builder.aws.com/capabilities/">AWS Capabilities by Region</a>. Learn more about it in the <a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-managed-instances.html">AWS Lambda documentation</a>.</p><p>Pricing for Lambda Managed Instances has three components. First, you pay standard Lambda request charges of $0.20 per million invocations. Second, you pay standard Amazon EC2 instance charges for the compute capacity provisioned. Your existing Amazon EC2 pricing agreements, including Compute Savings Plans and Reserved Instances, can be applied to these instance charges to reduce costs for steady-state workloads. Third, you pay a compute management fee of 15% calculated on the EC2 on-demand instance price to cover AWS’s operational management of your instances. 
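As a back-of-the-envelope illustration (the hourly rate and request volume here are invented placeholders, not published prices), the three components combine like this:</p>

```python
def monthly_cost(invocations_millions, ondemand_hourly_rate, instance_hours,
                 request_rate_per_million=0.20, management_fee_rate=0.15):
    """Request charges + EC2 instance charges + a 15% management fee on the On-Demand rate."""
    requests = invocations_millions * request_rate_per_million
    compute = ondemand_hourly_rate * instance_hours  # a Savings Plan would discount this part
    management = management_fee_rate * ondemand_hourly_rate * instance_hours
    return requests + compute + management

# Hypothetical: 50M invocations on one $0.10/hour instance for a 730-hour month.
estimate = monthly_cost(invocations_millions=50, ondemand_hourly_rate=0.10, instance_hours=730)
# requests $10.00 + compute $73.00 + management $10.95 = $93.95
```

<p>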
Note that unlike traditional Lambda functions, you are not charged separately for execution duration per request. The multiconcurrency feature helps further optimize costs by reducing the total compute time required to process your requests.</p><p>The initial release supports the latest versions of Node.js, Java, .NET, and Python runtimes, with support for other languages coming soon. The feature integrates with existing Lambda workflows including function versioning, aliases, <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Lambda-Insights.html">Amazon CloudWatch Lambda Insights</a>, <a href="https://aws.amazon.com/systems-manager/features/appconfig/">AWS AppConfig</a> extensions, and deployment tools like AWS SAM and AWS CDK. You can migrate existing Lambda functions to Lambda Managed Instances without changing your function code (as long as it has been validated to be thread-safe for multiconcurrency), making it easy to adopt this capability for workloads that would benefit from specialized compute or cost optimization.</p><p>Lambda Managed Instances represents a significant expansion of Lambda’s capabilities, which means you can run a broader range of workloads while preserving the serverless operational model. Whether you’re optimizing costs for high-traffic applications or accessing the latest processor architectures like Graviton4, this new capability provides the flexibility you need without operational complexity.
We’re excited to see what you build with Lambda Managed Instances.</p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="21a0a224-c35e-46db-8775-c5633f547469" data-title="Introducing AWS Lambda Managed Instances: Serverless simplicity with EC2 flexibility" data-url="https://aws.amazon.com/blogs/aws/introducing-aws-lambda-managed-instances-serverless-simplicity-with-ec2-flexibility/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-aws-lambda-managed-instances-serverless-simplicity-with-ec2-flexibility/"/>
    <updated>2025-12-01T02:55:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/simplify-iam-policy-creation-with-iam-policy-autopilot-a-new-open-source-mcp-server-for-builders/</id>
    <title><![CDATA[Simplify IAM policy creation with IAM Policy Autopilot, a new open source MCP server for builders]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing IAM Policy Autopilot, a new open source <a href="https://modelcontextprotocol.io/docs/getting-started/intro">Model Context Protocol (MCP)</a> server that analyzes your application code and helps your AI coding assistants generate <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> identity-based policies. IAM Policy Autopilot accelerates initial development by providing builders with a starting point that they can review and further refine. It integrates with AI coding assistants such as <a href="https://kiro.dev">Kiro</a>, <a href="https://www.claude.com/product/claude-code">Claude Code</a>, <a href="https://cursor.com">Cursor</a>, and <a href="https://cline.bot">Cline</a>, and it provides them with IAM knowledge and an understanding of the latest AWS services and features. IAM Policy Autopilot is available at no additional cost and runs locally; you can get started by visiting our <a href="https://github.com/awslabs/iam-policy-autopilot">GitHub repository</a>.</p><p><a href="https://aws.amazon.com">Amazon Web Services (AWS)</a> applications require IAM policies for their roles. Builders on AWS, from developers to business leaders, engage with IAM as part of their workflow. Developers typically start with broader permissions and refine them over time, balancing rapid development with security. They often use AI coding assistants in hopes of accelerating development and authoring IAM permissions. However, these AI tools don’t fully understand the nuances of IAM and can miss permissions or suggest invalid actions.
Builders seek solutions that provide reliable IAM knowledge, integrate with AI assistants, and get them started with policy creation, so that they can focus on building applications.</p><p><strong>Create valid policies with AWS knowledge<br /></strong> IAM Policy Autopilot addresses these challenges by generating <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html">identity-based IAM policies</a> directly from your application code. Using deterministic code analysis, it creates reliable and valid policies, so you spend less time authoring and debugging permissions. IAM Policy Autopilot incorporates AWS knowledge, including published AWS <a href="https://docs.aws.amazon.com/service-authorization/latest/reference/service-reference.html">service reference information</a>, and uses it to understand how code and SDK calls map to IAM actions, staying current with the latest AWS services and operations.</p><p>The generated policies provide a starting point for you to review and scope down to implement least privilege permissions. As you modify your application code—whether adding new AWS service integrations or updating existing ones—you only need to run IAM Policy Autopilot again to get updated permissions.</p><p><strong>Getting started with IAM Policy Autopilot<br /></strong> Developers can get started with IAM Policy Autopilot in minutes by downloading and integrating it with their workflow.</p><p>As an MCP server, IAM Policy Autopilot operates in the background as builders converse with their AI coding assistants. When your application needs IAM policies, your coding assistants can call IAM Policy Autopilot to analyze AWS SDK calls within your application and generate required identity-based IAM policies, providing you with the necessary permissions to start with.
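The core mapping step can be pictured as a lookup from SDK calls to IAM actions; this toy table is hand-written for illustration, whereas the real tool derives the pairs from AWS service reference data and static analysis of your code:</p>

```python
# Toy mapping from boto3-style (service, operation) calls to IAM actions.
SDK_TO_IAM = {
    ("s3", "get_object"): "s3:GetObject",
    ("s3", "put_object"): "s3:PutObject",
    ("sqs", "send_message"): "sqs:SendMessage",
    ("events", "put_events"): "events:PutEvents",
}

def actions_for_calls(calls):
    """Deduplicate and sort the IAM actions needed by discovered SDK calls."""
    return sorted({SDK_TO_IAM[call] for call in calls if call in SDK_TO_IAM})

needed = actions_for_calls([("s3", "get_object"), ("sqs", "send_message"), ("s3", "get_object")])
```

<p>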
After permissions are created, if you still encounter Access Denied errors during testing, the AI coding assistant invokes IAM Policy Autopilot to analyze the denial and propose targeted IAM policy fixes. After you review and approve the suggested changes, IAM Policy Autopilot updates the permissions.</p><p>You can also use IAM Policy Autopilot as a standalone command line interface (CLI) tool to generate policies directly or fix missing permissions. Both the CLI tool and the MCP server provide the same policy creation and troubleshooting capabilities, so you can choose the integration that best fits your workflow.</p><p>When using IAM Policy Autopilot, keep a few best practices in mind to maximize its benefits. IAM Policy Autopilot generates identity-based policies and doesn’t create resource-based policies, permission boundaries, service control policies (SCPs), or resource control policies (RCPs). The generated policies prioritize functionality over minimal permissions, so you should always review them and refine them as necessary so they align with your security requirements before deploying them.</p><p><strong>Let’s try it out<br /></strong> To set up IAM Policy Autopilot, I first need to install it on my system. To do so, I just need to run a one-liner script:</p><p><code>curl https://github.com/awslabs/iam-policy-autopilot/raw/refs/heads/main/install.sh | bash</code></p><p>Then I can follow the instructions to set up the MCP server for my IDE of choice. 
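</p><p>MCP-capable assistants register local servers through a small JSON configuration file. The exact entry comes from the repository’s instructions; the command name and arguments below are hypothetical placeholders, but the overall shape follows the common MCP server configuration format:</p>

```json
{
  "mcpServers": {
    "iam-policy-autopilot": {
      "command": "iam-policy-autopilot",
      "args": ["mcp"]
    }
  }
}
```

<p>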
Today, I’m using <a href="https://kiro.dev/docs/mcp/configuration/">Kiro</a>!</p><p><img class="alignnone wp-image-101238 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/image-20-4.png" alt="" width="918" height="482" /></p><p>In a new chat session in Kiro, I start with a straightforward prompt, where I ask Kiro to read the files in my <code>file-to-queue</code> folder and create a new <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a> file so I can deploy the application. This folder contains an automated <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> file router that scans a bucket and sends notifications to <a href="https://aws.amazon.com/sqs/">Amazon Simple Queue Service (Amazon SQS)</a> queues or <a href="https://aws.amazon.com/eventbridge/">Amazon EventBridge</a> based on configurable prefix-matching rules, enabling event-driven workflows triggered by file locations.</p><p>The last part asks Kiro to make sure I’m including necessary IAM policies. This should be enough to get Kiro to use the IAM Policy Autopilot MCP server.</p><p><img class="alignnone size-large wp-image-101239" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/image-23-2-1024x610.png" alt="" width="1024" height="610" /></p><p>Next, Kiro uses the IAM Policy Autopilot MCP server to generate a new policy document, as depicted in the following image. 
After it’s done, Kiro will move on to building out our CloudFormation template, along with some additional documentation and relevant code files.</p><p><img class="alignnone size-large wp-image-101240" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/image-21-2-1024x774.png" alt="IAM Policy Autopilot" width="1024" height="774" /></p><p>Finally, we can see our generated CloudFormation template with a new policy document, all produced using the IAM Policy Autopilot MCP server!</p><p><img class="alignnone wp-image-101244 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/image-24-1-1024x557.png" alt="IAM Policy Autopilot" width="1024" height="557" /></p><p><strong>Enhanced development workflow<br /></strong> IAM Policy Autopilot integrates with AWS services across multiple areas. For core AWS services, IAM Policy Autopilot analyzes your application’s usage of services such as Amazon S3, <a href="https://aws.amazon.com/lambda">AWS Lambda</a>, Amazon DynamoDB, <a href="https://aws.amazon.com/ec2">Amazon Elastic Compute Cloud (Amazon EC2)</a>, and Amazon CloudWatch Logs, then generates the permissions your code needs based on the SDK calls it discovers. After the policies are created, you can copy the policy directly into your CloudFormation template, AWS Cloud Development Kit (AWS CDK) stack, or Terraform configuration. 
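</p><p>For example, a generated policy can be pasted inline into a role in a CloudFormation template. This is a hedged sketch: the role name, policy name, action, and bucket ARN are illustrative placeholders, not tool output:</p>

```yaml
Resources:
  AppFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: GeneratedAppPolicy   # paste the generated statements here
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                Resource: arn:aws:s3:::amzn-s3-demo-bucket/*
```

<p>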
You can also prompt your AI coding assistants to integrate it for you.</p><p>IAM Policy Autopilot also complements existing IAM tools such as <a href="https://aws.amazon.com/iam/access-analyzer/">AWS IAM Access Analyzer</a> by providing functional policies as a starting point, which you can then validate using IAM Access Analyzer policy validation or refine over time with unused access analysis.</p><p><strong>Now available<br /></strong> IAM Policy Autopilot is available as an <a href="https://github.com/awslabs/iam-policy-autopilot">open source tool on GitHub</a> at no additional cost. The tool currently supports Python, TypeScript, and Go applications.</p><p>These capabilities represent a significant step forward in simplifying the AWS development experience so builders of different experience levels can develop and deploy applications more efficiently.</p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="ca29e884-a0ce-4b9a-ac0c-325993a047cb" data-title="Simplify IAM policy creation with IAM Policy Autopilot, a new open source MCP server for builders" data-url="https://aws.amazon.com/blogs/aws/simplify-iam-policy-creation-with-iam-policy-autopilot-a-new-open-source-mcp-server-for-builders/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/simplify-iam-policy-creation-with-iam-policy-autopilot-a-new-open-source-mcp-server-for-builders/"/>
    <updated>2025-12-01T02:55:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/announcing-amazon-eks-capabilities-for-workload-orchestration-and-cloud-resource-management/</id>
    <title><![CDATA[Announcing Amazon EKS Capabilities for workload orchestration and cloud resource management]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing <a href="https://aws.amazon.com/eks/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Elastic Kubernetes Service (Amazon EKS)</a> Capabilities, an extensible set of Kubernetes-native solutions that streamline workload orchestration, <a href="https://aws.amazon.com/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Web Services (AWS)</a> cloud resource management, and Kubernetes resource composition and orchestration. These fully managed, integrated platform capabilities include open source Kubernetes solutions that many customers are using today, such as <a href="https://argoproj.github.io/cd/">Argo CD</a>, <a href="https://github.com/aws-controllers-k8s/community">AWS Controllers for Kubernetes</a>, and <a href="https://kro.run/">Kube Resource Orchestrator</a>.</p><p>With EKS Capabilities, you can build and scale Kubernetes applications without managing complex solution infrastructure. Unlike typical in-cluster installations, these capabilities run in EKS service-owned accounts that are fully abstracted from customers.</p><p>With AWS managing infrastructure scaling, patching, and updates of these cluster capabilities, you get enterprise-grade reliability and security without needing to maintain and manage the underlying components.</p><p>Here are the capabilities available at launch:</p><ul><li><strong>Argo CD</strong> – Argo CD is a declarative GitOps tool that provides continuous deployment (CD) for Kubernetes. It’s broadly adopted, with more than 45% of Kubernetes end-users reporting production or planned production use in the <a href="https://www.cncf.io/reports/cncf-annual-survey-2024/">2024 Cloud Native Computing Foundation (CNCF) Survey</a>.</li>
<li><strong>AWS Controllers for Kubernetes (ACK)</strong> – ACK is highly popular with enterprise platform teams in production environments. ACK provides custom resources for Kubernetes that enable the management of AWS Cloud resources directly from within your clusters.</li>
<li><strong>Kube Resource Orchestrator (KRO)</strong> – KRO provides a streamlined way to create and manage custom resources in Kubernetes. With KRO, platform teams can create reusable resource bundles that abstract away complexity while remaining native to the Kubernetes ecosystem.</li>
</ul><p>With these capabilities, you can accelerate and scale your Kubernetes use, building on opinionated but flexible features designed for scale right from the start. EKS Capabilities is designed to offer a set of foundational cluster capabilities that layer seamlessly with each other, providing integrated features for continuous deployment, resource orchestration, and composition. You can focus on managing and shipping software without needing to spend time and resources building and managing these foundational platform components.</p><p><strong class="c6">How it works</strong><br />Platform engineers and cluster administrators can set up EKS Capabilities to offload building and managing custom solutions that provide common foundational services, so they can focus on the more differentiated features that matter to their business.</p><p><img class="aligncenter size-full wp-image-100420" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/2025-eks-capabilities-1.png" alt="" width="2370" height="876" /></p><p>Your application developers primarily work with EKS Capabilities as they do with other Kubernetes features, applying declarative configuration to create Kubernetes resources using familiar tools such as <code>kubectl</code> or through automation from <code>git commit</code> to running code.</p><p><strong class="c6">Get started with EKS Capabilities</strong><br />To enable EKS Capabilities, you can use the <a href="https://us-west-2.console.aws.amazon.com/eks/clusters?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">EKS console</a>, <a href="https://aws.amazon.com/cli/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a>, <a href="https://docs.aws.amazon.com/eks/latest/eksctl/what-is-eksctl.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">eksctl</a>, or other preferred tools. 
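</p><p>Once a capability is enabled, application developers interact with it through ordinary Kubernetes manifests. As a hedged sketch (the repository URL, path, and namespaces are placeholders, and the Argo CD namespace depends on how the capability is configured), a developer might register an application with a standard Argo CD Application manifest:</p>

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd          # placeholder; use the namespace configured for the capability
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git   # placeholder repository
    targetRevision: main
    path: k8s/
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}            # sync automatically when the repository changes
```

<p>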
In the EKS console, choose <strong>Create capabilities</strong> in the <strong>Capabilities</strong> tab on your existing EKS cluster. EKS Capabilities are AWS resources, and they can be tagged, managed, and deleted.</p><p><img class="aligncenter wp-image-101001 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/14/2025-eks-capabilities-2.png" alt="" width="2734" height="2245" /></p><p>You can select one or more capabilities to work together. I selected all three: Argo CD, ACK, and KRO. However, the capabilities are completely independent, and you can pick and choose which ones to enable on your clusters.</p><p><img class="aligncenter wp-image-100648 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/2025-eks-capabilities-3.jpg" alt="" width="2526" height="1328" /></p><p>Now you can configure the selected capabilities. You should create <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (AWS IAM)</a> roles to enable EKS to operate these capabilities within your cluster. Note that you can’t modify the capability name, namespace, authentication region, or <a href="https://aws.amazon.com/iam/identity-center/">AWS IAM Identity Center</a> instance after creating the capability. Choose <strong>Next</strong>, review the settings, and enable the capabilities.</p><p><img class="aligncenter wp-image-101886 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/2025-eks-capabilities-4.jpg" alt="" width="1544" height="2560" /></p><p>Now you can see and manage the created capabilities. 
Select <strong>ArgoCD</strong> to update the capability’s configuration.</p><p><img class="aligncenter wp-image-100664 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/2025-eks-capabilities-6-1.png" alt="" width="2658" height="946" /></p><p>You can see the details of the Argo CD capability. Choose <strong>Edit</strong> to change configuration settings or <strong>Monitor ArgoCD</strong> to view the health status of the capability for the current EKS cluster.</p><p><img class="aligncenter wp-image-100667 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/2025-eks-capabilities-7.png" alt="" width="2688" height="1550" /></p><p>Choose <strong>Go to Argo UI</strong> to visualize and monitor deployment status and application health.</p><p><img class="aligncenter size-full wp-image-100660" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/2025-eks-capabilities-9.png" alt="" width="3299" height="1588" /></p><p>To learn more about how to set up and use each capability in detail, visit <a href="https://docs.aws.amazon.com/eks/latest/userguide/capabilities.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Getting started with EKS Capabilities</a> in the Amazon EKS User Guide.</p><p><strong class="c6">Things to know</strong><br />Here are key considerations for this feature:</p><ul><li><strong>Permissions</strong> – EKS Capabilities are cluster-scoped administrator resources, and resource permissions are configured through AWS IAM. For some capabilities, there is additional configuration for single sign-on. For example, Argo CD single sign-on configuration is enabled directly in EKS through a direct integration with IAM Identity Center.</li>
<li><strong>Upgrades</strong> – EKS automatically updates cluster capabilities you enable and their related dependencies. It analyzes updates for breaking changes, patches and updates components as needed, and informs you of conflicts or issues through EKS cluster insights.</li>
<li><strong>Adoption</strong> – ACK provides resource adoption features that enable migration of existing AWS resources into ACK management. ACK also provides read-only resources, which can help facilitate a step-wise migration of resources provisioned with Terraform or <a href="https://aws.amazon.com/cloudformation/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS CloudFormation</a> into EKS Capabilities.</li>
</ul><p><strong class="c6">Now available</strong><br />Amazon EKS Capabilities are now available in commercial AWS Regions. For Regional availability and future roadmap, visit the <a href="https://builder.aws.com/capabilities/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Capabilities by Region</a>. There are no upfront commitments or minimum fees, and you only pay for the EKS Capabilities and resources that you use. To learn more, visit the <a href="https://aws.amazon.com/eks/pricing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">EKS pricing page</a>.</p><p>Give it a try in the <a href="https://console.aws.amazon.com/eks/home#/cluster-create?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon EKS console</a> and send feedback to <a href="https://repost.aws/tags/TA4IvCeWI1TE66q4jEj4Z9zg/amazon-elastic-kubernetes-service?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for EKS</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy">Channy</a></p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="afdba47b-5fcb-4a29-a491-42d66280a0e0" data-title="Announcing Amazon EKS Capabilities for workload orchestration and cloud resource management" data-url="https://aws.amazon.com/blogs/aws/announcing-amazon-eks-capabilities-for-workload-orchestration-and-cloud-resource-management/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/announcing-amazon-eks-capabilities-for-workload-orchestration-and-cloud-resource-management/"/>
    <updated>2025-12-01T02:55:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-route-53-launches-accelerated-recovery-for-managing-public-dns-records/</id>
    <title><![CDATA[Amazon Route 53 launches Accelerated recovery for managing public DNS records]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing <a href="https://aws.amazon.com/route53/">Amazon Route 53</a> Accelerated recovery for managing public DNS records, a new Domain Name System (DNS) business continuity feature that is designed to provide a 60-minute recovery time objective (RTO) during service disruptions in the US East (N. Virginia) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Region</a>. This enhancement helps ensure that customers can continue making DNS changes and provisioning infrastructure even during regional outages, providing greater predictability and resilience for mission-critical applications.</p><p>Customers running applications that require business continuity have told us they need additional DNS resilience capabilities to meet their continuity requirements and regulatory compliance obligations. While AWS maintains exceptional availability across our global infrastructure, organizations in regulated industries like banking, FinTech, and SaaS want the confidence that they will be able to make DNS changes even during unexpected regional disruptions, allowing them to quickly provision standby cloud resources or redirect traffic when needed.</p><p>Accelerated recovery for managing public DNS records addresses this need, with a target of restoring customers’ ability to make DNS changes within 60 minutes of a service disruption in the US East (N. Virginia) Region. 
The feature works seamlessly with your existing Route 53 setup, providing access to key Route 53 API operations during failover scenarios, including <a href="https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html">ChangeResourceRecordSets</a>, <a href="https://docs.aws.amazon.com/Route53/latest/APIReference/API_GetChange.html">GetChange</a>, <a href="https://docs.aws.amazon.com/Route53/latest/APIReference/API_ListHostedZones.html">ListHostedZones</a>, and <a href="https://docs.aws.amazon.com/Route53/latest/APIReference/API_ListResourceRecordSets.html">ListResourceRecordSets</a>. Customers can continue using their existing Route 53 API endpoint without modifying applications or scripts.</p><p><strong>Let’s try it out<br /></strong> Configuring a Route 53 hosted zone to use accelerated recovery is simple. Here I am creating a new hosted zone for a new website I’m building.</p><p><img class="alignnone wp-image-101972 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/1.-create-hz-page-narrow-1024x829.png" alt="" width="1024" height="829" /></p><p>Once I have created my hosted zone, I see a new tab labeled <strong>Accelerated recovery</strong>. 
I can see here that accelerated recovery is disabled by default.</p><p><img class="alignnone wp-image-101973 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/2.-accelerated-recovery-tab-narrow-1024x468.png" alt="" width="1024" height="468" /></p><p>To enable it, I just need to click the <strong>Enable</strong> button and confirm my choice in the modal that appears as depicted in the dialog below.</p><p><img class="alignnone wp-image-101974 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/3.-accelerated-recovery-modal-narrow-1024x444.png" alt="" width="1024" height="444" /></p><p>Enabling accelerated recovery will take a couple minutes to complete. Once it’s enabled, I see a green Enabled status as depicted in the screenshot below.</p><p><img class="alignnone wp-image-101975 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/25/4.-enable-success-notification-narrow-1024x524.png" alt="" width="1024" height="524" /></p><p>I can disable accelerated recovery at any time from this same area of the <a href="https://aws.amazon.com/console/">AWS Management Console</a>. I can also enable accelerated recovery for any existing hosted zones I have already created.</p><p><strong>Enhanced DNS business continuity<br /></strong> With accelerated recovery enabled, customers gain several key capabilities during service disruptions. The feature maintains access to essential Route 53 API operations, ensuring that DNS management remains available when it’s needed most. Organizations can continue to make critical DNS changes, provision new infrastructure, and redirect traffic flows without waiting for full service restoration.</p><p>The implementation is designed for simplicity and reliability. Customers don’t need to learn new APIs or modify existing automation scripts. 
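</p><p>For example, an existing automation that upserts a record during failover keeps working unchanged. A typical call is <code>aws route53 change-resource-record-sets --hosted-zone-id &lt;zone-id&gt; --change-batch file://change.json</code>, where the change batch follows the standard Route 53 format (the record name and IP address below are illustrative placeholders):</p>

```json
{
  "Comment": "Redirect traffic to standby infrastructure",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "203.0.113.10" }
        ]
      }
    }
  ]
}
```

<p>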
The same Route 53 endpoints and API calls continue to work, providing a seamless experience during both normal operations and failover scenarios.</p><p><strong>Now available<br /></strong> Accelerated recovery for Amazon Route 53 public hosted zones is available now. You can enable this feature through the AWS Management Console, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, <a href="https://builder.aws.com/build/tools">AWS Software Development Kit (AWS SDKs)</a>, or infrastructure as code tools like <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a> and <a href="https://aws.amazon.com/cdk/">AWS Cloud Development Kit (AWS CDK)</a>. There is no additional cost for using accelerated recovery.</p><p>To learn more about accelerated recovery and get started, visit the <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/accelerated-recovery.html">documentation</a>. This new capability represents our continued commitment to providing customers with the DNS resilience they need to build and operate mission-critical applications in the cloud.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="3ccae0df-5ae3-464b-87d3-79949fdff91f" data-title="Amazon Route 53 launches Accelerated recovery for managing public DNS records" data-url="https://aws.amazon.com/blogs/aws/amazon-route-53-launches-accelerated-recovery-for-managing-public-dns-records/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-route-53-launches-accelerated-recovery-for-managing-public-dns-records/"/>
    <updated>2025-11-26T17:21:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-how-to-join-aws-reinvent-2025-plus-kiro-ga-and-lots-of-launches-nov-24-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: How to join AWS re:Invent 2025, plus Kiro GA, and lots of launches (Nov 24, 2025)]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/reInvent-2025-logo.jpg"><img class="alignright wp-image-101773 size-medium" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/24/reInvent-2025-logo-300x108.jpg" alt="" width="300" height="108" /></a>Next week, don’t miss <strong><a href="https://reinvent.awsevents.com/">AWS re:Invent</a>,</strong> Dec. 1-5, 2025, for the latest AWS news, expert insights, and global cloud community connections! Our News Blog team is finalizing posts to introduce the most exciting launches from our service teams. If you’re joining us in person in Las Vegas, review the <a href="https://reinvent.awsevents.com/agenda/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">agenda</a>, <a href="https://registration.awsevents.com/flow/awsevents/reinvent2025/eventcatalog/page/eventcatalog??trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">session catalog,</a> and <a href="https://registration.awsevents.com/flow/awsevents/reinvent2025/AttendeeGuides/page/attendeeguidelanding?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">attendee guides</a> before arriving. Can’t attend in person? <a href="https://reinvent.awsevents.com/livestream/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Watch our Keynotes and Innovation Talks via livestream.</a></p><p><strong>Kiro is now generally available</strong><br />Last week, <a href="https://kiro.dev/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Kiro</a>, the first AI coding tool built around spec-driven development, became <a href="https://kiro.dev/blog/general-availability/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">generally available</a>. This tool, which we pioneered to bring more clarity and structure to agentic workflows, has already been embraced by over 250,000 developers since its preview release. 
The GA launch introduces four new capabilities: <a href="https://kiro.dev/blog/property-based-testing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">property-based testing for spec correctness</a> (which measures whether your code matches what you specified); <a href="https://kiro.dev/blog/introducing-checkpointing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">a new way to checkpoint your progress on Kiro</a>; <a href="https://kiro.dev/blog/introducing-kiro-cli/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">a new Kiro CLI bringing agents to your terminal</a>; and <a href="https://kiro.dev/enterprise/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">enterprise team plans</a> with centralized management.</p><p><strong class="c6">Last week’s launches</strong><br />We’ve announced numerous new feature and service launches as we approach re:Invent week. Key launches include:</p><ul><li><strong>Amazon EKS</strong> announces a new <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-eks-provisioned-control-plane/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Provisioned Control Plane</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-eks-ecs-fully-managed-mcp-servers-preview?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">fully managed MCP servers</a> (preview), and <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-ecs-eks-ai-powered-troubleshooting-console/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">enhanced AI-powered troubleshooting</a> in the console with <strong>Amazon ECS</strong>.</li>
<li><strong>Amazon ECR</strong> introduces <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-ecr-managed-container-image-signing?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">managed container image signing</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-ecr-archive-storage-class-container-images/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">archive storage class for rarely accessed container images</a>, and <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-ecr-privatelink-fips-endpoints/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS PrivateLink for FIPS Endpoints</a>.</li>
<li><strong>Amazon Aurora DSQL</strong> provides an <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-aurora-dsql-integrated-query-editor/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">integrated query editor</a> in the console, <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aurora-dsql-statement-level-cost-estimates-query-plans/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">statement-level cost estimates</a> in query plans, <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aurora-dsql-python-node-js-jdbc-connectors-iam?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">new Python, Node.js, and JDBC Connectors</a>, and up to <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-aurora-dsql-database-clusters-up-to-256-tib?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">256 TiB of storage volume</a>.</li>
<li><strong>Amazon API Gateway</strong> supports <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/api-gateway-response-streaming-rest-apis/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">response streaming for REST APIs</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/api-gateway-developer-portal-capabilities/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">developer portal capabilities</a>, and <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-api-gateway-tls-security-rest-apis/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">additional TLS security policies for REST APIs.</a></li>
<li><strong>Amazon Connect</strong> provides <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-connect-conversational-analytics/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">conversational analytics for voice</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-connect-persistent-agent-connections?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">persistent agent connections for faster call handling</a>, and <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-connect-multi-skill-agent-scheduling?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">multi skill agent scheduling</a>.</li>
<li><strong>Amazon CloudWatch</strong> introduces <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-cloudwatch-scheduled-queries?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">scheduled queries in Logs Insights</a> and <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/cloudwatch-in-console-agent-management-ec2/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">in-console agent management on EC2</a>.</li>
<li><strong>AWS CloudFormation</strong> StackSets offers <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-cloudformation-stacksets-deployment-ordering/">deployment ordering for auto-deployment mode</a>. You can define the sequence in which your stack instances automatically deploy across accounts and Regions.</li>
<li><strong>AWS NAT Gateway</strong> supports <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-nat-gateway-regional-availability/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Regional availability</a> to create a single NAT Gateway that automatically expands and contracts across availability zones (AZs).</li>
<li><strong>Amazon Bedrock</strong> supports <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/bedrock-model-import-openai-gpt-oss-models/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">OpenAI GPT OSS models for Custom Model Import</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-bedrock-guardrails-coding-use-cases?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">coding use cases for Guardrails</a>, and <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-bedrock-data-automation-10-languages/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">10 additional languages for speech analytics</a> for Data Automation.</li>
<li><strong>Amazon OpenSearch</strong> supports <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-opensearch-service-cluster-insights/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Cluster Insights for improved operational visibility</a>, plus <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-opensearch-serverless-backup-and-restore-console?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">backup and restore through the console</a> and <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-opensearch-serverless-auditlogs-dataplane-apis?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">audit logs for data plane APIs</a> in OpenSearch Serverless.</li>
</ul><p>See <a href="https://aws.amazon.com/new/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS What’s New</a> for more launch news that I haven’t covered here, and we’ll see you next week at re:Invent!</p><p>– <a href="https://twitter.com/channyun">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="b79188ae-b3bf-4390-ab89-b2d9bdd8e448" data-title="AWS Weekly Roundup: How to join AWS re:Invent 2025, plus Kiro GA, and lots of launches (Nov 24, 2025)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-how-to-join-aws-reinvent-2025-plus-kiro-ga-and-lots-of-launches-nov-24-2025/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-how-to-join-aws-reinvent-2025-plus-kiro-ga-and-lots-of-launches-nov-24-2025/"/>
    <updated>2025-11-24T20:58:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/new-one-click-onboarding-and-notebooks-with-ai-agent-in-amazon-sagemaker-unified-studio/</id>
    <title><![CDATA[New one-click onboarding and notebooks with a built-in AI agent in Amazon SageMaker Unified Studio]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today we’re announcing a faster way to get started with your existing AWS datasets in <a href="https://aws.amazon.com/sagemaker/unified-studio/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker Unified Studio</a>. You can now start working with any data you have access to in a new serverless notebook with a built-in AI agent, using your existing <a href="https://aws.amazon.com/iam/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Identity and Access Management (IAM)</a> roles and permissions.</p><p><img class="aligncenter wp-image-101420 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/2025-sagemaker-unified-studio-3-new-IDE-1.jpg" alt="" width="2384" height="1446" /></p><p class="jss223" data-pm-slice="1 1 []">New updates include:</p><ul><li><strong>One-click onboarding</strong> – Amazon SageMaker can now automatically create a project in Unified Studio with all your existing data permissions from <a href="https://aws.amazon.com/glue/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Glue Data Catalog</a>, <a href="https://aws.amazon.com/lake-formation/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Lake Formation</a>, and <a href="https://aws.amazon.com/s3/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a>.</li>
<li><strong>Direct integration</strong> – You can launch SageMaker Unified Studio directly from <a href="https://aws.amazon.com/sagemaker/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker</a>, <a href="https://aws.amazon.com/athena/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Athena</a>, <a href="https://aws.amazon.com/redshift/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Redshift</a>, and <a href="https://aws.amazon.com/s3/features/tables/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon S3 Tables</a> console pages, giving a fast path to analytics and AI workloads.</li>
<li><strong>Notebooks with a built-in AI agent</strong> – You can use a new serverless notebook with a built-in AI agent, which supports SQL, Python, Spark, or natural language and gives data engineers, analysts, and data scientists one place to develop and run both SQL queries and code.</li>
</ul><p>You also have access to other tools such as a <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/getting-started-querying.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Query Editor</a> for SQL analysis, <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/jupyterlab.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">JupyterLab</a> integrated developer environment (IDE), <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/visual-etl.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Visual ETL and workflows</a>, and <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/sagemaker.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">machine learning (ML) capabilities</a>.</p><p><strong class="c6">Try one-click onboarding and connect to Amazon SageMaker Unified Studio</strong><br />To get started, go to the <a href="https://console.aws.amazon.com/datazone/home?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">SageMaker console</a> and choose the <strong>Get started</strong> button.</p><p><img class="aligncenter size-full wp-image-100716 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/2025-sagemaker-unified-studio-1-get-started.jpg" alt="" width="2446" height="800" /></p><p>You will be prompted either to select an existing <a href="https://aws.amazon.com/iam/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Identity and Access Management (AWS IAM)</a> role that has access to your data and compute, or to create a new role.</p><p><img class="aligncenter wp-image-101624 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/21/2025-sagemaker-unified-studio-2-quick-setup.png" alt="" width="1833" height="1361" /></p><p>Choose <strong>Set up</strong>. 
It takes a few minutes to complete your environment. After this role is granted access, you’ll be taken to the SageMaker Unified Studio landing page where you will see the datasets that you have access to in AWS Glue Data Catalog as well as a variety of analytics and AI tools to work with.</p><p>This environment automatically creates the following serverless compute: Amazon Athena Spark, Amazon Athena SQL, AWS Glue Spark, and <a href="https://aws.amazon.com/managed-workflows-for-apache-airflow/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Managed Workflows for Apache Airflow (MWAA)</a> serverless. This means you completely skip provisioning and can start working immediately with just-in-time compute resources, and it automatically scales back down when you finish, helping to save on costs.</p><p>You can also get started working on specific tables in Amazon Athena, Amazon Redshift, and Amazon S3 Tables. For example, you can select <strong>Query your data in Amazon SageMaker Unified Studio</strong> and then choose <strong>Get started</strong> in Amazon Athena console.</p><p><img class="aligncenter size-full wp-image-100720 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/11/2025-sagemaker-unified-studio-integration-athena.png" alt="" width="2256" height="788" /></p><p>If you start from these consoles, you’ll connect directly to the Query Editor with the data that you were looking at already accessible, and your previous query context preserved. By using this context-aware routing, you can run queries immediately once inside the SageMaker Unified Studio without unnecessary navigation.</p><p><strong class="c6">Getting started with notebooks with a built-in AI agent</strong><br />Amazon SageMaker is introducing a new notebook experience that provides data and AI teams with a high-performance, serverless programming environment for analytics and ML jobs. 
The new notebook experience includes Amazon SageMaker Data Agent, a built-in AI agent that accelerates development by generating code and SQL statements from natural language prompts while guiding users through their tasks.</p><p>To start a new notebook, choose the <strong>Notebooks</strong> menu in the left navigation pane to run SQL queries, Python code, and natural language, and to discover, transform, analyze, visualize, and share insights on data. You can get started with sample data such as customer analytics and retail sales forecasting.</p><p><img class="aligncenter wp-image-101421 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/2025-sagemaker-unified-studio-4-new-notebooks-1.png" alt="" width="2394" height="1460" /></p><p>When you choose a sample project for customer usage analysis, you can open sample notebook to explore customer usage patterns and behaviors in a telecom dataset.</p><p><img class="aligncenter wp-image-101515 size-full c9" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/2025-sagemaker-unified-studio-5-notebook-data-1.png" alt="" width="2003" height="1373" /></p><p>As I noted, the notebook includes a built-in AI agent that helps you interact with your data through natural language prompts. For example, you can start with data discovery using prompts like:</p><p><code>Show me some insights and visualizations on the customer churn dataset.</code></p><p><img class="aligncenter wp-image-101516 size-full c9" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/2025-sagemaker-unified-studio-5-notebook-genai-2.jpg" alt="" width="1885" height="1442" /></p><p>After you identify relevant tables, you can request specific analysis to generate Spark SQL. The AI agent creates step-by-step plans with initial code for data transformations and Python code for visualizations. 
If you see an error message while running the generated code, choose <strong>Fix with AI</strong> to get help resolving it. Here is a sample result:</p><p><img class="aligncenter wp-image-101517 size-full c9" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/2025-sagemaker-unified-studio-5-notebook-genai-visual-1.jpg" alt="" width="1887" height="1450" /></p><p>For ML workflows, use specific prompts like:</p><p><code>Build an XGBoost classification model for churn prediction using the churn table, with purchase frequency, average transaction value, and days since last purchase as features.</code></p><p><img class="aligncenter wp-image-101518 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/20/2025-sagemaker-unified-studio-6-notebook-ml-prompt-3.png" alt="" width="2404" height="1439" /></p><p>This prompt receives structured responses including a step-by-step plan, data loading, feature engineering, and model training code using the SageMaker AI capabilities, and evaluation metrics. SageMaker Data Agent works best with specific prompts and is optimized for AWS data processing services including Athena for Apache Spark and SageMaker AI.</p><p>To learn more about the new notebook experience, visit the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/notebooks.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker Unified Studio User Guide</a>.</p><p><strong class="c6">Now available</strong><br />One-click onboarding and the new notebook experience in Amazon SageMaker Unified Studio are now available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland) Regions. 
To learn more, visit the <a href="https://aws.amazon.com/sagemaker/unified-studio/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">SageMaker Unified Studio product page</a>.</p><p>Give it a try in the <a href="https://console.aws.amazon.com/datazone/home?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">SageMaker console</a> and send feedback to <a href="https://repost.aws/tags/TAdXqriMJIT6CL4ervYlUgow/amazon-sagemaker-unified-studio?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for SageMaker Unified Studio</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy">Channy</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/new-one-click-onboarding-and-notebooks-with-ai-agent-in-amazon-sagemaker-unified-studio/"/>
    <updated>2025-11-22T02:23:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/build-production-ready-applications-without-infrastructure-complexity-using-amazon-ecs-express-mode/</id>
    <title><![CDATA[Build production-ready applications without infrastructure complexity using Amazon ECS Express Mode]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Deploying containerized applications to production requires navigating hundreds of configuration parameters across load balancers, auto scaling policies, networking, and security groups. This overhead delays time to market and diverts focus from core application development.</p><p>Today, I’m excited to announce Amazon ECS Express Mode, a new capability from <a href="https://aws.amazon.com/ecs/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Elastic Container Service (Amazon ECS)</a> that helps you launch highly available, scalable containerized applications with a single command. ECS Express Mode automates infrastructure setup including domains, networking, load balancing, and auto scaling through simplified APIs. This means you can focus on building applications while deploying with confidence using <a href="https://aws.amazon.com/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Web Services (AWS)</a> best practices. Furthermore, when your applications evolve and require advanced features, you can seamlessly configure and access the full capabilities of the resources, including Amazon ECS.</p><p>You can get started with Amazon ECS Express Mode by navigating to the <a href="https://console.aws.amazon.com/ecs/">Amazon ECS console</a>.</p><p><img class="aligncenter size-full wp-image-100534" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-01.png" alt="" width="1920" height="910" /></p><p>Amazon ECS Express Mode provides a simplified interface to the Amazon ECS service resource with new integrations for creating commonly used resources across AWS. 
ECS Express Mode automatically provisions and configures ECS clusters, task definitions, Application Load Balancers, auto scaling policies, and Amazon Route 53 domains from a single entry point.</p><p><strong>Getting started with ECS Express Mode<br /></strong> Let me walk you through how to use Amazon ECS Express Mode. I’ll focus on the console experience, which provides the quickest way to deploy your containerized application.</p><p>For this example, I’m using a simple container image application running on Python with the Flask framework. Here’s the <code>Dockerfile</code> of my demo, which I have pushed to an <a href="https://aws.amazon.com/ecr/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Elastic Container Registry (Amazon ECR)</a> repository:</p><pre class="language-dockerfile">
# Build stage
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt gunicorn
# Runtime stage
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY app.py .
ENV PATH=/root/.local/bin:$PATH
EXPOSE 80
CMD ["gunicorn", "--bind", "0.0.0.0:80", "app:app"]
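# Note (editor's sketch): the multi-stage build keeps the runtime image small --
# only the packages installed under /root/.local are copied out of the builder
# stage, and the PATH update makes the user-installed gunicorn binary
# resolvable at runtime. requirements.txt isn't shown in the post; it's assumed
# to list at least "flask" for the demo app.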
</pre><p>On the Express Mode page, I choose <strong>Create</strong>. The interface is streamlined — I specify my container image URI from Amazon ECR, then select my task execution role and infrastructure role. If you don’t already have these roles, choose <strong>Create new role</strong> in the drop down to have one created for you from the <a href="https://aws.amazon.com/iam/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Identity and Access Management (IAM)</a> managed policy.</p><p><img class="aligncenter size-full wp-image-100535" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-02.png" alt="" width="1920" height="871" /></p><p>If I want to customize the deployment, I can expand the <strong>Additional configurations</strong> section to define my cluster, container port, health check path, or environment variables.</p><p><img class="aligncenter size-full wp-image-100536" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-03.png" alt="" width="1060" height="945" /></p><p>In this section, I can also adjust CPU, memory, or scaling policies.</p><p><img class="aligncenter size-full wp-image-100537" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-04.png" alt="" width="1060" height="608" /></p><p>Setting up logs in <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html">Amazon CloudWatch Logs</a> is something I always configure so I can troubleshoot my applications if needed. 
When I’m happy with the configurations, I choose <strong>Create</strong>.</p><p><img class="aligncenter size-full wp-image-100538" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-04-1.png" alt="" width="1060" height="591" /></p><p>After I choose <strong>Create</strong>, Express Mode automatically provisions a complete application stack, including an Amazon ECS service with <a href="https://aws.amazon.com/fargate/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Fargate</a> tasks, Application Load Balancer with health checks, auto scaling policies based on CPU utilization, security groups and networking configuration, and a custom domain with an AWS provided URL. I can also follow the progress in <strong>Timeline view</strong> on the <strong>Resources</strong> tab.</p><p><img class="aligncenter size-full wp-image-100540" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-05.png" alt="" width="1264" height="1643" /></p><p>If I need to do a programmatic deployment, the same result can be achieved with a single <a href="https://aws.amazon.com/cli/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a> command:</p><pre class="lang-bash">aws ecs create-express-gateway-service \
--image [ACCOUNT_ID].dkr.ecr.us-west-2.amazonaws.com/myapp:latest \
--execution-role-arn arn:aws:iam::[ACCOUNT_ID]:role/[IAM_ROLE] \
--infrastructure-role-arn arn:aws:iam::[ACCOUNT_ID]:role/[IAM_ROLE]</pre><p>After it’s complete, I can see my application URL in the console and access my running application immediately.</p><p><img class="aligncenter size-full wp-image-100539" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-07.png" alt="" width="1123" height="493" /></p><p>After the application is created, I can see the details by visiting the specified cluster, or the default cluster if I didn’t specify one, in the ECS service to monitor performance, view logs, and manage the deployment.</p><p><img class="aligncenter size-full wp-image-100544" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-09-1.png" alt="" width="3024" height="1584" /></p><p>When I need to update my application with a new container version, I can return to the console, select my Express service, and choose <strong>Update</strong>. I can use the interface to specify a new image URI or adjust resource allocations.</p><p><img class="aligncenter size-full wp-image-100545" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/news-2025-11-ecs-express-08.png" alt="" width="2932" height="1392" /></p><p>Alternatively, I can use the AWS CLI for updates:</p><pre class="language-bash">aws ecs update-express-gateway-service \
  --service-arn arn:aws:ecs:us-west-2:[ACCOUNT_ID]:service/[CLUSTER_NAME]/[APP_NAME] \
  --primary-container '{
    "image": "[IMAGE_URI]"
  }'
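  # The service ARN above follows the standard ECS service ARN format,
  #   arn:aws:ecs:<region>:<account-id>:service/<cluster>/<service-name>;
  # replace the bracketed placeholders with values from your own account.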
</pre><p>I find the entire experience reduces setup complexity while still giving me access to all the underlying resources when I need more advanced configurations.</p><p><strong>Additional things to know<br /></strong> Here are additional things about ECS Express Mode:</p><ul><li><strong>Availability</strong> – ECS Express Mode is available in all <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a> at launch.</li>
<li><strong><a href="https://aws.amazon.com/what-is/iac/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Infrastructure as Code</a> support</strong> – You can use IaC tools such as <a href="https://aws.amazon.com/cloudformation/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS CloudFormation</a>, <a href="https://aws.amazon.com/cdk/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Cloud Development Kit (CDK)</a>, or Terraform to deploy your applications using Amazon ECS Express Mode.</li>
<li><strong>Pricing</strong> – There is no additional charge to use Amazon ECS Express Mode. You pay for AWS resources created to launch and run your application.</li>
<li><strong>Application Load Balancer sharing</strong> – The ALB created is automatically shared across up to 25 ECS services using host-header-based listener rules, which spreads the cost of a single ALB across those services.</li>
</ul><p>Get started with Amazon ECS Express Mode through the Amazon ECS console. Learn more on the <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/express-service-overview.html">Amazon ECS documentation</a> page.</p><p>Happy building!<br />— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/build-production-ready-applications-without-infrastructure-complexity-using-amazon-ecs-express-mode/"/>
    <updated>2025-11-21T22:34:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-vpc-encryption-controls-enforce-encryption-in-transit-within-and-across-vpcs-in-a-region/</id>
    <title><![CDATA[Introducing VPC encryption controls: Enforce encryption in transit within and across VPCs in a Region]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing virtual private cloud (VPC) encryption controls, a new capability of <a href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud (Amazon VPC)</a> that helps you audit and enforce encryption in transit for all traffic within and across VPCs in a Region.</p><p>Organizations across financial services, healthcare, government, and retail face significant operational complexity in maintaining encryption compliance across their cloud infrastructure. Traditional approaches require piecing together multiple solutions and managing complex public key infrastructure (PKI), while manually tracking encryption across different network paths using spreadsheets—a process prone to human error that becomes increasingly challenging as infrastructure scales.</p><p>Although <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro</a>-based instances automatically encrypt traffic at the hardware layer without affecting performance, organizations need simple mechanisms to extend these capabilities across their entire VPC infrastructure. This is particularly important for demonstrating compliance with regulatory frameworks such as the Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry Data Security Standard (PCI DSS), and Federal Risk and Authorization Management Program (FedRAMP), which require proof of end-to-end encryption across environments. Organizations need centralized visibility and control over their encryption status, without having to manage performance trade-offs or complex key management systems.</p><p>VPC encryption controls address these challenges by providing two operational modes: monitor and enforce. In monitor mode, you can audit the encryption status of your traffic flows and identify resources that allow plaintext traffic. 
The feature adds a new encryption-status field to VPC flow logs, giving you visibility into whether traffic is encrypted using Nitro hardware encryption, application-layer encryption (TLS), or both.</p><p>After you’ve identified resources that need modification, you can take steps to implement encryption. AWS services, such as <a href="https://aws.amazon.com/elasticloadbalancing/network-load-balancer/">Network Load Balancer</a>, <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html">Application Load Balancer</a>, and <a href="https://aws.amazon.com/fargate/">AWS Fargate</a> tasks, will automatically and transparently migrate your underlying infrastructure to Nitro hardware without any action required from you and with no service interruption. For other resources, such as the previous generation of <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> instances, you will need to switch to <a href="https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html">modern Nitro based</a> <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/data-protection.html#encryption-transit">instance types</a> or configure TLS encryption at application level.</p><p>You can switch to enforce mode after all resources have been migrated to encryption-compliant infrastructure. This migration to encryption-compliant hardware and communication protocols is a prerequisite for enabling enforce mode. 
You can configure specific exclusions for resources such as <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html">internet gateways</a> or <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html">NAT gateways</a> that don’t support encryption (because the traffic flows outside of your VPC or the AWS network).</p><p>Other <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-encryption-controls.html">resources</a> must be encryption-compliant and can’t be excluded. After activation, enforce mode ensures that all future resources are only created on compatible Nitro instances, and unencrypted traffic is dropped when incorrect protocols or ports are detected.</p><p><strong>Let me show you how to get started</strong></p><p>For this demo, I started three EC2 instances. I use one as a web server with Nginx installed on port 80, serving a clear text HTML page. The other two are continuously making HTTP GET requests to the server. This generates clear text traffic in my VPC. I use the <code>m7g.medium</code> instance type for the web server and one of the two clients. This instance type uses the underlying Nitro System hardware to automatically encrypt in-transit traffic between instances. I use a <code>t4g.medium</code> instance for the other web client. The network traffic of that instance is not encrypted at the hardware level.</p><p>To get started, I enable encryption controls in monitor mode. In the <a href="https://console.aws.amazon.com">AWS Management Console</a>, I select <strong>Your VPCs</strong> in the left navigation pane, then I switch to the <strong>VPC encryption controls</strong> tab. I choose <strong>Create encryption control</strong> and select the VPC I want to create the control for.</p><p>Each VPC can have only one VPC encryption control associated with it, creating a one-to-one relationship between the VPC ID and the VPC encryption control ID. 
When creating VPC encryption controls, you can add tags to help with resource organization and management. You can also activate VPC encryption control when you create a new VPC.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/17/2025-10-17_14-40-56.png"><img class="aligncenter size-large wp-image-99929" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/17/2025-10-17_14-40-56-1024x419.png" alt="VPC Encryption Control - create EC 1" width="1024" height="419" /></a></p><p>I enter a <strong>Name</strong> for this control. I select the <strong>VPC</strong> I want to control. For existing VPCs, I have to start in <strong>Monitor mode,</strong> and I can turn on <strong>Enforce mode</strong> when I’m sure there is no unencrypted traffic. For new VPCs, I can enforce encryption at the time of creation.</p><p>Optionally, I can define tags when creating encryption controls for an existing VPC. However, when enabling encryption controls during VPC creation, separate tags can’t be created for VPC encryption controls—because they automatically inherit the same tags as the VPC. When I’m ready, I choose <strong>Create encryption control.</strong></p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/17/2025-10-17_14-41-16.png"><img class="aligncenter size-large wp-image-99930" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/17/2025-10-17_14-41-16-1024x625.png" alt="VPC Encryption Control - create EC 2" width="1024" height="625" /></a>Alternatively, I can use the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>:</p><pre class="lang-bash">aws ec2 create-vpc-encryption-control --vpc-id vpc-123456789</pre><p>Next, I audit the encryption status of my VPC using the console, command line, or flow logs:</p><pre class="lang-bash">aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-123456789 \
  --traffic-type ALL \
  --log-destination-type s3 \
  --log-destination arn:aws:s3:::vpc-flow-logs-012345678901/vpc-flow-logs/ \
  --log-format '${flow-direction} ${traffic-path} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${encryption-status}'
{
    "ClientToken": "F7xmLqTHgt9krTcFMBHrwHmAZHByyDXmA1J94PsxWiU=",
    "FlowLogIds": [
        "fl-0667848f2d19786ca"
    ],
    "Unsuccessful": []
}</pre><p>After a few minutes, I see this traffic in my logs:</p><pre class="lang-text">flow-direction traffic-path srcaddr dstaddr srcport dstport encryption-status
ingress - 10.0.133.8 10.0.128.55 43236 80 1 # &lt;-- HTTP between web client and server. Encrypted at hardware-level
egress 1 10.0.128.55 10.0.133.8 80 43236 1
ingress - 10.0.133.8 10.0.128.55 36902 80 1
egress 1 10.0.128.55 10.0.133.8 80 36902 1
ingress - 10.0.130.104 10.0.128.55 55016 80 0 # &lt;-- HTTP between web client and server. Not encrypted at hardware-level
egress 1 10.0.128.55 10.0.130.104 80 55016 0
ingress - 10.0.130.104 10.0.128.55 60276 80 0
egress 1 10.0.128.55 10.0.130.104 80 60276 0</pre><ul><li><code>10.0.128.55</code> is the web server with hardware-encrypted traffic, serving clear text traffic at application level.</li>
<li><code>10.0.133.8</code> is the web client with hardware-encrypted traffic.</li>
<li><code>10.0.130.104</code> is the web client with no encryption at the hardware level.</li>
</ul><p>The <code>encryption-status</code> field tells me the status of the encryption for the traffic between the source and destination address:</p><ul><li>0 means the traffic is in clear text</li>
<li>1 means the traffic is encrypted at the network layer (Layer 3) by the Nitro system</li>
<li>2 means the traffic is encrypted at the application layer (Layer 7, TCP port 443 with TLS/SSL)</li>
<li>3 means the traffic is encrypted both at the application layer (TLS) and the network layer (Nitro)</li>
<li>“-” means VPC encryption controls are not enabled, or VPC Flow Logs don’t have the status information.</li>
</ul><p>The traffic originating from the web client on the instance that isn’t Nitro-based (<code>10.0.130.104</code>) is flagged as <code>0</code>. The traffic initiated from the web client on the Nitro-based instance (<code>10.0.133.8</code>) is flagged as <code>1</code>.</p><p>I also use the console to identify resources that need modification. It reports two nonencrypted resources: the internet gateway and the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html">elastic network interface (ENI)</a> of the instance that isn’t based on Nitro.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/07/2025-11-07_21-53-27.png"><img class="aligncenter size-large wp-image-100517" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/07/2025-11-07_21-53-27-1024x327.png" alt="VPC Encryption Control - list of exclusions" width="1024" height="327" /></a>I can also check for nonencrypted resources using the CLI:</p><pre class="lang-bash">aws ec2 get-vpc-resources-blocking-encryption-enforcement --vpc-id vpc-123456789</pre><p>After updating my resources to support encryption, I can use the console or the CLI to switch to enforce mode.</p><p>In the console, I select the VPC encryption control. 
Then, I select <strong>Actions</strong> and <strong>Switch mode</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/07/2025-11-07_22-01-13.png"><img class="aligncenter size-large wp-image-100518" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/07/2025-11-07_22-01-13-1024x481.png" alt="VPC Encryption Control - switch mode" width="1024" height="481" /></a>Or the equivalent CLI:</p><pre class="lang-bash">aws ec2 modify-vpc-encryption-control --vpc-id vpc-123456789 --mode enforce</pre><p><strong>How to modify the resources that are identified as nonencrypted?</strong></p><p>All your VPC resources must support traffic encryption, either at the hardware layer or at the application layer. For most resources, you don’t need to take any action.</p><p>AWS services accessed through <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html">AWS PrivateLink</a> and <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html">gateway endpoints</a> automatically enforce encryption at the application layer. These services only accept TLS-encrypted traffic. AWS will automatically drop any traffic that isn’t encrypted at the application layer.</p><p>When you enable monitor mode, we automatically and gradually migrate your Network Load Balancers, Application Load Balancers, <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html">AWS Fargate</a> clusters, and <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service (Amazon EKS)</a> clusters to hardware that inherently supports encryption. This migration happens transparently without any action required from you.</p><p>Some VPC resources require you to select the underlying instances that support modern Nitro hardware-layer encryption. 
These include EC2 Instances, <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html">Auto Scaling groups,</a> <a href="https://aws.amazon.com/rds/">Amazon Relational Database Service (Amazon RDS)</a> databases (including <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/what-is.html">Amazon DocumentDB</a>), <a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/designing-elasticache-cluster.html">Amazon ElastiCache node-based clusters</a>, <a href="https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html">Amazon Redshift provisioned clusters</a>, <a href="https://docs.aws.amazon.com/eks/latest/userguide/clusters.html">EKS clusters</a>, <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch-type-ec2.html">ECS with EC2 capacity</a>, <a href="https://docs.aws.amazon.com/msk/latest/developerguide/msk-provisioned.html">MSK Provisioned</a>, <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-comparison.html">Amazon OpenSearch Service</a>, and <a href="https://aws.amazon.com/emr">Amazon EMR</a>. To migrate your Redshift clusters, you must create a new cluster or namespace from a snapshot.</p><p>If you use newer-generation instances, you likely already have encryption-compliant infrastructure because all recent instance types support encryption. 
For older-generation instances that don’t support encryption in transit, you’ll need to upgrade to supported instance types.</p><p><strong>Something to know when using AWS Transit Gateway</strong></p><p>When creating a <a href="https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html">Transit Gateway</a> through <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a> with VPC encryption enabled, you need two additional <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> permissions: <code>ec2:ModifyTransitGateway</code> and <code>ec2:ModifyTransitGatewayOptions</code>. These permissions are required because CloudFormation uses a two-step process to create a Transit Gateway. It first creates the Transit Gateway with basic configuration, then calls <code>ModifyTransitGateway</code> to enable encryption support. Without these permissions, your CloudFormation stack will fail during creation when attempting to apply the encryption configuration, even if you’re only performing what appears to be a create operation.</p><p><strong>Pricing and availability</strong></p><p>You can start using VPC encryption controls today in these AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Melbourne, Mumbai, Osaka, Singapore, Sydney, Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), Middle East (Bahrain, UAE), and South America (São Paulo).</p><p>VPC encryption controls is available at no cost until March 1, 2026. The <a href="https://aws.amazon.com/vpc/pricing/">VPC pricing page</a> will be updated with details as we get closer to that date.</p><p>To learn more, visit the <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-encryption-controls.html">VPC encryption controls documentation</a> or try it out in your AWS account. 
I look forward to hearing how you use this feature to strengthen your security posture and help you meet compliance standards.</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="51113296-044a-49ae-875d-2c8b545949d0" data-title="Introducing VPC encryption controls: Enforce encryption in transit within and across VPCs in a Region" data-url="https://aws.amazon.com/blogs/aws/introducing-vpc-encryption-controls-enforce-encryption-in-transit-within-and-across-vpcs-in-a-region/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-vpc-encryption-controls-enforce-encryption-in-transit-within-and-across-vpcs-in-a-region/"/>
    <updated>2025-11-21T17:23:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-attribute-based-access-control-for-amazon-s3-general-purpose-buckets/</id>
    <title><![CDATA[Introducing attribute-based access control for Amazon S3 general purpose buckets]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>As organizations scale, managing access permissions for storage resources becomes increasingly complex and time-consuming. As new team members join, existing staff changes roles, and new S3 buckets are created, organizations must constantly update multiple types of access policies to govern access across their S3 buckets. This challenge is especially pronounced in multi-tenant S3 environments where administrators must frequently update these policies to control access across shared datasets and numerous users.</p><p>Today we’re introducing <a href="https://aws.amazon.com/identity/attribute-based-access-control/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">attribute-based access control (ABAC)</a> for <a href="https://aws.amazon.com/s3/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Simple Storage Service (S3)</a> general purpose buckets, a new capability you can use to automatically manage permissions for users and roles by controlling data access through tags on S3 general purpose buckets. Instead of managing permissions individually, you can use tag-based IAM or bucket policies to automatically grant or deny access based on tags between users, roles, and S3 general purpose buckets. 
Tag-based authorization makes it easy to grant S3 access based on project, team, cost center, data classification, or other bucket attributes instead of bucket names, dramatically simplifying permissions management for large organizations.</p><p><strong>How ABAC works<br /></strong> Here’s a common scenario: as an administrator, I want to give developers access to all S3 buckets meant to be used in development environments.</p><p>With ABAC, I can tag my development environment S3 buckets with a key-value pair such as <code>environment:development</code> and then attach an ABAC policy to an <a href="https://aws.amazon.com/iam?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Identity and Access Management (IAM)</a> principal that checks for the same <code>environment:development</code> tag. If the bucket tag matches the condition in the policy, the principal is granted access.</p><p>Let’s see how this works.</p><p><strong>Getting started</strong><br />First, I need to explicitly enable ABAC on each S3 general purpose bucket where I want to use tag-based authorization.</p><p>I navigate to the <a href="https://console.aws.amazon.com/s3?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon S3 console</a>, select my general purpose bucket then navigate to <strong>Properties</strong> where I can find the option to enable ABAC for this bucket.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/image-21-3.png"><img class="aligncenter size-full wp-image-101400" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/image-21-3.png" alt="" width="3000" height="1036" /></a></p><p>I can also use the <a href="https://aws.amazon.com/cli?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a> to enable it programmatically by using the new PutBucketAbac API. 
Here I am enabling ABAC on a bucket called my-demo-development-bucket located in the US East (Ohio) us-east-2 <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Region</a>.</p><pre class="lang-bash">aws s3api put-bucket-abac --bucket my-demo-development-bucket --abac-status Status=Enabled --region us-east-2</pre><p>Alternatively, if you use <a href="https://aws.amazon.com/cloudformation/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS CloudFormation</a>, you can enable ABAC by setting the <code>AbacStatus</code> property to <code>Enabled</code> in your template.</p><p>Next, let’s tag our S3 general purpose bucket. I add an <code>environment:development</code> tag, which will become the criterion for my tag-based authorization.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/adding-user-defined-tags.png"><img class="aligncenter size-full wp-image-101406" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/adding-user-defined-tags.png" alt="" width="2974" height="1042" /></a></p><p>Now that my S3 bucket is tagged, I’ll create an ABAC policy that verifies matching <code>environment:development</code> tags and attach it to an IAM role called dev-env-role. By managing developer access to this role, I can control permissions to all development environment buckets in a <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html">single place</a>.</p><p>I navigate to the <a href="https://console.aws.amazon.com/iam?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">IAM console</a>, choose <strong>Policies</strong>, and then <strong>Create policy. 
</strong>In the <strong>Policy editor</strong>, I switch to JSON view and create a policy that allows users to read, write, and list S3 objects, but only for buckets that carry a tag with the key “environment” and the value “development”, matching the tag I declared on my S3 bucket. I name this policy s3-abac-policy and save it.</p><pre class="lang-json">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/environment": "development"
                }
            }
        }
    ]
}</pre><p>I then attach this s3-abac-policy to the dev-env-role.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/15/adding-abac-policy-to-iam-role.png"><img class="aligncenter size-full wp-image-99883" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/15/adding-abac-policy-to-iam-role.png" alt="" width="1480" height="461" /></a></p><p>That’s it! Now a user assuming the dev-env-role can access any ABAC-enabled bucket with the tag environment:development, such as my-demo-development-bucket.</p><p><strong>Using your existing tags</strong><br />Keep in mind that although you can use your existing tags for ABAC, these tags will now be used for access control, so we recommend reviewing your current tag setup before enabling the feature. This includes reviewing your existing bucket tags and tag-based policies to prevent unintended access, and updating your tagging workflows to use the standard TagResource API (since enabling ABAC on your buckets will block the use of the PutBucketTagging API). You can use <a href="https://aws.amazon.com/config/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Config</a> to check which buckets have ABAC enabled and review your usage of the PutBucketTagging API in your application using <a href="https://aws.amazon.com/cloudtrail/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS CloudTrail</a> management events.</p><p>Additionally, the same tags you use for ABAC can also serve as cost allocation tags for your S3 buckets. 
Activate them as cost allocation tags in the <a href="https://aws.amazon.com/aws-cost-management/aws-billing/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Billing Console</a> or through APIs, and your <a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Cost Explorer</a> and <a href="https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Cost and Usage Reports</a> will automatically organize spending data based on these tags.</p><p><strong>Enforcing tags on creation</strong><br />To help standardize access control across your organization, you can now enforce tagging requirements when buckets are created through service control policies (SCPs) or IAM policies using the <code>aws:TagKeys</code> and <code>aws:RequestTag</code> condition keys. Then you can enable ABAC on these buckets to provide consistent access control patterns across your organization. To tag a bucket during creation you can add the tags to your CloudFormation templates or provide them in the request body of your call to the existing S3 CreateBucket API. For example, I could enforce a policy for my developers to create buckets with the tag environment=development so all my buckets are tagged accurately for cost allocation. If I want to use the same tags for access control, I can then enable ABAC for these buckets.</p><p><strong class="c6">Things to know</strong></p><p>With ABAC for Amazon S3, you can now implement scalable, tag-based access control across your S3 buckets. This feature makes writing access control policies simpler, and reduces the need for policy updates as principals and resources come and go. 
This helps you reduce administrative overhead while maintaining strong security governance as you scale.</p><p>Attribute-based access control for Amazon S3 general purpose buckets is available now through the <a href="https://console.aws.amazon.com/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Management Console</a>, API, <a href="https://builder.aws.com/build/tools?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS SDKs</a>, AWS CLI, and AWS CloudFormation at no additional cost. Standard API request rates apply according to <a href="https://aws.amazon.com/s3/pricing?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon S3 pricing</a>. There’s no additional charge for tag storage on S3 resources.</p><p>You can use <a href="https://aws.amazon.com/cloudtrail/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS CloudTrail</a> to audit access requests and understand which policies granted or denied access to your resources.</p><p>You can also use ABAC with other S3 resources such as S3 directory bucket, S3 access points and S3 tables buckets and tables. To learn more about ABAC on S3 buckets see the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/buckets-tagging-enable-abac.html">Amazon S3 User Guide</a>.</p><p>You can use the same tags you use for access control for cost allocation as well. You can activate them as cost allocation tags through the AWS Billing Console or APIs. 
Check out the documentation for more details on <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">how to use cost allocation tags</a>.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="361b30a4-ccec-4596-b5bd-39e044148571" data-title="Introducing attribute-based access control for Amazon S3 general purpose buckets" data-url="https://aws.amazon.com/blogs/aws/introducing-attribute-based-access-control-for-amazon-s3-general-purpose-buckets/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-attribute-based-access-control-for-amazon-s3-general-purpose-buckets/"/>
    <updated>2025-11-21T02:02:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/simplify-access-to-external-services-using-aws-iam-outbound-identity-federation/</id>
    <title><![CDATA[Simplify access to external services using AWS IAM Outbound Identity Federation]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>When building applications that span multiple cloud providers or integrate with external services, developers face a persistent challenge: managing credentials securely. Traditional approaches require storing long-term credentials like API keys and passwords, creating security risks and operational overhead.</p><p>Today, we’re announcing a new capability called <a href="https://aws.amazon.com/iam/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Identity and Access Management (IAM)</a> outbound identity federation that customers can use to securely federate their <a href="https://aws.amazon.com/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Web Services (AWS)</a> identities to external services without storing long-term credentials. You can now use short-lived JSON Web Tokens (JWTs) to authenticate your AWS workloads with a wide range of third-party providers, software-as-a-service (SaaS) platforms and self-hosted applications.</p><p>This feature enables IAM principals—such as IAM roles and users—to obtain cryptographically signed JWTs that assert their AWS identity. External services, such as third-party providers, SaaS platforms, and on-premises applications, can verify the token’s authenticity by validating its signature. Upon successful verification, you can securely access the external service.</p><p><strong>How it works<br /></strong> With IAM outbound identity federation, you exchange your AWS IAM credentials for short-lived JWTs. This mitigates the security risks associated with long-term credentials while enabling consistent authentication patterns.</p><p>Let’s walk through a scenario where your application running on AWS needs to interact with an external service. 
To access the external service’s APIs or resources, your application calls the AWS Security Token Service (AWS STS) <code>GetWebIdentityToken</code> API to obtain a JWT.</p><p>The following diagram shows this flow:</p><p><img class="aligncenter size-full wp-image-101254" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/news-2025-iam-web-identity-3-4.png" alt="" width="780" height="530" /></p><ol><li>Your application running on AWS requests a token from AWS STS by calling the <code>GetWebIdentityToken</code> API. The application uses its existing AWS credentials obtained from the underlying platform (such as Amazon EC2 instance profiles, AWS Lambda execution roles, or other AWS compute services) to authenticate this API call.</li>
<li>AWS STS returns a cryptographically signed JSON Web Token (JWT) that asserts the identity of your application.</li>
<li>Your application sends the JWT to the external service for authentication.</li>
<li>The external service fetches the verification keys from the JSON Web Key Set (JWKS) endpoint to verify the token’s authenticity.</li>
<li>The external service validates the JWT’s signature using these verification keys and confirms the token is authentic and was issued by AWS.</li>
<li>After successful verification, the external service exchanges the JWT for its own credentials. Your application can then use these credentials to perform its intended operations.</li>
</ol><p><strong>Setting up AWS IAM outbound identity federation<br /></strong> To begin using this feature, I need to enable outbound identity federation for my AWS account. I navigate to <a href="https://console.aws.amazon.com/iam/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">IAM</a> and choose <strong>Account settings</strong> under <strong>Access management</strong> in the left-hand navigation pane.</p><p><img class="aligncenter size-full wp-image-100846" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/13/news-2025-iam-web-identity-2.png" alt="" width="3004" height="1259" /></p><p>After I enable the feature, AWS generates a unique issuer URL for my AWS account that hosts the OpenID Connect (OIDC) discovery endpoints at <code>/.well-known/openid-configuration</code> and <code>/.well-known/jwks.json</code>. The OpenID Connect (OIDC) discovery endpoints contain the keys and metadata necessary for token verification.</p><p><img class="aligncenter size-full wp-image-100960" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/14/news-2025-iam-web-identity-4-1.png" alt="" width="1365" height="612" /></p><p>Next, I need to configure IAM permissions. My IAM principal (role or user) must have the <code>sts:GetWebIdentityToken</code> permission to request tokens.</p><p>For example, the following identity policy specifies access to the STS <code>GetWebIdentityToken</code> API, enabling the IAM principal to generate tokens.</p><pre class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:GetWebIdentityToken",
      "Resource": "*"
    }
  ]
}
</pre><p>At this stage, I need to configure the external service to trust and accept tokens issued by my AWS account. The specific steps vary by service, but generally involve:</p><ol><li>Registering my AWS account issuer URL as a trusted identity provider</li>
<li>Configuring which claims to validate (audience, subject patterns)</li>
<li>Mapping token claims to permissions in the external service</li>
</ol><p><strong>Let’s get started<br /></strong> Now, let me walk you through an example showing both the client-side token generation and server-side verification process.</p><p>First, I call the STS <code>GetWebIdentityToken</code> API to obtain a JWT that asserts my AWS identity. When calling the API, I can specify the intended audience, signing algorithm, and token lifetime as request parameters.</p><ul><li><code>Audience</code>: Populates the <code>aud</code> claim in the JWT, identifying the intended recipient of the token (for example, “my-app”)</li>
<li><code>DurationSeconds</code>: The token lifetime in seconds, ranging from 60 seconds (1 minute) to 3600 seconds (1 hour), with a default of 600 seconds (10 minutes)</li>
<li><code>SigningAlgorithm</code>: Choose either ES384 (ECDSA using P-384 and SHA-384) or RS256 (RSA using SHA-256)</li>
<li><code>Tags</code> (optional): An array of key-value pairs that appear as custom claims in the token, which you can use to include additional context that enables external services to implement fine-grained access control</li>
</ul><p>Here’s an example of getting an identity token using the <a href="https://builder.aws.com/build/tools?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS SDK</a> for Python (Boto3). I can also do this using <a href="https://aws.amazon.com/cli/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a>.</p><pre class="language-python">
import boto3
sts_client = boto3.client('sts')
response = sts_client.get_web_identity_token(
    Audience=['my-app'],
    SigningAlgorithm='ES384',  # or 'RS256'
    DurationSeconds=300
)
jwt_token = response['IdentityToken']
print(jwt_token)
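# Optional (not part of the original example): inspect the token locally
# with the PyJWT library ("pip install pyjwt") instead of a web-based
# JWT debugger. get_unverified_header() reads the header without
# checking the signature, so use it for inspection only.
import jwt  # PyJWT

print(jwt.get_unverified_header(jwt_token))  # e.g. {'kid': 'EC384_0', 'typ': 'JWT', 'alg': 'ES384'}
print(jwt.decode(jwt_token, options={"verify_signature": False}))  # unverified claims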
</pre><p>This returns a signed JWT that I can inspect using any JWT parser.</p><pre class="language-bash">
eyJraWQiOiJFQzM4NF8wIiwidHlwIjoiSldUIiwiYWxnIjoiRVMzODQifQ.hey&lt;REDACTED FOR BREVITY&gt;...
</pre><p>I can decode the token using any JWT parser like this <a href="https://www.jwt.io/">JWT Debugger</a>. The token header shows it’s signed with ES384 (ECDSA).</p><pre class="language-json">
{
  "kid": "EC384_0",
  "typ": "JWT",
  "alg": "ES384"
}
</pre><p>Also, the payload contains standard OIDC claims plus AWS-specific metadata. The standard OIDC claims include subject (“sub”), audience (“aud”), issuer (“iss”), and others.</p><pre class="language-json">{
  "aud": "my-app",
  "sub": "arn:aws:iam::ACCOUNT_ID:role/MyAppRole",
  "https://sts.amazonaws.com/": {
    "aws_account": "ACCOUNT_ID",
    "source_region": "us-east-1",
    "principal_id": "arn:aws:iam::ACCOUNT_ID:role/MyAppRole"
  },
  "iss": "https://abc12345-def4-5678-90ab-cdef12345678.tokens.sts.global.api.aws",
  "exp": 1759786941,
  "iat": 1759786041,
  "jti": "5488e298-0a47-4c5b-80d7-6b4ab8a4cede"
}
</pre><p>AWS STS also enriches the token with identity-specific claims (such as account ID, organization ID, and principal tags) and session context. These claims provide information about the compute environment and session where the token request originated. AWS STS automatically includes these claims when applicable based on the requesting principal’s session context. You can also add custom claims to the token by passing request tags to the API call. To learn more about claims provided in the JWT, visit the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_outbound_token_claims.html">documentation page</a>.</p><p>Note the <code>iss</code> (issuer) claim. This is your account-specific issuer URL that external services use to verify that the token originated from a trusted AWS account. External services can verify the JWT by validating its signature using AWS’s verification keys, available from the public JSON Web Key Set (JWKS) endpoint at <code>/.well-known/jwks.json</code> under the issuer URL.</p><p>Now, let’s look at how external services handle this identity token.</p><p>Here’s an example Python snippet that external services can use to verify AWS tokens:</p><pre class="language-python">
import jwt
from jwt import PyJWKClient
# Trusted issuers list - obtained from EnableOutboundFederation API response
TRUSTED_ISSUERS = [
    "https://EXAMPLE.tokens.sts.global.api.aws",
    # Add your trusted AWS account issuer URLs here
    # Obtained from EnableOutboundFederation API response
]
def verify_aws_jwt(token, expected_audience=None):
    """Verify an AWS IAM outbound identity federation JWT"""
    try:
        # Get issuer from token
        unverified_payload = jwt.decode(token, options={"verify_signature": False})
        issuer = unverified_payload.get('iss')
        # Verify issuer is trusted
        if not TRUSTED_ISSUERS or issuer not in TRUSTED_ISSUERS:
            raise ValueError(f"Untrusted issuer: {issuer}")
        # Fetch JWKS from AWS using PyJWKClient
        jwks_client = PyJWKClient(f"{issuer}/.well-known/jwks.json")
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        # Verify token signature and claims
        decoded_token = jwt.decode(
            token,
            signing_key.key,
            algorithms=["ES384", "RS256"],
            audience=expected_audience,
            issuer=issuer
        )
        return decoded_token
    except Exception as e:
        print(f"Token verification failed: {e}")
        return None
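# --- Illustrative aside (stdlib only; hypothetical helper, not part of the ---
# --- AWS API or PyJWT). A JWT is three base64url segments separated by     ---
# --- dots; the middle segment is the JSON claims payload. This extracts    ---
# --- the iss claim the same way the unverified jwt.decode() call above     ---
# --- does, with no signature check, so use it only to pick the JWKS URL.   ---
import base64
import json

def peek_issuer(token):
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64)).get('iss')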
</pre><p><strong>Using IAM policies to control access to token generation<br /></strong> An IAM principal (such as a role or user) must have the <code>sts:GetWebIdentityToken</code> permission in its IAM policies to request tokens for authentication with external services. AWS account administrators can configure this permission in all relevant AWS policy types such as identity policies, service control policies (SCPs), resource control policies (RCPs), and virtual private cloud endpoint (VPCE) policies to control which IAM principals in their account can generate tokens.</p><p>Additionally, administrators can use the new condition keys to specify signing algorithms (<code>sts:SigningAlgorithm</code>), permitted token audiences (<code>sts:IdentityTokenAudience</code>), and maximum token lifetimes (<code>sts:DurationSeconds</code>). To learn more about the condition keys, visit the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html#condition-keys-sts">IAM and STS Condition keys documentation</a> page.</p><p><strong>Additional things to know<br /></strong> Here are key details about this launch:</p><ul><li><strong>Availability</strong> – AWS IAM outbound identity federation is available in all <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS commercial Regions</a>, <a href="https://aws.amazon.com/govcloud-us/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS GovCloud (US) Regions</a>, and China Regions.</li>
<li><strong>Pricing</strong> – This feature is available at no additional cost.</li>
</ul><p>Get started with AWS IAM outbound identity federation by visiting the <a href="https://console.aws.amazon.com/iam/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS IAM console</a> and enabling the feature in your AWS account. For more information, visit the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_outbound.html">Federating AWS Identities to External Services</a> documentation page.</p><p>Happy building!<br />— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/simplify-access-to-external-services-using-aws-iam-outbound-identity-federation/"/>
    <updated>2025-11-20T00:21:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/accelerate-workflow-development-with-enhanced-local-testing-in-aws-step-functions/</id>
    <title><![CDATA[Accelerate workflow development with enhanced local testing in AWS Step Functions]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, I’m excited to announce enhanced local testing capabilities for <a href="https://aws.amazon.com/step-functions/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Step Functions</a> through the <a href="https://docs.aws.amazon.com/step-functions/latest/dg/test-state-isolation.html?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">TestState API</a>.</p><p>Because these enhancements are exposed through the API, you can use your preferred testing frameworks to build automated test suites that validate your workflow definitions locally on your development machine and exercise error handling patterns, data transformations, and mock service integrations. This launch introduces an API-based approach for local unit testing, providing programmatic access to comprehensive testing capabilities without deploying to <a href="https://aws.amazon.com/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Web Services (AWS)</a>.</p><p>This enhanced TestState API introduces three key capabilities:</p><ul><li>
<p><strong>Mocking support</strong> – Mock state outputs and errors without invoking downstream services, enabling true unit testing of state machine logic. For high-fidelity testing, TestState validates mocked responses against AWS API models with three validation modes: STRICT (the default; validates all required fields), PRESENT (validates field names and types), and NONE (no validation).</p>
</li>
<li>
<p><strong>Support for all state types</strong> – All state types, including advanced states such as Map states (inline and distributed), Parallel states, activity-based Task states, .sync service integration patterns, and .waitForTaskToken service integration patterns, can now be tested. This means you can use TestState API across your entire workflow definition and write unit tests to verify control flow logic, including state transitions, error handling, and data transformations.</p>
</li>
<li>
<p><strong>Testing individual states</strong> – Test specific states within a full state machine definition using the new <code>stateName</code> parameter. You can provide the complete state machine definition once and test each state individually by name. You can also control the execution context to test specific retry attempts, Map iteration positions, and error scenarios.</p>
</li>
</ul><p><strong>Getting started with enhanced TestState</strong><br />Let me walk you through these new capabilities in enhanced TestState.</p><p><strong>Scenario 1: Mock successful results<br /></strong></p><p>The first capability is mocking support, which you can use to test your workflow logic without invoking actual AWS services or even external HTTP requests. You can either mock service responses for fast unit testing or test with actual AWS services for integration testing. When using mocked responses, you don’t need AWS Identity and Access Management (IAM) permissions.</p><p>Here’s how to mock a successful AWS Lambda function response:</p><pre class="language-bash">aws stepfunctions test-state --region us-east-1 \
--definition '{
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Parameters": {"FunctionName": "process-order"},
  "End": true
}' \
--mock '{"result":"{\"orderId\":\"12345\",\"status\":\"processed\"}"}' \
--inspection-level DEBUG
</pre><p>This command tests a Lambda invocation state without actually calling the function. TestState validates your mock response against the Lambda service API model so your test data matches what the real service would return.</p><p>The response shows the successful execution with detailed inspection data (when using DEBUG inspection level):</p><pre class="language-json">{
    "output": "{\"orderId\":\"12345\",\"status\":\"processed\"}",
    "inspectionData": {
        "input": "{}",
        "afterInputPath": "{}",
        "afterParameters": "{\"FunctionName\":\"process-order\"}",
        "result": "{\"orderId\":\"12345\",\"status\":\"processed\"}",
        "afterResultSelector": "{\"orderId\":\"12345\",\"status\":\"processed\"}",
        "afterResultPath": "{\"orderId\":\"12345\",\"status\":\"processed\"}"
    },
    "status": "SUCCEEDED"
}
</pre><p>TestState applies the same validation against any AWS service’s API model, so your mocked data conforms to the expected schema without requiring actual AWS service calls.</p><p><strong>Scenario 2: Mock error conditions<br /></strong>You can also mock error conditions to test your error handling logic:</p><pre class="language-bash">aws stepfunctions test-state --region us-east-1 \
--definition '{
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Parameters": {"FunctionName": "process-order"},
  "End": true
}' \
--mock '{"errorOutput":{"error":"Lambda.ServiceException","cause":"Function failed"}}' \
--inspection-level DEBUG
</pre><p>This simulates a Lambda service exception so you can verify how your state machine handles failures without triggering actual errors in your AWS environment.</p><p>The response shows the failed execution with error details:</p><pre class="language-json">{
    "error": "Lambda.ServiceException",
    "cause": "Function failed",
    "inspectionData": {
        "input": "{}",
        "afterInputPath": "{}",
        "afterParameters": "{\"FunctionName\":\"process-order\"}"
    },
    "status": "FAILED"
}
</pre><p><strong>Scenario 3: Test Map states<br /></strong>The second capability adds support for previously unsupported state types. Here’s how to test a Distributed Map state:</p><pre class="language-bash">aws stepfunctions test-state --region us-east-1 \
--definition '{
  "Type": "Map",
  "ItemProcessor": {
    "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
    "StartAt": "ProcessItem",
    "States": {
      "ProcessItem": {
        "Type": "Task", 
        "Resource": "arn:aws:states:::lambda:invoke",
        "Parameters": {"FunctionName": "process-item"},
        "End": true
      }
    }
  },
  "End": true
}' \
--input '[{"itemId":1},{"itemId":2}]' \
--mock '{"result":"[{\"itemId\":1,\"status\":\"processed\"},{\"itemId\":2,\"status\":\"processed\"}]"}' \
--inspection-level DEBUG
</pre><p>The mock result represents the complete output from processing multiple items. In this case, the mocked array must match the expected Map state output format.</p><p>The response shows successful processing of the array input:</p><pre class="language-json">{
    "output": "[{\"itemId\":1,\"status\":\"processed\"},{\"itemId\":2,\"status\":\"processed\"}]",
    "inspectionData": {
        "input": "[{\"itemId\":1},{\"itemId\":2}]",
        "afterInputPath": "[{\"itemId\":1},{\"itemId\":2}]",
        "afterResultSelector": "[{\"itemId\":1,\"status\":\"processed\"},{\"itemId\":2,\"status\":\"processed\"}]",
        "afterResultPath": "[{\"itemId\":1,\"status\":\"processed\"},{\"itemId\":2,\"status\":\"processed\"}]"
    },
    "status": "SUCCEEDED"
}
</pre><p><strong>Scenario 4: Test Parallel states<br /></strong>Similarly, you can test Parallel states that execute multiple branches concurrently:</p><pre class="language-bash">aws stepfunctions test-state --region us-east-1 \
--definition '{
  "Type": "Parallel",
  "Branches": [
    {"StartAt": "Branch1", "States": {"Branch1": {"Type": "Pass", "End": true}}},
    {"StartAt": "Branch2", "States": {"Branch2": {"Type": "Pass", "End": true}}}
  ],
  "End": true
}' \
--mock '{"result":"[{\"branch1\":\"data1\"},{\"branch2\":\"data2\"}]"}' \
--inspection-level DEBUG
</pre><p>The mock result must be an array with one element per branch. TestState validates that your mock data structure matches what a real Parallel state execution would produce.</p><p>The response shows the parallel execution results:</p><pre class="language-json">{
    "output": "[{\"branch1\":\"data1\"},{\"branch2\":\"data2\"}]",
    "inspectionData": {
        "input": "{}",
        "afterResultSelector": "[{\"branch1\":\"data1\"},{\"branch2\":\"data2\"}]",
        "afterResultPath": "[{\"branch1\":\"data1\"},{\"branch2\":\"data2\"}]"
    },
    "status": "SUCCEEDED"
}
</pre><p><strong>Scenario 5: Test individual states within complete workflows<br /></strong>You can test specific states within a full state machine definition using the stateName parameter. Here’s an example testing a single state, though you would typically provide your complete workflow definition and specify which state to test:</p><pre class="language-bash">aws stepfunctions test-state --region us-east-1 \
--definition '{
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Parameters": {"FunctionName": "validate-order"},
  "End": true
}' \
--input '{"orderId":"12345","amount":99.99}' \
--mock '{"result":"{\"orderId\":\"12345\",\"validated\":true}"}' \
--inspection-level DEBUG
</pre><p>This tests a Lambda invocation state with specific input data, showing how TestState processes the input and transforms it through the state execution.</p><p>The response shows detailed input processing and validation:</p><pre class="language-json">{
    "output": "{\"orderId\":\"12345\",\"validated\":true}",
    "inspectionData": {
        "input": "{\"orderId\":\"12345\",\"amount\":99.99}",
        "afterInputPath": "{\"orderId\":\"12345\",\"amount\":99.99}",
        "afterParameters": "{\"FunctionName\":\"validate-order\"}",
        "result": "{\"orderId\":\"12345\",\"validated\":true}",
        "afterResultSelector": "{\"orderId\":\"12345\",\"validated\":true}",
        "afterResultPath": "{\"orderId\":\"12345\",\"validated\":true}"
    },
    "status": "SUCCEEDED"
}
</pre><p>These enhancements bring the familiar local development experience to Step Functions workflows, helping me to get instant feedback on changes before deploying to my AWS account. I can write automated test suites to validate all Step Functions features with the same reliability as cloud execution, providing confidence that my workflows will work as expected when deployed.</p><p><strong>Things to know<br /></strong>Here are key points to note:</p><ul><li><strong>Availability</strong> – Enhanced TestState capabilities are available in all <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Regions</a> where Step Functions is supported.</li>
<li><strong>Pricing</strong> – TestState API calls are included with AWS Step Functions at no additional charge.</li>
<li><strong>Framework compatibility</strong> – TestState works with any testing framework that can make HTTP requests, including Jest, pytest, JUnit, and others. You can write test suites that validate your workflows automatically in your continuous integration and continuous delivery (CI/CD) pipeline before deployment.</li>
<li><strong>Feature support</strong> – Enhanced TestState supports all Step Functions features including Distributed Map, Parallel states, error handling, and JSONata expressions.</li>
<li><strong>Documentation</strong> – For detailed options for different configurations, refer to the <a href="https://docs.aws.amazon.com/step-functions/latest/dg/test-state-isolation.html?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">TestState documentation</a> and <a href="https://docs.aws.amazon.com/step-functions/latest/apireference/API_TestState.html?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">API reference</a> for the updated request and response model.</li>
</ul><p>Get started today with enhanced local testing by integrating TestState into your development workflow.</p><p>Happy building!<br />— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/accelerate-workflow-development-with-enhanced-local-testing-in-aws-step-functions/"/>
    <updated>2025-11-19T23:13:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/streamlined-multi-tenant-application-development-with-tenant-isolation-mode-in-aws-lambda/</id>
    <title><![CDATA[Streamlined multi-tenant application development with tenant isolation mode in AWS Lambda]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Multi-tenant applications often require strict isolation when processing tenant-specific code or data. Examples include software-as-a-service (SaaS) platforms for workflow automation or code execution, where customers need to ensure that execution environments used for individual tenants or end users remain completely separate from one another. Traditionally, developers have addressed these requirements by deploying separate Lambda functions for each tenant or implementing custom isolation logic within shared functions, which increased architectural and operational complexity.</p><p>Today, <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> introduces a new tenant isolation mode that extends the existing isolation capabilities in Lambda. Lambda already provides isolation at the function level, and this new mode extends isolation to the individual tenant or end-user level within a single function. This built-in capability processes function invocations in separate execution environments for each tenant, enabling you to meet strict isolation requirements without additional implementation effort to manage tenant-specific resources within function code.</p><p>Here’s how you can enable tenant isolation mode in the <a href="https://console.aws.amazon.com/lambda">AWS Lambda console</a>:</p><p><img class="aligncenter size-full wp-image-101248" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/news-2025-11-lambda-tenant-isolation-rev-3.png" alt="" width="2280" height="2644" /></p><p>When using the new tenant isolation capability, Lambda associates function execution environments with customer-specified tenant identifiers. This means that execution environments for a particular tenant aren’t used to serve invocation requests from other tenants invoking the same Lambda function.</p><p>The feature addresses strict security requirements for SaaS providers processing sensitive data or running untrusted tenant code. 
You maintain the pay-per-use and performance characteristics of AWS Lambda while gaining execution environment isolation. Additionally, this approach delivers the security benefits of per-tenant infrastructure without the operational overhead of managing dedicated Lambda functions for individual tenants, which can quickly grow as customers adopt your application.</p><p><strong>Getting started with AWS Lambda tenant isolation<br /></strong>Let me walk you through how to configure and use tenant isolation for a multi-tenant application.</p><p>First, on the <strong>Create function</strong> page in the AWS Lambda console, I choose the <strong>Author from scratch</strong> option.</p><p><img class="aligncenter size-full wp-image-100634" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/news-2025-11-lambda-tenant-isolation-1.png" alt="" width="1130" height="629" /></p><p>Then, under <strong>Additional configurations</strong>, I select <strong>Enable</strong> under <strong>Tenant isolation mode</strong>. Note that tenant isolation mode can only be set during function creation and can’t be modified for existing Lambda functions.</p><p><img class="aligncenter size-full wp-image-101249" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/news-2025-11-lambda-tenant-isolation-rev-4.png" alt="" width="2280" height="1983" /></p><p>Next, I write Python code to demonstrate this capability. I can access the tenant identifier in my function code through the context object. Here’s the full Python code:</p><pre class="language-python">import json
import os
from datetime import datetime, timezone

def lambda_handler(event, context):
    # Lambda exposes the caller-supplied tenant identifier on the context object
    tenant_id = context.tenant_id
    file_path = '/tmp/tenant_data.json'
    # Read existing data or initialize
    if os.path.exists(file_path):
        with open(file_path, 'r') as f:
            data = json.load(f)
    else:
        data = {
            'tenant_id': tenant_id,
            'request_count': 0,
            'first_request': datetime.now(timezone.utc).isoformat(),
            'requests': []
        }
    # Increment counter and add request info
    data['request_count'] += 1
    data['requests'].append({
        'request_number': data['request_count'],
        'timestamp': datetime.now(timezone.utc).isoformat()
    })
    # Write updated data back to file
    with open(file_path, 'w') as f:
        json.dump(data, f, indent=2)
    # Return file contents to show isolation
    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': f'File contents for {tenant_id} (isolated per tenant)',
            'file_data': data
        })
    }
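# --- Illustrative aside (self-contained toy model, not part of the handler). ---
# With tenant isolation mode enabled, each tenant ID gets its own execution
# environment, so per-environment state such as the /tmp counter above never
# mixes across tenants. Modeling environments as separate dicts keyed by
# tenant ID shows the request counts the walkthrough below observes:
def simulate_isolated_counts(invocations):
    environments = {}  # one isolated "execution environment" per tenant ID
    for tenant in invocations:
        env = environments.setdefault(tenant, {'request_count': 0})
        env['request_count'] += 1
    return {tenant: env['request_count'] for tenant, env in environments.items()}

# simulate_isolated_counts(['tenant-A', 'tenant-A', 'tenant-B'])
# returns {'tenant-A': 2, 'tenant-B': 1}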
</pre><p>When I’m finished, I choose <strong>Deploy</strong>. Now, I need to test this capability by choosing <strong>Test</strong>. I can see on the <strong>Create new test event</strong> panel that there’s a new setting called <strong>Tenant ID</strong>.</p><p><img class="aligncenter size-full wp-image-100636" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/news-2025-11-lambda-tenant-isolation-3.png" alt="" width="1920" height="923" /></p><p>If I try to invoke this function without a tenant ID, I’ll get the following error “Add a valid tenant ID in your request and try again.”</p><p><img class="aligncenter size-full wp-image-100785" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/news-2025-11-lambda-tenant-isolation-rev-1.png" alt="" width="1845" height="826" /></p><p>Let me try to test this function with a tenant ID called <code>tenant-A</code>.</p><p><img class="aligncenter size-full wp-image-100786" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/news-2025-11-lambda-tenant-isolation-rev-2.png" alt="" width="1399" height="915" /></p><p>I can see the function ran successfully and returned <code>request_count: 1</code>. 
I’ll invoke this function again to get <code>request_count: 2</code>.</p><p><img class="aligncenter size-full wp-image-100639" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/news-2025-11-lambda-tenant-isolation-6.png" alt="" width="1399" height="914" /></p><p>Now, let me try to test this function with a tenant ID called <code>tenant-B</code>.</p><p><img class="aligncenter size-full wp-image-100640" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/news-2025-11-lambda-tenant-isolation-7.png" alt="" width="1399" height="925" /></p><p>The last invocation returned <code>request_count: 1</code> because I hadn’t previously invoked this function with <code>tenant-B</code>. Each tenant’s invocations use separate execution environments, isolating the cached data, global variables, and any files stored in <code>/tmp</code>.</p><p>This capability transforms how I approach multi-tenant serverless architecture. Instead of wrestling with complex isolation patterns or managing hundreds of tenant-specific Lambda functions, I let AWS Lambda automatically handle the isolation. This keeps each tenant’s data isolated, giving me confidence in the security and separation of my multi-tenant application.</p><p><strong>Additional things to know<br /></strong>Here’s a list of additional things you need to know:</p><ul><li><strong>Performance —</strong> Same-tenant invocations can still benefit from warm execution environment reuse for optimal performance.</li>
<li><strong>Pricing —</strong> You’re charged when Lambda creates a new tenant-aware execution environment, with the price depending on the amount of memory you allocate to your function and the CPU architecture you use. For more details, view <a href="https://aws.amazon.com/lambda/pricing/">AWS Lambda pricing</a>.</li>
<li><strong>Availability —</strong> Available now in all commercial <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a> except Asia Pacific (New Zealand), AWS GovCloud (US), and China Regions.</li>
</ul><p>This launch simplifies building multi-tenant applications on AWS Lambda, such as SaaS platforms for workflow automation or code execution. Learn more about how to configure tenant isolation for your next multi-tenant Lambda function in the <a href="https://docs.aws.amazon.com/lambda/">AWS Lambda Developer Guide</a>.</p><p>Happy building!<br />— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/streamlined-multi-tenant-application-development-with-tenant-isolation-mode-in-aws-lambda/"/>
    <updated>2025-11-19T20:12:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/new-business-metadata-features-in-amazon-sagemaker-catalog-to-improve-discoverability-across-organizations/</id>
    <title><![CDATA[New business metadata features in Amazon SageMaker Catalog to improve discoverability across organizations]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p><a href="https://aws.amazon.com/sagemaker/catalog/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker Catalog</a>, which is now built into <a href="https://aws.amazon.com/sagemaker/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon SageMaker</a>, can help you collect and organize your data with the accompanying business context people need to understand it. It automatically documents assets generated by <a href="https://aws.amazon.com/glue/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el" target="_blank" rel="noopener noreferrer">AWS Glue</a> and <a href="http://aws.amazon.com/redshift/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el" target="_blank" rel="noopener noreferrer">Amazon Redshift</a>, and it <a href="https://aws.amazon.com/blogs/aws/streamline-the-path-from-data-to-insights-with-new-amazon-sagemaker-capabilities/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">connects directly</a> with <a href="https://aws.amazon.com/quicksight/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Quick Sight</a>, <a href="https://aws.amazon.com/s3/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a> buckets, <a href="https://aws.amazon.com/about-aws/whats-new/2025/05/amazon-sagemaker-catalog-governance-s3-tables/">Amazon S3 Tables</a>, and AWS Glue Data Catalog (GDC).</p><p>With only a few clicks, you can curate data inventory assets with the required business metadata by adding or updating business names (asset and schema), descriptions (asset and schema), readme content, glossary terms (asset and schema), and metadata forms. 
You can also use <a href="https://aws.amazon.com/about-aws/whats-new/2025/07/amazon-sagemaker-catalog-adds-ai/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AI-generated suggestions</a>, <a href="https://aws.amazon.com/blogs/big-data/introducing-genai-powered-business-description-recommendations-for-custom-assets-in-amazon-sagemaker-catalog/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">review and refine descriptions, and publish enriched asset metadata</a> directly to the catalog. This helps reduce manual documentation effort, improves metadata consistency, and accelerates asset discoverability across organizations.</p><p>Starting today, you can use new metadata capabilities in Amazon SageMaker Catalog to improve business metadata and search:</p><ul><li><strong>Column-level metadata forms and rich descriptions</strong> – You can create custom metadata forms to capture business-specific information directly in individual columns. Columns also support markdown-enabled rich text descriptions for comprehensive data documentation and business context.</li>
<li><strong>Enforce metadata rules for glossary terms for asset publishing</strong> – You can use metadata enforcement rules for glossary terms, meaning data producers must use approved business vocabulary when publishing assets. By standardizing metadata practices, your organization can improve compliance, enhance audit readiness, and streamline access workflows for greater efficiency and control.</li>
</ul><p>These new SageMaker Catalog metadata capabilities help address consistent data classification and improve discoverability across your organizational catalogs. Let’s take a closer look at each capability.</p><p><strong class="c6">Column-level metadata forms and rich descriptions</strong><br />You can now use custom metadata forms and rich text descriptions at the column level, extending existing curation capabilities for business names, descriptions, and glossary term classifications. Custom metadata form field values and rich text content are indexed in real time and become immediately discoverable through search.</p><p>To edit column-level metadata, select the schema of your catalog asset used in your project and choose the <strong>View/Edit</strong> action for each column.</p><p><img class="aligncenter size-full wp-image-100620" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/2025-sagemaker-catalog-metadata-1-edit-schema.jpg" alt="" width="2560" height="1302" /></p><p>When you choose one of the columns as an asset owner, you can define custom key-value metadata forms and markdown descriptions to provide detailed column documentation.</p><p><img class="aligncenter wp-image-100624 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/2025-sagemaker-catalog-metadata-1-edit-metadata.jpg" alt="" width="2560" height="1563" /></p><p>Now data analysts in your organization can search using custom form field values and rich text content, alongside existing column names, descriptions, and glossary terms.</p><p><strong class="c6">Enforce metadata rules for glossary terms for asset publishing</strong><br />You can define mandatory glossary term requirements for data assets during the publishing workflow. 
Your data producers must now classify their assets with approved business terms from organizational glossaries before publication, promoting consistent metadata standards and improving data discoverability. The enforcement rules validate that required glossary terms are applied, preventing assets from being published without proper business context.</p><p>To enable a new metadata rule for glossary terms, choose <strong>Add</strong> in your domain units under the <strong>Domain Management</strong> section in the <strong>Govern</strong> menu.</p><p><img class="aligncenter size-full wp-image-100625" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/2025-sagemaker-catalog-metadata-2-edit-rules.jpg" alt="" width="2560" height="1287" /></p><p>Now you can select either <strong>Metadata forms</strong> or <strong>Glossary association</strong> as the type of requirement for the rule. When you select <strong>Glossary association</strong>, you can choose up to five required glossary terms per rule.</p><p><img class="aligncenter size-full wp-image-100627 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/2025-sagemaker-catalog-metadata-2-edit-rules-glossary-terms.jpg" alt="" width="1788" height="1566" /></p><p>If you attempt to publish assets without adding the required glossary terms, an error message appears, prompting you to add them.</p><p><img class="aligncenter wp-image-100924 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/13/025-sagemaker-catalog-metadata-2-edit-rules-glossary-terms-error-1.jpg" alt="" width="2560" height="1018" /></p><p>Standardizing metadata and aligning data schemas with business language enhances data governance and improves search relevance, helping your organization better understand and trust published data.</p><p>You can use the <a href="https://aws.amazon.com/cli">AWS Command Line Interface 
(AWS CLI)</a> and <a href="https://builder.aws.com/build/tools">AWS SDKs</a> to configure these features. To learn more, visit the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/working-with-business-catalog.html">Amazon SageMaker Unified Studio data catalog</a> in the Amazon SageMaker Unified Studio User Guide.</p><p><strong class="c6">Now available</strong><br />The new metadata capabilities are now available in AWS Regions where Amazon SageMaker Catalog is available.</p><p>Give it a try and send feedback to <a href="https://repost.aws/tags/TAoz_4K4DIQn-i0b7QKcuoIw/amazon-sagemaker-catalog">AWS re:Post for Amazon SageMaker Catalog</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="6a36d505-e24c-47ad-8bc4-525ebf7ffc05" data-title="New business metadata features in Amazon SageMaker Catalog to improve discoverability across organizations" data-url="https://aws.amazon.com/blogs/aws/new-business-metadata-features-in-amazon-sagemaker-catalog-to-improve-discoverability-across-organizations/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/new-business-metadata-features-in-amazon-sagemaker-catalog-to-improve-discoverability-across-organizations/"/>
    <updated>2025-11-19T20:09:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-control-tower-introduces-a-controls-dedicated-experience/</id>
    <title><![CDATA[AWS Control Tower introduces a Controls Dedicated experience]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing a <a href="https://aws.amazon.com/controltower/">Controls Dedicated experience in AWS Control Tower</a>. With this feature, you can use Amazon Web Services (AWS) managed controls without setting up resources you don’t need, which means you get started faster if you already have an established multi-account environment and want to use AWS Control Tower only for its managed controls. The Controls Dedicated experience gives you seamless access to the comprehensive collection of managed controls in the <a href="https://docs.aws.amazon.com/controlcatalog/latest/userguide/what-is-controlcatalog.html">Control Catalog</a> to incrementally enhance your governance stance.</p><p>Until now, customers were required to adopt and configure many recommended best practices, which meant implementing a full <a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-aws-environment/understanding-landing-zones.html">AWS landing zone</a> at the time of setting up a multi-account environment. This setup included defining the prescribed organizational structure, required services, and more, in AWS Control Tower to start using a landing zone. This approach helps ensure a well-architected multi-account environment. However, for customers who already have an established, well-architected multi-account environment and only want to use AWS managed controls, adopting AWS Control Tower was more challenging. 
The new Controls Dedicated experience provides a faster and more flexible way of using AWS Control Tower.</p><p><strong>How it works</strong><br />Here’s how I define managed controls using the Controls Dedicated experience in AWS Control Tower in one of my accounts.</p><p>I start by choosing <strong>Enable AWS Control Tower</strong> on the AWS Control Tower landing page.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/Screenshot-2025-10-30-142542.png"><img class="aligncenter size-large wp-image-100327" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/Screenshot-2025-10-30-142542-1024x159.png" alt="" width="1024" height="159" /></a></p><p>I have the option to set up a full environment or only set up controls using the Controls Dedicated experience. I opt to set up controls by choosing <strong>I have an existing environment and want to enable AWS Managed Controls</strong>. Next, I set up the rest of the information, such as choosing the <strong>Home Region</strong> from the dropdown list so that AWS Control Tower resources are provisioned in this Region during enablement. I also select <strong>Turn on automatic account enrollment</strong> for AWS Control Tower to enroll accounts automatically when I move them into a registered organizational unit. 
The rest of the information is optional; I choose <strong>Enable AWS Control Tower</strong> to finalize the process, and the landing zone setup begins.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/Screenshot-2025-10-30-150947-2.png"><img class="aligncenter size-large wp-image-100330" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/Screenshot-2025-10-30-150947-2-954x1024.png" alt="" width="954" height="1024" /></a></p><p>Behind the scenes, AWS Control Tower installed the required service-linked <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM) roles</a> and, to support detective controls, a service-linked <a href="https://docs.aws.amazon.com/config/latest/developerguide/stop-start-recorder.html">Config Recorder in AWS Config</a> in the account where I’m deploying the AWS managed controls. The setup is complete, and I now have all the infrastructure required to use the controls in this account. The dashboard gives a summary of the environment, such as the organizational units that were created, the shared accounts, the selected IAM configuration, the preventive controls to enforce policies, and detective controls to detect configuration violations.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/Screenshot-2025-10-30-155801.png"><img class="aligncenter size-large wp-image-100331" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/Screenshot-2025-10-30-155801-1024x655.png" alt="" width="1024" height="655" /></a><br />I choose <strong>View enabled controls</strong> for a list of all controls that were installed during this process.</p><p><strong>Good to know</strong><br />Usually, an existing <a href="https://aws.amazon.com/organizations/">AWS Organizations</a> account is required before you can use AWS Control Tower. 
If you’re using the console to create controls and don’t already have an Organizations account, one will be set up on your behalf.</p><p>Earlier, I mentioned a service-linked Config Recorder. With a service-linked Config Recorder, AWS Control Tower prevents the resource types needed for deployed managed controls from being altered. You keep the flexibility to use your own Config Recorders, and only the configuration items for the resource types that are required by your managed detective controls will be enabled, which optimizes your AWS Config costs.</p><p><strong>Now available</strong><br />The Controls Dedicated experience in AWS Control Tower is available today in all <a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies-supported-regions.html">AWS Regions</a> where AWS Control Tower is available.</p><p>To learn more, visit our <a href="https://aws.amazon.com/controltower/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Control Tower page</a>. For more information related to pricing, refer to <a href="https://aws.amazon.com/controltower/pricing/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Control Tower pricing</a>. Send feedback to <a href="https://repost.aws/tags/TA8lQh6CBhTq6yxP2OZEkWVg/aws-control-tower">AWS re:Post for AWS Control Tower</a> or through your usual AWS Support contacts.</p><p>– <a href="https://linkedin.com/veliswa-boya">Veliswa</a>.</p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="c2d852fd-d376-4947-8812-a394036a85a9" data-title="AWS Control Tower introduces a Controls Dedicated experience" data-url="https://aws.amazon.com/blogs/aws/aws-control-tower-introduces-a-controls-dedicated-experience/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-control-tower-introduces-a-controls-dedicated-experience/"/>
    <updated>2025-11-19T20:07:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/new-aws-billing-transfer-for-centrally-managing-aws-billing-and-costs-across-multiple-organizations/</id>
    <title><![CDATA[New AWS Billing Transfer for centrally managing AWS billing and costs across multiple organizations]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the general availability of Billing Transfer, a new capability to centrally manage and pay bills across multiple organizations by transferring payment responsibility to other billing administrators, such as company affiliates and <a href="https://aws.amazon.com/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Web Services (AWS)</a> Partners. This feature provides customers operating across multiple organizations with comprehensive visibility of cloud costs across their multi-organization environment, while organization administrators maintain security management autonomy over their accounts.</p><p>Customers use AWS Organizations to centrally administer and manage billing for their multi-account environment. However, when they operate in a multi-organization environment, billing administrators must access the management account of each organization separately to collect invoices and pay bills. This decentralized approach to billing management creates unnecessary complexity for enterprises managing costs and paying bills across multiple AWS organizations. This feature is also useful for AWS Partners who resell AWS products and solutions and assume responsibility for paying AWS for their customers’ consumption.</p><p><img class="aligncenter size-full wp-image-101021" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/2025-billing-transfer-0-bills-transfer-diagram.png" alt="" width="1024" height="516" /></p><p>With Billing Transfer, customers operating in multi-organization environments can now use a single management account to manage aspects of billing, such as invoice collection, payment processing, and detailed cost analysis. This makes billing operations more efficient and scalable, while individual management accounts maintain complete security and governance autonomy over their accounts. 
Billing Transfer also helps protect proprietary pricing data by integrating with <a href="https://docs.aws.amazon.com/billingconductor/latest/userguide/what-is-billingconductor.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Billing Conductor</a>, so billing administrators can control cost visibility.</p><p><strong class="c6">Getting started with Billing Transfer</strong><br />To set up Billing Transfer, an external management account sends a billing transfer invitation to a management account called a bill-source account. If accepted, the external account becomes the bill-transfer account, managing and paying for the bill-source account’s consolidated bill, starting on the date specified on the invitation.</p><p>To get started, go to the <a href="https://console.aws.amazon.com/costmanagement/home#/transferbilling?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Billing and Cost Management console</a>, choose <strong>Preferences and Settings</strong> in the left navigation pane and choose <strong>Billing transfer</strong>. Choose <strong>Send invitation</strong> from a management account you’ll use to centrally manage billing across your multi-organization environment.</p><p><img class="aligncenter wp-image-100318 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/2025-billing-transfer-1-overview.png" alt="" width="2856" height="1732" /></p><p>Now, you can send a billing transfer invitation by entering the email address or account ID of the bill-source accounts for which you want to manage billing. 
Choose the monthly billing period in which invoicing and payment will begin, and a pricing plan from AWS Billing Conductor to control the cost data visible to the bill-source accounts.</p><p><img class="aligncenter wp-image-101417 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/19/2025-billing-transfer-2-send-invitation-1.png" alt="" width="1958" height="2427" /></p><p>When you choose <strong>Send invitation</strong>, the bill-source accounts will get a billing transfer notice in the <strong>Outbound billing</strong> tab.</p><p><img class="aligncenter size-full wp-image-100321 c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/2025-billing-transfer-3-view-invitation.jpg" alt="" width="2004" height="1226" /></p><p>Choose <strong>View details</strong>, review the invitation page, and choose <strong>Accept</strong>.</p><p><img class="aligncenter wp-image-100320 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/03/2025-billing-transfer-4-accept-invitation.jpg" alt="" width="2006" height="1314" /></p><p>After the transfer is accepted, all usage from the bill-source accounts will be billed to the bill-transfer account using its billing and tax settings, and invoices will no longer be sent to the bill-source accounts. Either party (a bill-source account or the bill-transfer account) can withdraw from the transfer at any time.</p><p><img class="aligncenter wp-image-101017 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/2025-billing-transfer-5-analysis.jpg" alt="" width="2016" height="960" /></p><p>After your billing transfer begins, the bill-transfer account will receive a bill at the end of the month for each of your billing transfers. 
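</p><p>The invitation lifecycle just described (send, accept, withdraw) can be sketched as a small state machine. This is an illustrative model only; the <code>BillingTransfer</code> class and state names are hypothetical and mirror the console flow in this post, not an actual AWS API:</p>

```python
# Illustrative model of the Billing Transfer invitation lifecycle described
# in this post. The class, states, and account IDs are hypothetical; this is
# not an AWS API.

class BillingTransfer:
    """Tracks one invitation from a bill-transfer account to a bill-source account."""

    def __init__(self, bill_transfer_account: str, bill_source_account: str):
        self.bill_transfer_account = bill_transfer_account
        self.bill_source_account = bill_source_account
        self.state = "PENDING"  # invitation sent, awaiting response

    def accept(self) -> None:
        # Only a pending invitation can be accepted; afterwards, usage from the
        # bill-source account is billed to the bill-transfer account.
        if self.state != "PENDING":
            raise ValueError(f"cannot accept from state {self.state}")
        self.state = "ACTIVE"

    def withdraw(self, requester: str) -> None:
        # Either participating account can withdraw at any time, as the post notes.
        if requester not in (self.bill_transfer_account, self.bill_source_account):
            raise ValueError("only a participating account can withdraw")
        self.state = "WITHDRAWN"


transfer = BillingTransfer("111111111111", "222222222222")
transfer.accept()
print(transfer.state)   # ACTIVE
transfer.withdraw("222222222222")
print(transfer.state)   # WITHDRAWN
```

<p>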
To view transferred invoices reflecting the usage of the bill-source accounts, choose the <strong>Invoices</strong> tab in the <strong>Bills</strong> page.</p><p><img class="aligncenter wp-image-101020 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/2025-billing-transfer-6-bills-invoices.jpg" alt="" width="2560" height="968" /></p><p>You can identify the transferred invoices by bill-source account IDs. You can also find the payments for the bill-source accounts invoices in the <strong>Payments</strong> menu. These appear only in the bill-transfer account.</p><p>The bill-transfer account can use billing views to access the cost data of the bill-source accounts in <a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Cost Explorer</a>, <a href="https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Cost and Usage Report</a>, <a href="https://aws.amazon.com/aws-cost-management/aws-budgets/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Budgets</a> and Bills page. When enabling billing view mode, you can choose your desired billing view for each bill-source account.</p><p><img class="aligncenter size-full wp-image-101023 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/15/2025-billing-transfer-6-bills-billing-view.jpg" alt="" width="2542" height="1012" /></p><p>The bill-source accounts will experience these changes:</p><ul><li>Historical cost data will no longer be available and should be downloaded before accepting</li>
<li>Cost and Usage Reports should be reconfigured after transfer</li>
</ul><p>Transferred bills in the bill-transfer account always use the tax and payment settings of the account to which they’re delivered. Therefore, all the invoices reflecting the usage of the bill-source accounts and the member accounts in their AWS Organizations will contain taxes (if applicable) calculated on the tax settings determined by the bill-transfer account.</p><p>Similarly, the seller of record and payment preferences are also based on the configuration determined by the bill-transfer account. You can customize the tax and payment settings by creating the invoice units available in the Invoice Configuration functionality.</p><p>To learn more, visit <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/orgs_transfer_billing.html">Billing Transfer</a> in the AWS documentation.</p><p><strong class="c6">Now available</strong><br />Billing Transfer is available today in all commercial AWS Regions. To learn more, visit the <a href="https://aws.amazon.com/aws-cost-management/aws-billing-transfer/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Cloud Financial Management Services product page</a>.</p><p>Give Billing Transfer a try today and send feedback to <a href="https://repost.aws/tags/TALH1H5PjFQ7ekKQJNEzXLVQ/aws-billing?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for AWS Billing</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="92a38c6c-3fb5-4c71-837d-cea1e2616bdc" data-title="New AWS Billing Transfer for centrally managing AWS billing and costs across multiple organizations" data-url="https://aws.amazon.com/blogs/aws/new-aws-billing-transfer-for-centrally-managing-aws-billing-and-costs-across-multiple-organizations/"><p data-failed-message="Comments cannot be loaded… Please refresh and try 
again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/new-aws-billing-transfer-for-centrally-managing-aws-billing-and-costs-across-multiple-organizations/"/>
    <updated>2025-11-19T20:06:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/monitor-network-performance-and-traffic-across-your-eks-clusters-with-container-network-observability/</id>
    <title><![CDATA[Monitor network performance and traffic across your EKS clusters with Container Network Observability]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Organizations are increasingly expanding their Kubernetes footprint by deploying microservices to incrementally innovate and deliver business value faster. This growth places increased reliance on the network, giving platform teams steadily more complex challenges in monitoring network performance and traffic patterns in Amazon EKS. As a result, organizations struggle to maintain operational efficiency as their container environments scale, often delaying application delivery and increasing operational costs.</p><p>Today, I’m excited to announce <strong>Container Network Observability in <a href="https://aws.amazon.com/eks/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Elastic Kubernetes Service (Amazon EKS)</a></strong>, a comprehensive set of network observability features in Amazon EKS that you can use to better measure network performance in your system and dynamically visualize the landscape and behavior of network traffic in EKS.</p><p>Here’s a quick look at Container Network Observability in Amazon EKS:</p><p><img class="aligncenter size-full wp-image-101276" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/18/2025-news-eks-container-network-observability-rev-6.png" alt="" width="1881" height="1098" /></p><p>Container Network Observability in EKS addresses observability challenges by providing enhanced visibility of workload traffic. It offers performance insights into network flows within the cluster and those with cluster-external destinations. This makes your EKS cluster network environment more observable while providing built-in capabilities for more precise troubleshooting and investigative efforts.</p><p><strong>Getting started with Container Network Observability in EKS<br /></strong></p><p>I can enable this new feature for a new or existing EKS cluster. 
For a new EKS cluster, during the <strong>Configure observability</strong> setup, I navigate to the <strong>Configure network observability</strong> section. Here, I select <strong>Edit container network observability</strong>. I can see there are three included features: <strong>Service map</strong>, <strong>Flow table</strong>, and <strong>Performance metric endpoint</strong>, which are enabled by <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-NetworkFlowMonitor.html">Amazon CloudWatch Network Flow Monitor</a>.</p><p><img class="aligncenter size-full wp-image-101116" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025-news-eks-container-network-observability-rev-4.png" alt="" width="1463" height="1677" /></p><p>On the next page, I need to install the <strong>AWS Network Flow Monitor Agent</strong>.</p><p><img class="aligncenter size-full wp-image-100913" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/13/2025-news-eks-container-network-observability-2.png" alt="" width="1531" height="1442" /></p><p>After it’s enabled, I can navigate to my EKS cluster and select <strong>Monitor cluster</strong>.<br /><img class="aligncenter size-full wp-image-100915" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/13/2025-news-eks-container-network-observability-3.png" alt="" width="1920" height="861" /></p><p>This will bring me to my cluster observability dashboard. 
Then, I select the <strong>Network</strong> tab.</p><p><img class="aligncenter size-full wp-image-100916" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/13/2025-news-eks-container-network-observability-4.png" alt="" width="1920" height="609" /><br /><strong>Comprehensive observability features<br /></strong> Container Network Observability in EKS provides several key features, including performance metrics, service map, and flow table with three views: AWS service view, cluster view, and external view.</p><p>With <strong>Performance metrics</strong>, you can now scrape network-related system metrics for pods and worker nodes directly from the Network Flow Monitor agent and send them to your preferred monitoring destination. Available metrics include ingress/egress flow counts, packet counts, bytes transferred, and various allowance exceeded counters for bandwidth, packets per second, and connection tracking limits. The following screenshot shows an example of how you can use <a href="https://aws.amazon.com/grafana/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Managed Grafana</a> to visualize the performance metrics scraped using Prometheus.</p><p><img class="aligncenter size-full wp-image-100920" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/13/2025-news-eks-container-network-observability-8.png" alt="" width="5876" height="2848" /><br />With the <strong>Service map</strong> feature, you can dynamically visualize intercommunication between workloads in your cluster, making it straightforward to understand your application topology with a quick look. The service map helps you quickly identify performance issues by highlighting key metrics such as retransmissions, retransmission timeouts, and data transferred for network flows between communicating pods.</p><p>Let me show you how this works with a sample e-commerce application. 
The service map provides both high-level and detailed views of your microservices architecture. In this e-commerce example, we can see three core microservices working together: the <strong>GraphQL service</strong> acts as an API gateway, orchestrating requests between the frontend and backend services.</p><p>When a customer browses products or places an order, the GraphQL service coordinates communication with both the <strong>products service</strong> (for catalog data, pricing, and inventory) and the <strong>orders service</strong> (for order processing and management). This architecture allows each service to scale independently while maintaining clear separation of concerns.</p><p><img class="aligncenter size-full wp-image-101085" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025-news-eks-container-network-observability-rev-1.png" alt="" width="1501" height="664" /></p><p>For deeper troubleshooting, you can expand the view to see individual pod instances and their communication patterns. The detailed view reveals the complexity of microservices communication. Here, you can see multiple pod instances for each service and the network of connections between them.</p><p>This granular visibility is crucial for identifying issues like uneven load distribution, pod-to-pod communication bottlenecks, or when specific pod instances are experiencing higher latency. 
For example, if one GraphQL pod is making disproportionately more calls to a particular products pod, you can quickly spot this pattern and investigate potential causes.</p><p><img class="aligncenter size-full wp-image-101086" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025-news-eks-container-network-observability-rev-2.png" alt="" width="1492" height="650" /></p><p>Use the <strong>Flow table</strong> to monitor the top talkers across Kubernetes workloads in your cluster from three different perspectives, each providing unique insights into your network traffic patterns:</p><ul><li><strong>AWS service view</strong> shows which workloads generate the most traffic to Amazon Web Services (AWS) services such as <a href="https://aws.amazon.com/dynamodb/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon DynamoDB</a> and <a href="https://aws.amazon.com/s3/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a>, so you can optimize data access patterns and identify potential cost optimization opportunities.</li>
<li><strong>Cluster view</strong> reveals the heaviest communicators within your cluster (east-west traffic), so you can spot chatty microservices that might benefit from optimization or colocation strategies.</li>
<li><strong>External view</strong> identifies workloads with the highest traffic to destinations outside AWS (internet or on premises), which is useful for security monitoring and bandwidth management.</li>
</ul><p>The flow table provides detailed metrics and filtering capabilities to analyze network traffic patterns. In this example, we can see the flow table displaying cluster view traffic between our e-commerce services. The table shows that the <code>orders</code> pod is communicating with multiple <code>products</code> pods, transferring large amounts of data. This pattern suggests the orders service is making frequent product lookups during order processing.</p><p>The filtering capabilities are useful for troubleshooting; for example, you can focus on traffic from a specific orders pod. This granular filtering helps you quickly isolate communication patterns when investigating performance issues. For instance, if customers are experiencing slow checkout times, you can filter to see if the orders service is making too many calls to the products service, or if there are network bottlenecks between specific pod instances.</p><p><img class="aligncenter size-full wp-image-101118" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025-news-eks-container-network-observability-rev-5.png" alt="" width="1424" height="639" /></p><p><strong>Additional things to know<br /></strong> Here are key points to note about Container Network Observability in EKS:</p><ul><li><strong>Pricing</strong> – For network monitoring, you pay standard Amazon CloudWatch Network Flow Monitor pricing.</li>
<li><strong>Availability</strong> – Container Network Observability in EKS is available in all commercial AWS Regions where Amazon CloudWatch Network Flow Monitor is available.</li>
<li><strong>Export metrics to your preferred monitoring solution</strong> – Metrics are available in OpenMetrics format, compatible with Prometheus and Grafana. For configuration details, refer to <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-NetworkFlowMonitor.html">Network Flow Monitor documentation</a>.</li>
</ul><p>Get started with <a href="https://docs.aws.amazon.com/eks/latest/userguide/network-observability.html">Container Network Observability in Amazon EKS</a> today to improve network observability in your cluster.</p><p>Happy building!<br />— <a href="https://www.linkedin.com/in/donnieprakoso">Donnie</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="b6d89190-845d-4e97-8c02-169b0d993998" data-title="Monitor network performance and traffic across your EKS clusters with Container Network Observability" data-url="https://aws.amazon.com/blogs/aws/monitor-network-performance-and-traffic-across-your-eks-clusters-with-container-network-observability/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/monitor-network-performance-and-traffic-across-your-eks-clusters-with-container-network-observability/"/>
    <updated>2025-11-19T20:05:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/new-amazon-bedrock-service-tiers-help-you-match-ai-workload-performance-with-cost/</id>
    <title><![CDATA[New Amazon Bedrock service tiers help you match AI workload performance with cost]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> introduces new service tiers that give you more control over your AI workload costs while maintaining the performance levels your applications need.</p><p>Working with customers building AI applications, I’ve seen firsthand how different workloads require different performance and cost trade-offs. Many organizations running AI workloads face challenges balancing performance requirements with cost optimization. Some applications need rapid response times for real-time interactions, whereas others can process data more gradually. With these challenges in mind, today we’re announcing additional pricing options that give you more flexibility in matching your workload requirements with cost optimization.</p><p>Amazon Bedrock now offers three service tiers for workloads: Priority, Standard, and Flex. Each tier is designed to match specific workload requirements. Applications have varying response time requirements based on the use case. Some applications—such as financial trading systems—demand the fastest response times, while others need rapid response times to support business processes like content generation, and applications such as content summarization can process data more gradually.</p><p>The <strong>Priority</strong> tier processes your requests ahead of other tiers, providing preferential compute allocation for mission-critical applications like customer-facing chat-based assistants and real-time language translation services, though at a premium price point. The <strong>Standard</strong> tier provides consistent performance at regular rates for everyday AI tasks, ideal for content generation, text analysis, and routine document processing. 
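</p><p>The tier guidance in this post can be reduced to a simple routing rule that maps a workload category to a tier. Here’s a minimal sketch; the category names and the <code>choose_tier</code> helper are illustrative, and selecting a tier on an actual request uses the API parameters described in the Amazon Bedrock documentation:</p>

```python
# Illustrative routing of workloads to Amazon Bedrock service tiers, following
# the mental model in this post. The category names and helper are hypothetical;
# the tier names (Priority, Standard, Flex) come from the announcement.

TIER_BY_CATEGORY = {
    "mission-critical": "priority",   # e.g. customer-facing chat assistants
    "business-standard": "standard",  # e.g. content generation, text analysis
    "business-noncritical": "flex",   # e.g. model evaluations, summarization
}

def choose_tier(category: str) -> str:
    """Return the service tier this post recommends for a workload category."""
    try:
        return TIER_BY_CATEGORY[category]
    except KeyError:
        raise ValueError(f"unknown workload category: {category}") from None

print(choose_tier("business-noncritical"))  # flex
```

<p>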
For workloads that can handle longer latency, the <strong>Flex</strong> tier offers a lower-priced, cost-effective option, well suited for model evaluations, content summarization, and multistep analysis and agentic workflows.</p><p>You can now optimize your spending by matching each workload to the most appropriate tier. For example, if you’re running a customer service chat-based assistant that needs quick responses, you can use the Priority tier to get the fastest processing times. For content summarization tasks that can tolerate longer processing times, you can use the Flex tier to reduce costs while maintaining reliable performance. For most models that support the Priority tier, customers can realize up to 25% better output tokens per second (OTPS) compared to the Standard tier.</p><p>Check the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/service-tiers-inference.html">Amazon Bedrock documentation</a> for an up-to-date list of models supported for each service tier.</p><p><strong>Choosing the right tier for your workload</strong></p><p>Here is a mental model to help you choose the right tier for your workload.</p><table class="c9" style="margin: auto;"><tbody><tr class="c7"><th class="c6">Category</th>
<th class="c6">Recommended service tier</th>
<th class="c6">Description</th>
</tr><tr><td class="c8">Mission-critical</td>
<td class="c8">Priority</td>
<td class="c8">Requests are handled ahead of other tiers. Lower latency responses for user-facing apps (for example, customer service chat assistants, real-time language translation, interactive AI assistants)</td>
</tr><tr><td class="c8">Business-standard</td>
<td class="c8">Standard</td>
<td class="c8">Responsive performance for important workloads (for example, content generation, text analysis, routine document processing)</td>
</tr><tr><td class="c8">Business-noncritical</td>
<td class="c8">Flex</td>
<td class="c8">Cost-efficient for less urgent workloads (for example, model evaluations, content summarization, multistep agentic workflows)</td>
</tr></tbody></table><p>Start by reviewing your current usage patterns with application owners. Next, identify which workloads need immediate responses and which ones can process data more gradually. You can then begin routing a small portion of your traffic through different tiers to test performance and cost benefits.</p><p>The <a href="https://calculator.aws/#/createCalculator/bedrock">AWS Pricing Calculator</a> helps you estimate costs for different service tiers by entering your expected workload for each tier. You can estimate your budget based on your specific usage patterns.</p><p>To monitor your usage and costs, you can use the <a href="https://us-east-1.console.aws.amazon.com/servicequotas/home/services/bedrock/quotas">AWS Service Quotas console</a> or <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/model-invocations.html">turn on model invocation logging in Amazon Bedrock</a> and observe the metrics with <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a>. These tools provide visibility into your token usage and help you track performance across different tiers.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/17/2025-10-17_13-49-02.png"><img class="aligncenter size-large wp-image-99925" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/17/2025-10-17_13-49-02-1024x651.png" alt="Amazon Bedrock invocations observability" width="1024" height="651" /></a></p><p>You can start using the new service tiers today. You choose the tier on a per-API call basis. Here is an example using the <code>ChatCompletions</code> OpenAI API, but you can pass the same <code>service_tier</code> parameter in the body of <code>InvokeModel</code>, <code>InvokeModelWithResponseStream</code>, <code>Converse</code>, and <code>ConverseStream</code> APIs (for supported models):</p><pre class="lang-python">from openai import OpenAI
client = OpenAI(
    base_url="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1",
    api_key="$AWS_BEARER_TOKEN_BEDROCK" # Replace with actual API key
)
completion = client.chat.completions.create(
    model="openai.gpt-oss-20b-1:0",
    messages=[
        {
            "role": "developer",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello!"
        }
    ],
    service_tier="priority"  # options: "priority", "default", "flex"
)
print(completion.choices[0].message)</pre><p>To learn more, check out the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html">Amazon Bedrock User Guide</a> or contact your AWS account team for detailed planning assistance.</p><p>I’m looking forward to hearing how you use these new pricing options to optimize your AI workloads. Share your experience with me online on social networks or connect with me at AWS events.</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="8103c140-83f2-4c60-9139-3ccbaf2db0b0" data-title="New Amazon Bedrock service tiers help you match AI workload performance with cost" data-url="https://aws.amazon.com/blogs/aws/new-amazon-bedrock-service-tiers-help-you-match-ai-workload-performance-with-cost/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
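<p>The tier-selection table above can also be sketched as a small lookup helper. This is an illustrative sketch, not part of any Bedrock API: only the tier values ("priority", "default", "flex") come from the example above, and the category names mirror the table.</p>

```python
# Illustrative mapping of the workload categories from the table above
# to the service_tier values shown in the API example.
WORKLOAD_TIERS = {
    "mission-critical": "priority",     # lowest latency, user-facing apps
    "business-standard": "default",     # responsive everyday workloads
    "business-noncritical": "flex",     # cost-efficient, latency-tolerant
}

def service_tier_for(category: str) -> str:
    """Return the service_tier value recommended for a workload category."""
    tier = WORKLOAD_TIERS.get(category.lower())
    if tier is None:
        raise ValueError(f"unknown workload category: {category!r}")
    return tier

print(service_tier_for("business-noncritical"))  # flex
```

<p>A helper like this keeps the routing decision in one place while you experiment with sending a small portion of traffic through each tier.</p>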
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/new-amazon-bedrock-service-tiers-help-you-match-ai-workload-performance-with-cost/"/>
    <updated>2025-11-18T23:29:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/accelerate-large-scale-ai-applications-with-the-new-amazon-ec2-p6-b300-instances/</id>
    <title><![CDATA[Accelerate large-scale AI applications with the new Amazon EC2 P6-B300 instances]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the general availability of <a href="https://aws.amazon.com/pm/ec2/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2)</a> P6-B300 instances, our next-generation GPU platform accelerated by NVIDIA Blackwell Ultra GPUs. These instances deliver 2 times the networking bandwidth and 1.5 times the GPU memory of previous-generation instances, creating a balanced platform for large-scale AI applications.</p><p>With these improvements, P6-B300 instances are ideal for training and serving large-scale <a href="https://aws.amazon.com/ai/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AI models</a>, particularly those employing sophisticated techniques such as <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features-v2-expert-parallelism.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Mixture of Experts (MoE)</a> and multimodal processing. For organizations working with trillion-parameter models and requiring distributed training across thousands of GPUs, these instances provide the perfect balance of compute, memory, and networking capabilities.<a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/p6b300screen.png"><img class="alignright wp-image-100819" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/12/p6b300screen-300x287.png" alt="" width="341" height="326" /></a></p><p><strong>Improvements compared to predecessors</strong><br />The P6-B300 instances deliver 6.4 Tbps <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Elastic Fabric Adapter (EFA) networking</a> bandwidth, supporting efficient communication across large GPU clusters. 
These instances feature 2.1TB of GPU memory, allowing large models to reside within a single NVLink domain, which significantly reduces model sharding and communication overhead. When combined with EFA networking and the advanced virtualization and security capabilities of <a href="https://aws.amazon.com/ec2/nitro/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Nitro System</a>, these instances provide unprecedented speed, scale, and security for AI workloads.</p><p>The specs for the EC2 P6-B300 instances are as follows.</p><table class="c9"><tbody><tr class="c7"><td class="c6"><strong>Instance size</strong></td>
<td class="c6"><strong>VCPUs</strong></td>
<td class="c6"><strong>System memory</strong></td>
<td class="c6"><strong>GPUs</strong></td>
<td class="c6"><strong>GPU memory</strong></td>
<td class="c6"><strong>GPU-GPU interconnect</strong></td>
<td class="c6"><strong>EFA network bandwidth</strong></td>
<td class="c6"><strong>ENA bandwidth</strong></td>
<td class="c6"><strong>EBS bandwidth</strong></td>
<td class="c6"><strong>Local storage</strong></td>
</tr><tr class="c8"><td class="c6"><strong>P6-B300.48xlarge</strong></td>
<td class="c6">192</td>
<td class="c6">4TB</td>
<td class="c6">8x B300 GPU</td>
<td class="c6">2144GB HBM3e</td>
<td class="c6">1800 GB/s</td>
<td class="c6">6.4 Tbps</td>
<td class="c6">300 Gbps</td>
<td class="c6">100 Gbps</td>
<td class="c6">8x 3.84TB</td>
</tr></tbody></table><p><strong>Good to know</strong><br />In terms of persistent storage, AI workloads primarily use a combination of high-performance persistent storage options such as <a href="https://aws.amazon.com/fsx/lustre/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon FSx for Lustre</a>, <a href="https://aws.amazon.com/s3/storage-classes/express-one-zone/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon S3 Express One Zone</a>, and <a href="https://aws.amazon.com/ebs/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Elastic Block Store (Amazon EBS)</a>, depending on price performance considerations. For illustration, the dedicated 300 Gbps <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Elastic Network Adapter (ENA) networking</a> on P6-B300 enables high-throughput hot storage access with S3 Express One Zone, supporting large-scale training workloads. If you’re using FSx for Lustre, you can now use EFA with GPUDirect Storage (GDS) to achieve up to 1.2 Tbps of throughput to the Lustre file system on the P6-B300 instances to quickly load your models.</p><p><strong>Available now</strong><br />The P6-B300 instances are now available through Amazon EC2 Capacity Blocks for ML and Savings Plans in the US West (Oregon) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Region</a>.<br />For on-demand reservation of P6-B300 instances, please reach out to your account manager. As usual with Amazon EC2, you pay only for what you use. For more information, refer to <a href="https://aws.amazon.com/ec2/pricing/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon EC2 Pricing</a>. 
Check out the full collection of <a href="https://aws.amazon.com/ec2/instance-types/">accelerated computing instances</a> to help you start migrating your applications.</p><p>To learn more, visit our <a href="https://aws.amazon.com/ec2/instance-types/p6/">Amazon EC2 P6-B300 instances page</a>. Send feedback to <a href="https://repost.aws/tags/TAO-wqN9fYRoyrpdULLa5y7g/amazon-ec-2/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS re:Post for EC2</a> or through your usual AWS Support contacts.</p><p>– <a href="https://www.linkedin.com/in/veliswa-boya/">Veliswa</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="39084575-5628-42aa-bfd9-2c4f192b3cdd" data-title="Accelerate large-scale AI applications with the new Amazon EC2 P6-B300 instances" data-url="https://aws.amazon.com/blogs/aws/accelerate-large-scale-ai-applications-with-the-new-amazon-ec2-p6-b300-instances/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/accelerate-large-scale-ai-applications-with-the-new-amazon-ec2-p6-b300-instances/"/>
    <updated>2025-11-18T23:16:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-lambda-load-balancers-amazon-dcv-amazon-linux-2023-and-more-november-17-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: AWS Lambda, load balancers, Amazon DCV, Amazon Linux 2023, and more (November 17, 2025)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>In the weeks before AWS re:Invent, my team is going full steam ahead preparing content for the conference. I can’t wait to meet you at one of my three talks: <a href="https://registration.awsevents.com/flow/awsevents/reinvent2025/eventcatalog/page/eventcatalog?search=CMP346&amp;trk=direct">CMP346</a>: Supercharge AI/ML on Apple Silicon with EC2 Mac, <a href="https://registration.awsevents.com/flow/awsevents/reinvent2025/eventcatalog/page/eventcatalog?search=CMP344&amp;trk=direct">CMP344</a>: Speed up Apple application builds with CI/CD on EC2 Mac, and <a href="https://registration.awsevents.com/flow/awsevents/reinvent2025/eventcatalog/page/eventcatalog?search=DEV416&amp;trk=direct">DEV416</a>: Develop your AI Agents and MCP Tools in Swift.</p><p>Last week, <a href="https://aws.amazon.com/blogs/aws/introducing-our-final-aws-heroes-of-2025/">AWS announced three new AWS Heroes</a>. The <a href="https://builder.aws.com/community/heroes">AWS Heroes program</a> recognizes a vibrant, worldwide group of AWS experts whose enthusiasm for knowledge-sharing has a real impact within the community. Welcome to the community, Dimple, Rola, and Vivek.</p><p>We also opened the <a href="https://aws-experience.com/emea/tel-aviv/gen-ai-loft-program">GenAI Loft in Tel Aviv, Israel</a>. <a href="https://aws.amazon.com/startups/lp/aws-gen-ai-lofts">AWS Gen AI Lofts</a> are collaborative spaces and immersive experiences for startups and developers. 
The Loft content is tailored to address local customer needs – from startups and enterprises to public sector organizations, bringing together developers, investors, and industry experts under one roof.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025_11_11-AWS-Gen-Ai-Loft-Tel-Aviv-253.jpg"><img class="aligncenter size-large wp-image-101091" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/17/2025_11_11-AWS-Gen-Ai-Loft-Tel-Aviv-253-1024x683.jpg" alt="GenAI Loft - TLV" width="1024" height="683" /></a></p><p>The loft is open in Tel Aviv until Wednesday, November 19. If you’re in the area, <a href="https://aws-experience.com/emea/tel-aviv/gen-ai-loft-program">check the list of sessions, workshops, and hackathons today</a>.</p><p>If you are a serverless developer, last week was really rich with news. Let’s start with these.</p><p><strong class="c6">Last week’s launches<br /></strong> Here are the launches that got my attention last week:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-lambda-rust/">AWS Lambda officially supports Rust</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-lambda-java-25/">AWS Lambda supports Java 25</a>, and <a href="https://aws.amazon.com/blogs/opensource/the-swift-aws-lambda-runtime-moves-to-awslabs/">AWS Lambda adds an experimental runtime interface client for Swift</a> – What a busy time for the Lambda service team! Support for the Rust programming language is now generally available. Although the runtime interface client existed for years, it has just graduated to version 1.0.0. 
My colleagues <a href="https://www.linkedin.com/in/julianrwood/">Julian</a> and <a href="https://www.linkedin.com/in/darko-mesaros/">Darko</a> <a href="https://aws.amazon.com/blogs/compute/building-serverless-applications-with-rust-on-aws-lambda/">wrote a blog post to showcase the benefits of using Rust for your Lambda functions</a>. Java 25 also has changes that make Lambda functions written in Java more efficient. My colleague <a href="https://www.linkedin.com/in/lefkarag/">Lefteris</a> wrote <a href="https://aws.amazon.com/blogs/compute/aws-lambda-now-supports-java-25/">a blog post to describe these benefits</a>.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-lambda-provisioned-mode-sqs-esm/">AWS Lambda announces Provisioned Mode for SQS event source mapping</a> – This lets you optimize throughput and handle traffic spikes by provisioning dedicated event polling resources.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/eventbridge-enhanced-visual-rule-builder/">Amazon EventBridge introduces enhanced visual rule builder</a> – The Amazon EventBridge enhanced visual rule builder simplifies event-driven application development with an intuitive interface, comprehensive event catalog, and integration with the <a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-schema-registry.html">EventBridge Schema Registry</a>.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-service-reference-information-sdk-operation-action-mapping/">AWS Service Reference Information now supports SDK Operation to Action mapping</a> – Besides the serverless news, this is the biggest announcement of the week in my opinion. The service reference information now includes which operations are supported by AWS services and which IAM permissions are needed to call a given operation. This will help you answer questions such as “I want to call a specific AWS service operation, which IAM permissions do I need?” <a href="https://docs.aws.amazon.com/service-authorization/latest/reference/service-reference.html">You can automate the retrieval of service reference information through a simple JSON based API</a>.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-network-load-balancer-quic-passthrough-mode/">AWS Network Load Balancer (NLB) now supports QUIC protocol in passthrough mode</a> – This provides ultra-low latency traffic forwarding with session stickiness using QUIC Connection IDs. This capability reduces application latency by 25-30% for mobile-first applications through minimized handshakes and connection resilience. NLB operates in passthrough mode, forwarding QUIC traffic directly to targets while maintaining customer control over TLS certificates and end-to-end encryption. <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-quic-protocol-support-for-network-load-balancer-accelerating-mobile-first-applications/">The blog post has the details</a>.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/application-load-balancer-jwt-verification/">Application Load Balancer (ALB) supports client credential flow with JWT verification</a> – This one is important for API developers too. It simplifies the deployment of secure machine-to-machine (M2M) and service-to-service (S2S) communications: ALB can now verify JWTs, reducing architectural complexity and simplifying security implementation.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-kms-edwards-curve-digital-signature-algorithm/">AWS KMS now supports Edwards-curve Digital Signature Algorithm (EdDSA)</a> – This capability provides 128-bit security equivalent to NIST P-256 with faster signing performance and compact sizes (64-byte signatures, 32-byte public keys). Ed25519 is ideal for IoT devices and blockchain applications requiring small key and signature sizes.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-dcv-ed2-mac-instances/">Amazon DCV now supports Amazon EC2 Mac instances</a> – This provides high-performance remote desktop access with 4K resolution and 60 FPS performance. You can connect from Windows, Linux, macOS, or web clients with features including time zone redirection and audio output.</li>
<li><a href="https://docs.aws.amazon.com/linux/al2023/release-notes/relnotes-2023.9.20251110.html">Amazon Linux 2023 version 2023.9.20251110</a> is released – It now includes packages for <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint.html">Mountpoint for Amazon S3</a>, the <a href="https://www.swift.org/get-started/cloud-services/">Swift 6.2.1 toolchain</a>, and <a href="https://nodejs.org/en/blog/release/v24.0.0">Node.js 24</a>. Installing Swift on Amazon Linux 2023 virtual machines or containers is now as easy as <code>sudo dnf install -y swiftlang</code>.</li>
</ul><p><strong class="c6">Additional updates</strong><br />Here are some additional projects, blog posts, and news items that I found interesting:</p><ul><li><a href="https://aws.amazon.com/blogs/security/amazon-elastic-kubernetes-service-gets-independent-affirmation-of-its-zero-operator-access-design/">Amazon Elastic Kubernetes Service gets independent affirmation of its zero operator access design</a> – Amazon EKS offers a zero operator access posture. AWS personnel cannot access your content. This is achieved through a combination of <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro</a> System-based instances, restricted administrative APIs, and end-to-end encryption. An independent review by NCC Group confirmed the effectiveness of these security measures.</li>
<li><a href="https://aws.amazon.com/blogs/machine-learning/make-your-web-apps-hands-free-with-amazon-nova-sonic/">Make your web apps hands-free with Amazon Nova Sonic</a> – Amazon Nova Sonic, a foundation model available in <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a>, lets you create natural, low-latency, bidirectional speech conversations for applications. Users can collaborate with applications through voice and embedded intelligence, unlocking new interaction patterns and enhancing usability. This blog post demonstrates a reference app, Smart Todo App. It shows how voice can be integrated to provide a hands-free experience for task management.</li>
<li><a href="https://aws.amazon.com/blogs/mt/aws-x-ray-sdks-daemon-migration-to-opentelemetry/">AWS X-Ray SDKs &amp; Daemon migration to OpenTelemetry</a> – AWS X-Ray is transitioning to OpenTelemetry as its primary instrumentation standard for application tracing. OpenTelemetry-based instrumentation solutions are recommended for producing traces from applications and sending them to AWS X-Ray. X-Ray’s existing console experience and functionality continue to be fully supported and remains unchanged by this transition.</li>
<li><a href="https://aws.amazon.com/blogs/aws-insights/powering-the-worlds-largest-events-how-amazon-cloudfront-delivers-at-scale/">Powering the world’s largest events: How Amazon CloudFront delivers at scale</a> – Amazon CloudFront achieved a record-breaking peak of 268 terabits per second on November 1, 2025, during major game delivery workloads—enough bandwidth to simultaneously stream live sports in HD to approximately 45 million concurrent viewers. This milestone demonstrates CloudFront’s massive scale, powered by 750+ edge locations across 440+ cities globally and 1,140+ embedded PoPs within 100+ ISPs, with the latest generation delivering 3x the performance of previous versions.</li>
</ul><p><strong class="c6">Upcoming AWS events</strong><br />Check your calendars so that you can sign up for these upcoming events:</p><ul><li><a href="https://builder.aws.com/connect/events/builder-loft">AWS Builder Loft</a> – A community tech space in San Francisco where you can learn from expert sessions, join hands-on workshops, explore AI and emerging technologies, and collaborate with other builders to accelerate your ideas. Browse the <a href="https://luma.com/aws-builder-loft-events">upcoming sessions</a> and join the events that interest you.</li>
<li><a href="https://aws.amazon.com/events/community-day/">AWS Community Days</a> – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by experienced AWS users and industry leaders from around the world. I will deliver the opening keynote at the last Community Day of the year in <a href="https://www.meetup.com/aws-user-group-congo-kinshasa/events/308082108/">Kinshasa, Democratic Republic of Congo</a> (November 22). The next Community Day will be in <a href="https://aws-community.ro/">Timişoara, România</a> (April 2026). The <a href="https://aws-community.ro/call-for-papers">call for papers is now open</a>.</li>
<li><a href="https://pulse.aws/survey/LOLZYMRD?p=0">AWS Skills Center Seattle 4th Anniversary Celebration</a> – A free, public event on November 20 with a keynote, panel discussions, recruiter insights, raffles, and virtual participation options.</li>
</ul><p>Join the <a href="https://builder.aws.com/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">AWS Builder Center</a> to learn, build, and connect with builders in the AWS community. Browse here for <a href="https://aws.amazon.com/events/explore-aws-events/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">upcoming in-person events</a>, <a href="https://aws.amazon.com/developer/events/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">developer-focused events</a>, and <a href="https://aws.amazon.com/startups/events?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">events for startups</a>.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Weekly Roundup</a>!</p><a href="https://linktr.ee/sebsto">— seb</a><p>This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="9be47d17-a81a-4ad8-b77a-eb61ab990a73" data-title="AWS Weekly Roundup: AWS Lambda, load balancers, Amazon DCV, Amazon Linux 2023, and more (November 17, 2025)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-lambda-load-balancers-amazon-dcv-amazon-linux-2023-and-more-november-17-2025/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-lambda-load-balancers-amazon-dcv-amazon-linux-2023-and-more-november-17-2025/"/>
    <updated>2025-11-17T18:59:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-lambda-enhances-sqs-processing-with-new-provisioned-mode-3x-faster-scaling-16x-higher-capacity/</id>
    <title><![CDATA[AWS Lambda enhances event processing with provisioned mode for SQS event-source mapping]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the general availability of provisioned mode for <a href="https://aws.amazon.com/lambda">AWS Lambda</a> with <a href="https://aws.amazon.com/sqs/">Amazon Simple Queue Service (Amazon SQS)</a> <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html">Event Source Mapping (ESM)</a>, a new feature that customers can use to optimize the throughput of their event-driven applications by configuring dedicated polling resources. Using this new capability, which provides 3x faster scaling and 16x higher concurrency, you can process events with lower latency, handle sudden traffic spikes more effectively, and maintain precise control over your event processing resources.</p><p>Modern applications increasingly rely on event-driven architectures where services communicate through events and messages. Amazon SQS is commonly used as an event source for Lambda functions, so developers can build loosely coupled, scalable applications. Although the SQS ESM automatically handles queue polling and function invocation, customers with stringent performance requirements have asked for more control over the polling behavior to handle spiky traffic patterns and maintain low processing latency.</p><p>Provisioned mode for SQS ESM addresses these needs by introducing event pollers, which are dedicated resources that remain ready to handle expected traffic patterns. These event pollers can auto scale by up to 1,000 concurrent executions per minute, more than three times faster than before, to handle sudden spikes in event traffic, and provide up to 20,000 concurrent executions, 16 times higher capacity, to process millions of events with Lambda functions. This enhanced scaling behavior helps customers maintain predictable low latency even during traffic surges.</p><p>Enterprises across various industries, from financial services to gaming companies, are using AWS Lambda with Amazon SQS to process real-time events for their mission-critical applications. 
These organizations, which include some of the largest online gaming platforms and financial institutions, require consistent subsecond processing times for their event-driven workloads, particularly during periods of peak usage. Provisioned mode for SQS ESM helps you meet these stringent performance requirements while maintaining cost controls.</p><p><strong>Enhanced control and performance</strong></p><p>With provisioned mode, you can configure both minimum and maximum numbers of event pollers for your SQS ESM. Each event poller represents a unit of compute that handles queue polling, event batching, and filtering before invoking Lambda functions. Each event poller can handle up to 1 MB/sec of throughput, up to 10 concurrent invokes, or up to 10 SQS polling API calls per second. Setting a minimum number of event pollers gives your application a baseline processing capacity that can immediately handle sudden traffic increases. We recommend that you set the minimum event pollers required to handle your known peak workload requirements. The optional maximum setting helps prevent overloading downstream systems by limiting the total processing throughput.</p><p>The new mode delivers significant improvements in how your event-driven applications handle varying workloads. When traffic increases, your ESM detects the growing backlog within seconds and dynamically scales event pollers between your configured minimum and maximum values three times faster than before. This enhanced scaling capability is complemented by a substantial increase in processing capacity, with support for up to 2 GB/s of aggregate traffic and up to 20,000 concurrent requests, 16x higher than previously possible. By maintaining a minimum number of ready-to-use event pollers, your application achieves predictable performance, handling sudden traffic spikes without the delay typically associated with scaling up resources. 
During low traffic periods, your ESM automatically scales down to your configured minimum number of event pollers, which means you can optimize costs while maintaining responsiveness.</p><p><strong>Let’s try it out</strong></p><p>Enabling provisioned mode is straightforward in the <a href="https://aws.amazon.com/console/">AWS Management Console</a>. You need to already have an SQS queue configured and a Lambda function. To get started, in the <strong>Configuration</strong> tab for your Lambda function, choose <strong>Triggers</strong>, then <strong>Add trigger</strong>. This will bring up a user interface where you can configure your trigger. Choose <strong>SQS</strong> from the dropdown menu for source and then select the <strong>SQS queue</strong> you want to use.</p><p>Under <strong>Event poller configuration</strong>, you will now see a new option called <strong>Provisioned mode</strong>. Select <strong>Configure</strong> to reveal settings for <strong>Minimum event pollers</strong> and <strong>Maximum event pollers</strong>, each with defaults and minimum and maximum values displayed.</p><p><img class="alignnone size-large wp-image-97833" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/07/sqs-provision-01-1024x541.png" alt="Configuration panel for SQS provisioned Mode" width="1024" height="541" /></p><p>After you have configured <strong>Provisioned mode</strong>, you can save your trigger. 
If you need to make changes later, you can find the current configuration under the <strong>Triggers</strong> tab in the AWS Lambda configuration section, and you can modify your current settings there.</p><p><img class="alignnone size-large wp-image-97835" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/07/sqs-provision-02-1024x475.png" alt="SQS Provisioned Poller config" width="1024" height="475" /></p><p><strong>Monitoring and observability</strong></p><p>You can monitor your provisioned mode usage through Amazon CloudWatch metrics. The <code>ProvisionedPollers</code> metric shows the number of active event pollers processing events in one-minute windows.</p><p><strong>Now available</strong></p><p>Provisioned mode for Lambda SQS ESM is available today in all commercial <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a>. You can start using this feature through the AWS Management Console, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, or <a href="https://aws.amazon.com/developer/tools/">AWS SDKs</a>. Pricing is based on the number of event pollers provisioned and the duration they’re provisioned for, measured in Event Poller Units (EPUs). Each EPU supports up to 1 MB per second throughput capacity per event poller, with a minimum of 2 event pollers per ESM. See the <a href="https://aws.amazon.com/lambda/pricing/">AWS pricing page</a> for more information on EPU charges.</p><p>To learn more about provisioned mode for SQS ESM, visit the AWS Lambda <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html">documentation</a>. 
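</p><p>As a rough illustration of the EPU-based pricing just described, the following sketch multiplies provisioned pollers by hours. The per-EPU-hour rate is a placeholder rather than a real AWS price, and the one-poller-per-EPU mapping is an assumption based on the 1 MB/sec figure above; check the AWS Lambda pricing page for actual rates.</p>

```python
# Back-of-the-envelope estimate of provisioned-mode billable units (EPUs).
# ASSUMPTION: one event poller corresponds to one EPU of provisioned
# capacity; the rate below is a PLACEHOLDER, not a real AWS price.
def epu_hours(provisioned_pollers, hours):
    """Return poller-hours, the quantity EPU billing is measured over."""
    return provisioned_pollers * hours

HYPOTHETICAL_RATE_PER_EPU_HOUR = 0.05  # placeholder, NOT a real price

pollers = 10       # pollers kept provisioned
hours = 24 * 30    # roughly one month
units = epu_hours(pollers, hours)
estimate = units * HYPOTHETICAL_RATE_PER_EPU_HOUR
print(units, round(estimate, 2))
```

<p>Because the minimum is 2 event pollers per ESM, even an idle mapping accrues at least 2 EPUs of provisioned capacity for as long as provisioned mode is enabled.</p><p>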
Start building more responsive event-driven applications today with enhanced control over your event processing resources.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="7ab41179-f13d-486b-9b43-d43156554dc6" data-title="AWS Lambda enhances event processing with provisioned mode for SQS event-source mapping" data-url="https://aws.amazon.com/blogs/aws/aws-lambda-enhances-sqs-processing-with-new-provisioned-mode-3x-faster-scaling-16x-higher-capacity/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-lambda-enhances-sqs-processing-with-new-provisioned-mode-3x-faster-scaling-16x-higher-capacity/"/>
    <updated>2025-11-14T18:45:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-aws-iot-core-device-location-integration-with-amazon-sidewalk/</id>
    <title><![CDATA[Introducing AWS IoT Core Device Location integration with Amazon Sidewalk]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, I’m happy to announce a new capability to resolve location data for <a href="https://www.amazon.com/Amazon-Sidewalk/b?ie=UTF8&amp;node=21328123011">Amazon Sidewalk</a> enabled devices with the <a href="https://docs.aws.amazon.com/iot/latest/developerguide/device-location.html">AWS IoT Core Device Location service</a>. This feature removes the requirement to install GPS modules in a Sidewalk device and also simplifies the developer experience of resolving location data. Devices powered by small coin cell batteries, such as smart home sensor trackers, use Sidewalk to connect. Supporting built-in GPS modules for products that move around is not only expensive, it can create challenges in ensuring optimal battery life and longevity.</p><p>With this launch, Internet of Things (IoT) device manufacturers and solution developers can build asset tracking and location monitoring solutions using Sidewalk-enabled devices by sending Bluetooth Low Energy (BLE), Wi-Fi, or Global Navigation Satellite System (GNSS) information to <a href="https://aws.amazon.com/iot/">AWS IoT</a> for location resolution. They can then send the resolved location data to an <a href="https://docs.aws.amazon.com/iot/latest/developerguide/topics.html">MQTT topic</a> or <a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html">AWS IoT rule</a> and route the data to other <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> services, thus taking advantage of different AWS Cloud capabilities through AWS IoT Core. This simplifies their software development and gives them more options to choose the optimal location source, thereby improving their product performance.</p><p>This launch addresses <a href="https://aws.amazon.com/blogs/iot/building-track-and-trace-applications-using-aws-iot-core-for-amazon-sidewalk/">previous challenges and architecture complexity</a>. 
You don’t need location sensing on network-based devices when you use the Sidewalk network infrastructure itself to determine device location, which eliminates the need for power-hungry and costly GPS hardware on the device. This feature also allows devices to efficiently measure and report location data from GNSS and Wi-Fi, thus extending the product battery life. Therefore, you can build a more compelling solution for asset tracking and location-aware IoT applications with these enhancements.</p><p>For those unfamiliar with Amazon Sidewalk and the AWS IoT Core Device Location service, I’ll briefly explain their history and context. If you’re already familiar with them, you can skip to the section on how to get started.</p><p><strong class="c6">AWS IoT Core integrations with Amazon Sidewalk</strong><br />Amazon Sidewalk is a shared network that helps devices work better through improved connectivity options. It’s designed to support a wide range of customer devices with capabilities ranging from locating pets or valuables, to smart home security and lighting control, and remote diagnostics for appliances and tools.</p><p>Amazon Sidewalk is a secure community network that uses Amazon Sidewalk Gateways (also called Sidewalk Bridges), such as compatible Amazon Echo and Ring devices, to provide cloud connectivity for IoT endpoint devices. Amazon Sidewalk enables low-bandwidth and long-range connectivity at home and beyond using BLE for short-distance communication and LoRa and frequency-shift keying (FSK) radio protocols at 900MHz frequencies to cover longer distances.</p><p>Sidewalk now provides <a href="https://coverage.sidewalk.amazon/">coverage to more than 90% of the US population</a> and supports long-range connected solutions for communities and enterprises. 
Users with Ring cameras or Alexa devices that act as a Sidewalk Bridge can choose to contribute a small portion of their internet bandwidth, which is pooled to create a shared network that benefits all Sidewalk-enabled devices in a community.</p><p><img class="aligncenter size-full wp-image-100180 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/27/2025-aws-iot-with-sidewalk.png" alt="" width="943" height="416" /></p><p>In March 2023, <a href="https://aws.amazon.com/about-aws/whats-new/2023/03/aws-iot-core-deepens-integration-amazon-sidewalk/">AWS IoT Core deepened its integration with Amazon Sidewalk</a> to seamlessly provision, onboard, and monitor Sidewalk devices with qualified hardware development kits (HDKs), SDKs, and sample applications. As of this writing, AWS IoT Core is the only way for customers to connect to the Sidewalk network.</p><p>In the <a href="https://us-east-1.console.aws.amazon.com/iot/home?region=us-east-1#/wireless/devices?tab=sidewalk">AWS IoT Core console</a>, you can add your Sidewalk device, provision and register your devices, and connect your Sidewalk endpoint to the cloud. To learn more about onboarding your Sidewalk devices, visit <a href="https://docs.aws.amazon.com/iot-wireless/latest/developerguide/sidewalk-getting-started.html">Getting started with AWS IoT Core for Amazon Sidewalk</a> in the AWS IoT Wireless Developer Guide.</p><p><img class="aligncenter wp-image-100276 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/31/2025-aws-iot-with-sidewalk-console.png" alt="" width="2606" height="2161" /></p><p>In November 2022, we <a href="https://aws.amazon.com/about-aws/whats-new/2022/11/aws-iot-core-new-device-location-feature/">announced AWS IoT Core Device Location service</a>, a new feature that you can use to get the geo-coordinates of your IoT devices even when the device doesn’t have a GPS module. 
You can use the Device Location service as a simple request and response HTTP API, or you can use it with IoT connectivity pathways like MQTT, LoRaWAN, and now with Amazon Sidewalk.</p><p>In the <a href="https://us-east-1.console.aws.amazon.com/iot/home?region=us-east-1#/device-location-test">AWS IoT Core console</a>, you can test the Device Location service to resolve the location of your device by importing device payload data. Resource location is reported as a GeoJSON payload. To learn more, visit <a href="https://docs.aws.amazon.com/iot/latest/developerguide/device-location.html">AWS IoT Core Device Location</a> in the AWS IoT Core Developer Guide.</p><p><img class="aligncenter size-full wp-image-100181 c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/2025-aws-iot-core-device-location.png" alt="" width="2326" height="2278" /></p><p>Customers across multiple industries like automotive, supply chain, and industrial tools have requested a simplified solution such as the Device Location service to extract location data from Sidewalk products. This would streamline customer software development and give them more options to choose the optimal location source, thereby improving their products.</p><p><strong class="c6">Get started with a Device Location integration with Amazon Sidewalk</strong><br />To enable Device Location for Sidewalk devices, go to the <strong>AWS IoT Core for Amazon Sidewalk</strong> section under <strong>LPWAN devices</strong> in the <a href="https://us-east-1.console.aws.amazon.com/iot/home?region=us-east-1#/wireless/devices?tab=sidewalk">AWS IoT Core console</a>. 
Choose <strong>Provision device</strong>, or choose your existing device to edit its settings, and select <strong>Activate positioning</strong> in the <strong>Geolocation</strong> option when creating or updating your Sidewalk devices.</p><p>While activating positioning, you need to specify a destination where you want to send your location data. The destination can either be an AWS IoT rule or an MQTT topic.</p><p><img class="aligncenter wp-image-100278 size-full c9" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/31/2025-aws-iot-with-sidewalk-location-1-1.png" alt="" width="1962" height="2270" /></p><p>Here is a sample <a href="https://aws.amazon.com/cli">AWS Command Line Interface (AWS CLI)</a> command to enable positioning while provisioning a new Sidewalk device:</p><pre class="lang-bash">$ aws iotwireless create-wireless-device --type Sidewalk \
  --name "demo-1" --destination-name "New-1" \
--positioning Enabled</pre><p>After your Sidewalk device establishes a connection to the Amazon Sidewalk network, the device SDK will send the GNSS-, Wi-Fi-, or BLE-based information to AWS IoT Core for Amazon Sidewalk. If you have enabled positioning, AWS IoT Core Device Location will resolve the location data and send it to the specified destination. After your Sidewalk device transmits location measurement data, the resolved geographic coordinates and a map pin will also be displayed in the Position section for the selected device.</p><p><img class="aligncenter wp-image-100280 size-full c9" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/31/2025-aws-iot-with-sidewalk-location-2.png" alt="" width="1982" height="2323" /></p><p>You will also get location information delivered to your destination in GeoJSON format, as shown in the following example:</p><pre class="lang-json">{
    "coordinates": [
        13.376076698303223,
        52.51823043823242
    ],
    "type": "Point",
    "properties": {
        "verticalAccuracy": 45,
        "verticalConfidenceLevel": 0.68,
        "horizontalAccuracy": 303,
        "horizontalConfidenceLevel": 0.68,
        "country": "USA",
        "state": "CA",
        "city": "Sunnyvale",
        "postalCode": "91234",
        "timestamp": "2025-11-18T12:23:58.189Z"
    }
}</pre><p>You can monitor the Device Location data between your Sidewalk devices and AWS Cloud by enabling <a href="https://docs.aws.amazon.com/iot/latest/developerguide/cloud-watch-logs.html">Amazon CloudWatch Logs for AWS IoT Core</a>. To learn more, visit the <a href="https://docs.aws.amazon.com/iot-wireless/latest/developerguide/iot-sidewalk.html">AWS IoT Core for Amazon Sidewalk</a> in the AWS IoT Wireless Developer Guide.</p><p><strong class="c6">Now available</strong><br />AWS IoT Core Device Location integration with Amazon Sidewalk is now generally available in the US East (N. Virginia) Region. To learn more about use cases, documentation, sample codes, and partner devices, visit the <a href="https://aws.amazon.com/iot-core/sidewalk/">AWS IoT Core for Amazon Sidewalk product page</a>.</p><p>Give it a try in the <a href="https://us-east-1.console.aws.amazon.com/iot/home?region=us-east-1#/wireless/devices?tab=sidewalk">AWS IoT Core console</a> and send feedback to <a href="https://repost.aws/tags/TA-HbE5sc6Si6BzWWIPv4LfQ/aws-iot-core">AWS re:Post for AWS IoT Core</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="02f389a6-2861-43f2-90b8-79398ca57fe1" data-title="Introducing AWS IoT Core Device Location integration with Amazon Sidewalk" data-url="https://aws.amazon.com/blogs/aws/introducing-aws-iot-core-device-location-integration-with-amazon-sidewalk/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
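<p>The destination payload shown in the GeoJSON example above is straightforward to consume downstream. The following minimal Python sketch parses it; the field names come from the sample payload, while the handler function itself is hypothetical.</p>

```python
import json

# Sample destination payload, in the GeoJSON shape shown in this post.
payload = """{
    "coordinates": [13.376076698303223, 52.51823043823242],
    "type": "Point",
    "properties": {
        "horizontalAccuracy": 303,
        "horizontalConfidenceLevel": 0.68,
        "timestamp": "2025-11-18T12:23:58.189Z"
    }
}"""

def parse_position(geojson_text):
    """Extract longitude, latitude, and horizontal accuracy (meters)."""
    doc = json.loads(geojson_text)
    lon, lat = doc["coordinates"]  # GeoJSON order is [longitude, latitude]
    return lon, lat, doc["properties"].get("horizontalAccuracy")

print(parse_position(payload))
```

<p>Note the GeoJSON convention: the coordinate pair is ordered longitude first, then latitude, which is easy to get backwards when feeding the values into mapping libraries.</p>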
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-aws-iot-core-device-location-integration-with-amazon-sidewalk/"/>
    <updated>2025-11-13T20:09:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-our-final-aws-heroes-of-2025/</id>
    <title><![CDATA[Introducing Our Final AWS Heroes of 2025]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>With AWS re:Invent approaching, we’re celebrating three exceptional AWS Heroes whose diverse journeys and commitment to knowledge sharing are empowering builders worldwide. From advancing women in tech and rural communities to bridging academic and industry expertise and pioneering enterprise AI solutions, these leaders exemplify the innovative spirit that drives our community forward. Their stories showcase how technical excellence, combined with passionate advocacy and mentorship, strengthens the global AWS community.</p><h2 class="c6">Dimple Vaghela – Ahmedabad, India</h2><p><a href="https://builder.aws.com/community/@dimplevaghela" target="_blank" rel="noopener noreferrer"><img class="alignleft" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/dimple.png" width="175" height="263" alt="image" /></a>Community Hero <a href="https://builder.aws.com/community/@dimplevaghela" target="_blank" rel="noopener noreferrer">Dimple Vaghela</a> leads both the AWS User Group Ahmedabad and AWS User Group Vadodara, where she drives cloud education and technical growth across the region. Her impact spans organizing numerous AWS meetups, workshops, and AWS Community Days that have helped thousands of learners advance their cloud careers. Dimple launched the “Cloud for Her” project to empower girls from rural areas in technology careers and serves as co-organizer of the Women in Tech India User Group. 
Her exceptional leadership and community contributions were recognized at AWS re:Invent 2024 with the AWS User Group Leader Award in the Ownership category, while she continues building a more inclusive cloud community through speaking, mentoring, and organizing impactful tech events.</p><h2 class="c6">Rola Dali – Montreal, Canada</h2><p><a href="https://builder.aws.com/community/@rdali" target="_blank" rel="noopener noreferrer"><img class="alignleft" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/RolaDali.png" width="175" height="263" alt="image" /></a>Community Hero <a href="https://builder.aws.com/community/@rdali" target="_blank" rel="noopener noreferrer">Rola Dali</a> is a senior Data, ML, and AI expert specializing in AWS cloud, bringing unique perspective from her PhD in neuroscience and bioinformatics with expertise in human genomics. As co-organizer of the AWS Montreal User Group and a former AWS Community Builder, her commitment to the cloud community earned her the prestigious Golden Jacket recognition in 2024. She actively shapes the tech community by architecting AWS solutions, sharing knowledge through blogs and lectures, and mentoring women entering tech, academics transitioning to industry, and students starting their careers.</p><h2 class="c6">Vivek Velso – Toronto, Canada</h2><p><a href="https://builder.aws.com/community/@vivekv" target="_blank" rel="noopener noreferrer"><img class="alignleft" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/10/Pinpoint_LT.png" width="175" height="263" alt="image" /></a>Machine Learning Hero <a href="https://builder.aws.com/community/@vivekv" target="_blank" rel="noopener noreferrer">Vivek Velso</a> is a seasoned technology leader with over 27 years of IT industry experience, specializing in helping organizations modernize their cloud infrastructure for generative AI workloads. 
His deep AWS expertise earned him the prestigious Golden Jacket award for completing all AWS certifications, and he actively contributes to the AWS Subject Matter Expert (SME) program for multiple certification exams. A former AWS Community Builder and AWS Ambassador, he continues to share his knowledge through more than 100 technical blogs, articles, conference engagements, and AWS livestreams, helping the community confidently embrace cloud innovation.</p><h2 class="c6">Learn More</h2><p>Visit the <a href="https://builder.aws.com/connect/community/heroes" target="_blank" rel="noopener">AWS Heroes webpage</a> if you’d like to learn more about the AWS Heroes program, or to connect with a Hero near you.</p><p>— <a href="https://twitter.com/taylorjacobsen" target="_blank" rel="noopener noreferrer">Taylor</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="a5cc8bb6-9c26-426a-9d47-66876141bf18" data-title="Introducing Our Final AWS Heroes of 2025" data-url="https://aws.amazon.com/blogs/aws/introducing-our-final-aws-heroes-of-2025/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-our-final-aws-heroes-of-2025/"/>
    <updated>2025-11-12T21:05:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/secure-eks-clusters-with-the-new-support-for-amazon-eks-in-aws-backup/</id>
    <title><![CDATA[Secure EKS clusters with the new support for Amazon EKS in AWS Backup]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources-console.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">support for Amazon EKS in AWS Backup</a> to provide the capability to secure Kubernetes applications using the same centralized platform you trust for your other <a href="https://aws.amazon.com/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Web Services (AWS)</a> services. This integration eliminates the complexity of protecting containerized applications while providing enterprise-grade backup capabilities for both cluster configurations and application data. <a href="https://aws.amazon.com/backup/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Backup</a> is a fully managed service to centralize and automate data protection across AWS and on-premises workloads. <a href="https://aws.amazon.com/pm/eks/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Elastic Kubernetes Service (Amazon EKS)</a> is a fully managed Kubernetes service to manage availability and scalability of Kubernetes clusters. With this new capability, you can centrally manage and automate data protection across your Amazon EKS environments alongside other AWS services.</p><p>Until now, customers relied on custom solutions or third-party tools to back up their EKS clusters, which required complex scripting and maintenance for each cluster. 
The support for Amazon EKS in AWS Backup eliminates this overhead by providing a single, centralized, and policy-driven solution that protects both EKS clusters (Kubernetes deployments and resources) and stateful data (stored in <a href="https://aws.amazon.com/ebs/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Elastic Block Store (Amazon EBS)</a>, <a href="https://aws.amazon.com/efs/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Elastic File System (Amazon EFS)</a>, and <a href="https://aws.amazon.com/pm/serv-s3/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a> only) without the need to manage custom scripts across clusters. For restores, customers were previously required to restore their EKS backups to a target EKS cluster, which was either the source EKS cluster or a new EKS cluster, requiring that EKS cluster infrastructure be provisioned ahead of time. With this new capability, during a restore of EKS cluster backups, customers also have the option to create a new EKS cluster based on previous EKS cluster configuration settings and restore to this new EKS cluster, with AWS Backup managing the provisioning of the EKS cluster on the customer’s behalf.</p><p>This support includes policy-based automation for protecting single or multiple EKS clusters. This single data protection policy provides a consistent experience across all services AWS Backup supports. It allows creation of immutable backups to prevent malicious or inadvertent changes, helping customers meet their regulatory compliance needs. 
In the event of customer data loss or cluster downtime, customers can easily recover their EKS cluster data from encrypted, immutable backups using an easy-to-use interface and maintain business continuity while running their EKS clusters at scale.</p><p><strong>How it works</strong><br />Here’s how I set up support for on-demand backup of my EKS cluster in AWS Backup. First, I’ll show a walkthrough of the backup process, then demonstrate a restore of the EKS cluster.</p><p><strong>Backup</strong><br />In the <a href="https://console.aws.amazon.com/backup/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Backup console</a>, in the left navigation pane, I choose <strong>Settings</strong> and then <strong>Configure resources</strong> to opt in to enable protection of EKS clusters in AWS Backup.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS1-2.png"><img class="size-large wp-image-100052 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS1-2-1024x400.png" alt="" width="1024" height="400" /></a></p><p>Now that I’ve enabled Amazon EKS, in <strong>Protected resources</strong> I choose <strong>Create on-demand backup</strong> to create a backup for my already existing EKS cluster <code>floral-electro-unicorn</code>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS2-1.png"><img class="size-large wp-image-100054 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS2-1-1024x239.png" alt="" width="1024" height="239" /></a></p><p>Enabling EKS in Settings ensures that it shows up as a <strong>Resource type</strong> when I create an on-demand backup for the EKS cluster. 
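</p><p>The same on-demand backup can also be started programmatically. The following sketch only builds the request shape for the AWS Backup <code>StartBackupJob</code> API; the account ID, Region, and vault name are placeholders, while the role and cluster names reuse the examples from this walkthrough. A real invocation would pass these parameters to <code>boto3.client("backup").start_backup_job(...)</code>.</p>

```python
# Build the StartBackupJob request for an EKS cluster. No AWS call is made;
# account ID, Region, and vault name below are placeholders.
def eks_backup_request(account_id, region, cluster_name, vault, role_name):
    # EKS cluster ARNs follow arn:aws:eks:<region>:<account>:cluster/<name>
    cluster_arn = f"arn:aws:eks:{region}:{account_id}:cluster/{cluster_name}"
    return {
        "BackupVaultName": vault,
        "ResourceArn": cluster_arn,
        "IamRoleArn": f"arn:aws:iam::{account_id}:role/{role_name}",
    }

params = eks_backup_request(
    "123456789012", "us-east-1", "floral-electro-unicorn",
    "Default", "test-eks-backup",
)
# boto3.client("backup").start_backup_job(**params)  # real invocation
print(params["ResourceArn"])
```

<p>The IAM role here plays the same part as the <code>test-eks-backup</code> role chosen in the console: it is what AWS Backup assumes to create and manage the backup.</p><p>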
I proceed to select the EKS resource type and the cluster.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS3-2.png"><img class="aligncenter size-large wp-image-100064" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS3-2-1024x437.png" alt="" width="1024" height="437" /></a></p><p>I leave the rest of the information as default, then select <strong>Choose an IAM role</strong> to select a role (<code>test-eks-backup</code>) that I’ve created and customized with the <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/iam-service-roles.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">necessary permissions for AWS Backup to assume when creating and managing backups on my behalf</a>. I choose <strong>Create on-demand backup</strong> to finalize the process.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS5.png"><img class="aligncenter size-large wp-image-100058" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS5-1024x303.png" alt="" width="1024" height="303" /></a><br />The job is initiated, and it will start running to back up both the EKS cluster state and the persistent volumes. If Amazon S3 buckets are attached to the backup, you’ll need to <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#AWSBackupServiceRolePolicyForS3Backup">add the additional Amazon S3 backup permissions <code>AWSBackupServiceRolePolicyForS3Backup</code> to your role</a>. 
This policy contains the permissions necessary for AWS Backup to back up any Amazon S3 bucket, including access to all objects in a bucket and any associated AWS KMS key.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/awsbackupEKS8-1.png"><img class="aligncenter size-large wp-image-100408" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/awsbackupEKS8-1-1024x389.png" alt="" width="1024" height="389" /></a><br />The job completed successfully, and now EKS cluster <code>floral-electro-unicorn</code> is backed up by AWS Backup.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS7-1.png"><img class="aligncenter size-large wp-image-100063" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/awsbackupEKS7-1-1024x263.png" alt="" width="1024" height="263" /></a><br /><strong>Restore</strong><br />Using the AWS Backup Console, I choose the EKS backup composite recovery point to start the process of restoring the EKS cluster backups, then choose <strong>Restore</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/04/restore2-1.png"><img class="aligncenter size-large wp-image-100356" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/04/restore2-1-1024x512.png" alt="" width="1024" height="512" /></a><br />I choose <strong>Restore full EKS cluster</strong> to restore the full EKS backup. To restore to an existing cluster, I <strong>Choose an existing cluster</strong> then select the cluster from the drop-down list. 
I choose the <strong>Default order</strong> as the order in which individual Kubernetes resources will be restored.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/restore4-3.png"><img class="aligncenter size-large wp-image-100413" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/restore4-3-1024x374.png" alt="" width="1024" height="374" /></a></p><p>I then configure the restore for the persistent storage resources that will be restored alongside my EKS clusters.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/backup_storage.png"><img class="aligncenter size-large wp-image-100410" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/backup_storage-1024x325.png" alt="" width="1024" height="325" /></a><br />Next, I <strong>Choose an IAM role</strong> to execute the restore action. The <strong>Protected resource tags</strong> checkbox is selected by default, and I’ll leave it as is, then choose <strong>Next</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/restore7-2.png"><img class="aligncenter size-large wp-image-100414" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/restore7-2-1024x255.png" alt="" width="1024" height="255" /></a></p><p>I review all the information before I finalize the process by choosing <strong>Restore</strong> to start the job.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/restore8-2.png"><img class="aligncenter size-large wp-image-100415" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/restore8-2-1024x344.png" alt="" width="1024" height="344" /></a><br />Selecting the drop-down arrow gives details of the restore status for both the 
EKS cluster state and persistent volumes attached. In this walkthrough, all the individual recovery points are restored successfully. If portions of the backup fail, it’s possible to restore the successfully backed up persistent stores (for example, Amazon EBS volumes) and cluster configuration settings individually. However, it’s not possible to restore the full EKS backup. The successfully backed up resources will be available for restore, listed as nested recovery points under the EKS cluster recovery point. If there’s a partial failure, there will be a notification of the portion(s) that failed.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/restore9-2.png"><img class="aligncenter size-large wp-image-100416" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/05/restore9-2-1024x266.png" alt="" width="1024" height="266" /></a><br /><strong>Benefits</strong><br />Here are some of the benefits provided by the support for Amazon EKS in AWS Backup:</p><ul><li>A fully managed multi-cluster backup experience, removing the overhead associated with managing custom scripts and third-party solutions.</li>
<li>Centralized, policy-based backup management that simplifies backup lifecycle management and makes it seamless to back up and recover your application data across AWS services, including EKS.</li>
<li>The ability to store and organize your backups with <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/vaults.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">backup vaults</a>. You assign policies to the backup vaults to grant access to users to create backup plans and on-demand backups but limit their ability to delete recovery points after they’re created.</li>
</ul><p><strong>Good to know<br /></strong> Here are some helpful facts to know:</p><ul><li>Use the <a href="https://console.aws.amazon.com/backup/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Backup Console</a>, API, or <a href="https://aws.amazon.com/cli/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a> to protect EKS clusters using AWS Backup, for example by assigning them to a backup plan. Alternatively, you can create an on-demand backup of a cluster after it has been created.</li>
<li>You can create secondary copies of your EKS backups across different accounts and <a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies-supported-regions.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Regions</a> to minimize risk of accidental deletion.</li>
<li>Restoration of EKS backups is available using the AWS Backup Console, API, or AWS CLI.</li>
<li>Restoring to an existing cluster will not overwrite the Kubernetes version or any existing data, because restores are non-destructive. Instead, AWS Backup restores only the delta between the backup and the source resource.</li>
<li>Namespaces can only be restored to an existing cluster. Because some Kubernetes resources are scoped at the cluster level, this requirement helps ensure a successful restore.</li>
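<li>As a sketch of the API flow for an on-demand backup, the helper below assembles the parameters you would pass to the <code>StartBackupJob</code> operation (for example through <code>aws backup start-backup-job</code>). All names and the account ID are hypothetical placeholders; only the standard EKS cluster ARN shape is assumed.

```python
def eks_backup_job_request(account_id: str, region: str,
                           cluster_name: str, vault_name: str,
                           role_arn: str) -> dict:
    """Assemble StartBackupJob parameters for an on-demand EKS backup.

    This only builds the request dict; the actual call would be made with
    the AWS SDK or CLI against an account with AWS Backup enabled.
    """
    # Standard EKS cluster ARN shape: arn:aws:eks:<region>:<account>:cluster/<name>
    cluster_arn = f"arn:aws:eks:{region}:{account_id}:cluster/{cluster_name}"
    return {
        "BackupVaultName": vault_name,   # vault that stores the recovery point
        "ResourceArn": cluster_arn,      # the EKS cluster to protect
        "IamRoleArn": role_arn,          # role AWS Backup assumes for the job
    }

# Hypothetical account, cluster, vault, and role for illustration.
req = eks_backup_job_request("123456789012", "us-east-1", "demo-cluster",
                             "eks-vault",
                             "arn:aws:iam::123456789012:role/BackupRole")
```

</li>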
</ul><p><strong>Voice of the customer</strong></p><p>Srikanth Rajan, Sr. Director of Engineering at Salesforce, said, “Losing a Kubernetes control plane because of software bugs or unintended cluster deletion can be catastrophic without a solid backup and restore plan. That’s why it’s exciting to see AWS rolling out the new EKS Backup and Restore feature; it’s a big step forward in closing a critical resiliency gap for Kubernetes platforms.”</p><p><strong>Now available</strong><br />Support for Amazon EKS in AWS Backup is available today in all AWS commercial <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">Regions</a> (except China) and in the <a href="https://aws.amazon.com/govcloud-us/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS GovCloud (US)</a> Regions where AWS Backup and Amazon EKS are available. Check the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">full Region list</a> for future updates.</p><p>To learn more, check out the <a href="https://aws.amazon.com/backup/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Backup product page</a> and the <a href="https://aws.amazon.com/backup/pricing/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Backup pricing page</a>.</p><p>Try out this capability for protecting your EKS clusters in AWS Backup and let us know what you think by sending feedback to <a href="https://repost.aws/tags/TAEq_tyFmxTri2axdF_HfATg/aws-backup/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS re:Post for AWS Backup</a> or through your usual AWS Support contacts.</p><p>– <a href="https://linkedin.com/veliswa-boya">Veliswa</a>.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="f0846cac-d4ea-4551-b711-e440857936e7" data-title="Secure EKS clusters with the new support for Amazon EKS in AWS Backup" data-url="https://aws.amazon.com/blogs/aws/secure-eks-clusters-with-the-new-support-for-amazon-eks-in-aws-backup/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/secure-eks-clusters-with-the-new-support-for-amazon-eks-in-aws-backup/"/>
    <updated>2025-11-10T22:30:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-s3-amazon-ec2-and-more-november-10-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: Amazon S3, Amazon EC2, and more (November 10, 2025)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/wir-oai-aws-hero.png"><img class="alignright wp-image-100548 size-medium" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/wir-oai-aws-hero-300x169.png" alt="" width="300" height="169" /></a><a href="https://reinvent.awsevents.com/">AWS re:Invent 2025</a> is only 3 weeks away and I’m already looking forward to the new launches and announcements at the conference. Last year brought 60,000 attendees from across the globe to Las Vegas, Nevada, and the atmosphere was amazing. <a href="https://registration.awsevents.com/flow/awsevents/reinvent2025/reg/createaccount?trk=aws-blogs-prod.amazon.com">Registration</a> is still open for AWS re:Invent 2025. We hope you’ll join us in Las Vegas December 1–5 for keynotes, breakout sessions, chalk talks, interactive learning opportunities, and networking with cloud practitioners from around the world.</p><p>AWS and OpenAI <a href="http://aboutamazon.com/news/aws/aws-open-ai-workloads-compute-infrastructure?utm_source=ecsocial&amp;utm_medium=linkedin&amp;utm_term=36">announced</a> a multi-year strategic partnership that provides OpenAI with immediate access to AWS infrastructure for running advanced AI workloads. The $38 billion agreement spans 7 years and includes access to AWS compute resources comprising hundreds of thousands of NVIDIA GPUs, with the ability to scale to tens of millions of CPUs for agentic workloads. The infrastructure deployment that AWS is building for OpenAI features a sophisticated architectural design optimized for maximum AI processing efficiency and performance. Clustering the NVIDIA GPUs—both GB200s and GB300s—using Amazon EC2 UltraServers on the same network enables low-latency performance across interconnected systems, allowing OpenAI to efficiently run workloads with optimal performance. 
The clusters are designed to support various workloads, from serving inference for ChatGPT to training next generation models, with the flexibility to adapt to OpenAI’s evolving needs.</p><p>AWS <a href="https://www.aboutamazon.com/news/aws/jane-goodall-institute-research-archive-aws-ai">committed $1 million through its Generative AI Innovation Fund</a> to digitize the Jane Goodall Institute’s 65 years of primate research archives. The project will transform handwritten field notes, film footage, and observational data on chimpanzees and baboons from analog to digital formats using <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> and <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a>. The digital transformation will employ multimodal <a href="https://aws.amazon.com/what-is/large-language-model/">large language models (LLMs)</a> and embedding models to make the research archives searchable and accessible to scientists worldwide for the first time. AWS is collaborating with Ode to build the user experience, helping the Jane Goodall Institute adopt AI technologies to advance research and conservation efforts. I was deeply saddened when I heard that world-renowned primatologist Jane Goodall had passed away. Learning that this project will preserve her life’s work and make it accessible to researchers around the world brought me comfort. It’s a fitting tribute to her remarkable legacy.</p><div id="attachment_100547" class="wp-caption alignnone c6"><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/wir-download.jpeg"><img aria-describedby="caption-attachment-100547" class="size-full wp-image-100547" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/08/wir-download.jpeg" alt="" width="1320" height="743" /></a><p id="caption-attachment-100547" class="wp-caption-text">Transforming decades of research through cloud and AI. Dr. 
Jane Goodall and field staff observe Goblin at Gombe National Park, Tanzania. CREDIT: the Jane Goodall Institute</p></div><p><strong>Last week’s launches</strong><br />Let’s look at last week’s new announcements:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-s3-tags-s3-tables/">Amazon S3 now supports tags on S3 Tables</a> – Amazon S3 now supports tags on S3 Tables for attribute-based access control (ABAC) and cost allocation. You can use tags for ABAC to automatically manage permissions for users and roles accessing table buckets and tables, eliminating frequent AWS Identity and Access Management (IAM) or S3 Tables resource-based policy updates and simplifying access governance at scale. Additionally, tags can be added to individual tables to track and organize AWS costs using AWS Billing and Cost Management.</li>
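<li>To illustrate the ABAC pattern mentioned above, here is a hedged sketch of an IAM policy statement that matches the caller’s principal tag against a table’s resource tag. The action, ARN, and tag key are illustrative placeholders, and the exact condition keys S3 Tables supports may differ; the <code>aws:ResourceTag</code>/<code>aws:PrincipalTag</code> pairing shown is the generic ABAC idiom.

```python
import json

# Generic ABAC statement: allow reads only when the caller's "team"
# principal tag equals the table's "team" resource tag. The action, ARN,
# and tag key are illustrative, not a tested S3 Tables policy.
statement = {
    "Effect": "Allow",
    "Action": "s3tables:GetTable",
    "Resource": "arn:aws:s3tables:us-east-1:123456789012:bucket/analytics/table/*",
    "Condition": {
        "StringEquals": {"aws:ResourceTag/team": "${aws:PrincipalTag/team}"}
    },
}
abac_policy = json.dumps({"Version": "2012-10-17", "Statement": [statement]})
```

</li>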
</ul><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/memory-optimized-amazon-ec2-r8a-instances/">Amazon EC2 R8a Memory-Optimized Instances now generally available</a> – R8a instances feature 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, and they deliver up to 30% higher performance and up to 19% better price-performance compared to R7a instances, with 45% more memory bandwidth. Built on the AWS Nitro System using sixth-generation Nitro Cards, these instances are designed for high-performance, memory-intensive workloads, including SQL and NoSQL databases, distributed web scale in-memory caches, in-memory databases, real-time big data analytics, and electronic design automation (EDA) applications. R8a instances are SAP certified and offer 12 sizes, including two bare metal sizes.</li>
</ul><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/ec2-auto-scaling-warm-pool-mixed-instances-policies/">EC2 Auto Scaling announces warm pool support for mixed instances policies</a> – EC2 Auto Scaling groups now support warm pools for Auto Scaling groups configured with mixed instances policies. Warm pools create a pool of pre-initialized EC2 instances ready to quickly serve application traffic, improving application elasticity. The feature benefits applications with lengthy initialization processes, such as writing large amounts of data to disk or running complex custom scripts. By combining warm pools with instance type flexibility, Auto Scaling groups can rapidly scale out to maximum size while deploying applications across multiple instance types to enhance availability. The feature works with Auto Scaling groups configured for multiple On-Demand Instance types through manual instance type lists or attribute-based instance type selection.</li>
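<li>As a concrete sketch of the configuration described above, the dict below holds parameters for the EC2 Auto Scaling <code>PutWarmPool</code> API; the group name and sizes are placeholders chosen for illustration.

```python
# PutWarmPool parameters: keep pre-initialized instances in a Stopped
# state so scale-out can reuse them instead of booting from scratch.
# The group name and numbers are illustrative placeholders.
warm_pool = {
    "AutoScalingGroupName": "web-asg",   # hypothetical mixed-instances ASG
    "PoolState": "Stopped",              # Stopped, Running, or Hibernated
    "MinSize": 2,                        # always keep at least 2 warm instances
    "MaxGroupPreparedCapacity": 5,       # cap on prepared capacity for the group
}
```

You would pass these parameters to <code>aws autoscaling put-warm-pool</code> or the equivalent SDK call.</li>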
</ul><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-bedrock-agentcore-runtime-code-deployment/">Amazon Bedrock AgentCore Runtime now supports direct code deployment</a> – Amazon Bedrock AgentCore Runtime now offers two deployment methods for AI agents: container-based deployment and direct code upload. You can choose between direct code–zip file upload for rapid prototyping and iteration or container-based options for complex use cases requiring custom configurations. AgentCore Runtime provides a serverless framework and model agnostic runtime for running agents and tools at scale. The direct code–zip upload feature includes drag-and-drop functionality, enabling faster iteration cycles for prototyping while maintaining enterprise security and scaling capabilities for production deployments.</li>
</ul><ul><li><a href="https://aws.amazon.com/blogs/aws/introducing-aws-capabilities-by-region-for-easier-regional-planning-and-faster-global-deployments/">AWS Capabilities by Region now available for Regional planning</a> – AWS Capabilities by Region helps discover and compare AWS services, features, APIs, and AWS CloudFormation resources across Regions. This planning tool provides an interactive interface to explore service availability, compare multiple Regions side by side, and view forward-looking roadmap information. You can search for specific services or features, view API operations availability, verify CloudFormation resource type support, and check EC2 instance type availability including specialized instances. The tool displays availability states including Available, Planning, Not Expanding, and directional launch planning by quarter. The AWS Capabilities by Region data is also accessible through the AWS Knowledge MCP server, enabling automation of Region expansion planning and integration into development workflows and continuous integration and continuous delivery (CI/CD) pipelines.</li>
</ul><p><strong>Upcoming AWS events</strong><br />Check your calendar and sign up for upcoming AWS events:</p><ul><li><a href="https://reinvent.awsevents.com/">AWS re:Invent 2025</a> – Join us in Las Vegas December 1–5 as cloud pioneers gather from across the globe for the latest AWS innovations, peer-to-peer learning, expert-led discussions, and invaluable networking opportunities. Don’t forget to explore the <a href="https://registration.awsevents.com/flow/awsevents/reinvent2025/eventcatalog/page/eventcatalog?trk=aws-blogs-prod.amazon.com">event catalog</a>.</li>
<li><a href="https://builder.aws.com/connect/events/builder-loft">AWS Builder Loft</a> – A tech hub in San Francisco where builders share ideas, learn, and collaborate. The space offers industry expert sessions, hands-on workshops, and community events covering topics from AI to emerging technologies. Browse the <a href="https://luma.com/aws-builder-loft-events">upcoming sessions</a> and join the events that interest you.</li>
<li><a href="https://pulse.aws/survey/LOLZYMRD?p=0">AWS Skills Center Seattle 4th Anniversary Celebration</a> – A free, public event on November 20 with a keynote, learning panels, recruiter insights, raffles, and virtual participation options.</li>
</ul><p>Join the <a href="https://builder.aws.com/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">AWS Builder Center</a> to connect with builders, share solutions, and access content that supports your development. Browse here for upcoming <a href="https://aws.amazon.com/events/explore-aws-events/?refid=e61dee65-4ce8-4738-84db-75305c9cd4fe">AWS-led in-person and virtual events</a>, <a href="https://builder.aws.com/connect/events?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">developer-focused events</a>, and <a href="https://aws.amazon.com/startups/events?tab=upcoming&amp;region=EMEA">events for startups</a>.</p><p>That’s all for this week. Check back next Monday for another Weekly Roundup!</p><a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a><p><em>This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!</em></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="77a01329-4b04-4a68-b7f1-fa5702d70861" data-title="AWS Weekly Roundup: Amazon S3, Amazon EC2, and more (November 10, 2025)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-s3-amazon-ec2-and-more-november-10-2025/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-s3-amazon-ec2-and-more-november-10-2025/"/>
    <updated>2025-11-10T17:38:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-aws-capabilities-by-region-for-easier-regional-planning-and-faster-global-deployments/</id>
    <title><![CDATA[Introducing AWS Capabilities by Region for easier Regional planning and faster global deployments]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>At AWS, a common question we hear is: “Which AWS capabilities are available in different <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Regions</a>?” It’s a critical question whether you’re planning Regional expansion, ensuring compliance with data residency requirements, or architecting for disaster recovery.</p><p>Today, I’m excited to introduce <a href="https://builder.aws.com/capabilities/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Capabilities by Region</a>, a new planning tool that helps you discover and compare AWS services, features, APIs, and <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a> resources across Regions. You can explore service availability through an interactive interface, compare multiple Regions side-by-side, and view forward-looking roadmap information. This detailed visibility helps you make informed decisions about global deployments and avoid project delays and costly rework.</p><p><img class="aligncenter wp-image-100388 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/04/2025-aws-capabilities-service-and-features-header.png" alt="" width="2564" height="2026" /></p><p><strong class="c7">Getting started with Regional comparison</strong><br />To get started, go to <a href="https://builder.aws.com">AWS Builder Center</a> and choose <strong>AWS Capabilities</strong> and <strong>Start Exploring</strong>. When you select <strong>Services and features</strong>, you can choose the AWS Regions you’re most interested in from the dropdown list. You can use the search box to quickly find specific services or features. For example, I chose US (N. 
Virginia), Asia Pacific (Seoul), and Asia Pacific (Taipei) Regions to compare <a href="https://aws.amazon.com/s3/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a> features.</p><p>Now I can view the availability of services and features in my chosen Regions and also see when they’re expected to be released. Select <strong>Show only common features</strong> to identify capabilities consistently available across all selected Regions, ensuring you design with services you can use everywhere.</p><p><img class="aligncenter wp-image-100456 size-full c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/06/2025-aws-capabilities-service-and-features-1.png" alt="" width="2636" height="1690" /></p><p>The result will indicate availability using the following states: <strong>Available</strong> (live in the Region); <strong>Planning</strong> (evaluating launch strategy); <strong>Not Expanding</strong> (will not launch in the Region); and <strong>2026 Q1</strong> (directional launch planning for the specified quarter).</p><p>In addition to exploring services and features, AWS Capabilities by Region also helps you explore available APIs and CloudFormation resources. As an example, to explore <strong>API operations</strong>, I added Europe (Stockholm) and Middle East (UAE) Regions to compare <a href="https://aws.amazon.com/dynamodb/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon DynamoDB</a> features across different geographies.
The tool lets you view and search the availability of API operations in each Region.</p><p><img class="aligncenter wp-image-100458 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/06/2025-aws-capabilities-api-operations.png" alt="" width="2510" height="1495" /></p><p>The <strong>CloudFormation resources</strong> tab helps you verify Regional support for specific resource types before writing your templates. You can search by <code>Service</code>, <code>Type</code>, <code>Property</code>, and <code>Config</code>. For instance, when planning an <a href="https://aws.amazon.com/api-gateway/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon API Gateway</a> deployment, you can check the availability of resource types like <code>AWS::ApiGateway::Account</code>.</p><p><img class="aligncenter wp-image-100459 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/06/2025-aws-capabilities-cloudformation-resources.png" alt="" width="2536" height="1489" /></p><p>You can also search detailed resources such as <a href="https://aws.amazon.com/ec2/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2)</a> instance type availability, including specialized instances such as Graviton-based, GPU-enabled, and memory-optimized variants.
For example, I searched for 7th-generation compute-optimized metal instances and found that <code>c7i.metal-24xl</code> and <code>c7i.metal-48xl</code> are available across all targeted Regions.</p><p><img class="aligncenter wp-image-100460 size-full c8" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/11/06/2025-aws-capabilities-cloudformation-resources-property-1.png" alt="" width="2524" height="1882" /></p><p>Beyond the interactive interface, the AWS Capabilities by Region data is also accessible through the <a href="https://awslabs.github.io/mcp/servers/aws-knowledge-mcp-server/">AWS Knowledge MCP Server</a>. This allows you to automate Region expansion planning, generate AI-powered recommendations for Region and service selection, and integrate Regional capability checks directly into your development workflows and CI/CD pipelines.</p><p><img class="aligncenter size-full wp-image-100038 c9" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/2025-aws-inventory-cloud-mcp.png" alt="" width="803" height="345" /></p><p><strong class="c7">Now available<br /></strong> You can begin exploring <a href="https://builder.aws.com/capabilities/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Capabilities by Region in AWS Builder Center</a> immediately. The Knowledge MCP server is also publicly accessible at no cost and does not require an AWS account. Usage is subject to rate limits.
Follow the <a href="https://awslabs.github.io/mcp/servers/aws-knowledge-mcp-server/">getting started guide</a> for setup instructions.</p><p>We would love to hear your feedback, so please send us any suggestions through the <a href="https://builder.aws.com/support/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Builder Support</a> page.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="3a373361-0e91-411b-a031-21afdff4acff" data-title="Introducing AWS Capabilities by Region for easier Regional planning and faster global deployments" data-url="https://aws.amazon.com/blogs/aws/introducing-aws-capabilities-by-region-for-easier-regional-planning-and-faster-global-deployments/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-aws-capabilities-by-region-for-easier-regional-planning-and-faster-global-deployments/"/>
    <updated>2025-11-06T22:30:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-project-rainier-online-amazon-nova-amazon-bedrock-and-more-november-3-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: Project Rainier online, Amazon Nova, Amazon Bedrock, and more (November 3, 2025)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p><img class="wp-image-100247 size-large alignright c6" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/30/10ed047ed4c665272f34fb5205f34009-1024x606.jpg" alt="" width="1024" height="606" />Last week I met Jeff Barr at the <a href="https://aws.amazon.com/cn/about-aws/events/events/miniapp-1638-shenzhen-1026/">AWS Shenzhen Community Day</a>. Jeff shared stories about how builders around the world are experimenting with generative AI and encouraged local developers to keep pushing ideas into real prototypes. Many attendees stayed after the sessions to discuss model grounding, evaluation, and how to bring generative AI into real applications.</p><p>Community builders showcased creative Kiro-themed demos, AI-powered IoT projects, and student-led experiments. It was inspiring to see new developers, students, and long-time <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> community leaders connecting over shared curiosity and excitement for generative AI innovation.</p><p>Project Rainier, one of the world’s most powerful operational AI supercomputers, is now online.
Built by AWS in close collaboration with Anthropic, Project Rainier brings nearly 500,000 <a href="https://aws.amazon.com/ai/machine-learning/trainium/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS custom-designed Trainium2 chips</a> into service using a new <a href="https://aws.amazon.com/ec2/ultraservers/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2) UltraServer</a> and <a href="https://aws.amazon.com/ec2/ultraclusters/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">EC2 UltraCluster</a> architecture designed for high-bandwidth, low-latency model training at hyperscale.</p><p>Anthropic is already training and running inference for Claude on Project Rainier, and is expected to scale to more than one million Trainium2 chips across direct usage and Amazon Bedrock by the end of 2025. For architecture details, deployment insights, and a behind-the-scenes video of an UltraServer coming online, refer to <a href="https://www.aboutamazon.com/news/aws/aws-project-rainier-ai-trainium-chips-compute-cluster">AWS activates Project Rainier</a> for the full announcement.</p><p><strong class="c7">Last week’s launches</strong><br />Here are the launches that got my attention this week:</p><ul><li><a href="https://aws.amazon.com/ai/generative-ai/nova/?refid=ep_card_main_event_page">Amazon Nova</a> – Adds <a href="https://aws.amazon.com/blogs/aws/build-more-accurate-ai-applications-with-amazon-nova-web-grounding/">Web Grounding</a> as a new built-in tool for real-time, citation-based web retrieval, and introduces <a href="https://aws.amazon.com/blogs/aws/amazon-nova-multimodal-embeddings-now-available-in-amazon-bedrock/">Multimodal Embeddings</a>, a state-of-the-art model that produces unified cross-modal vectors, improving accuracy for <a href="https://aws.amazon.com/what-is/retrieval-augmented-generation/">Retrieval Augmented Generation (RAG)</a> and semantic search.
Both capabilities are available in <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a>.</li>
<li>Amazon Bedrock – <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/twelvelabs-marengo3-embed-amazon-bedrock/">TwelveLabs’ Marengo Embed 3.0</a> is now available for long-form, video-native multimodal embeddings across video, images, audio, and text with improved domain accuracy. <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/stability-ai-image-updates-amazon-bedrock/">Stability AI Image Services</a> added four new tools: Outpaint, Fast Upscale, Conservative Upscale, and Creative Upscale for high-resolution upscaling, outpainting, and controlled variations.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/model-context-protocol-proxy-available/">Model Context Protocol (MCP) Proxy for AWS</a> – Now generally available as a client-side proxy that connects MCP clients to remote AWS hosted MCP servers using SigV4 authentication. It works with tools like Amazon Q Developer CLI, Kiro, Cursor, and Strands Agents, and provides safety controls such as read-only mode, retry logic, and logging. The Proxy is open-source. You can visit the <a href="https://github.com/aws/mcp-proxy-for-aws">AWS GitHub repository</a> to view the installation and configuration options and start connecting with remote AWS MCP servers.</li>
<li><a href="https://aws.amazon.com/ecs/?nc2=type_a">Amazon Elastic Container Service (Amazon ECS)</a> – Now supports <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-ecs-built-in-linear-canary-deployments/">built-in linear and canary deployment strategies</a>, providing gradual traffic shifting, canary testing with small production slices, deployment bake times for safe rollback, and Amazon CloudWatch alarm-based automated rollbacks.</li>
<li><a href="https://aws.amazon.com/documentdb/?nc2=type_a">Amazon DocumentDB</a> – Adds <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/new-query-planner/">a new query planner</a> in Amazon DocumentDB 5.0 that delivers up to 10 times faster query performance with better index plans and support for <code>$ne</code>, <code>$nin</code>, and nested <code>$elemMatch</code>, and can be enabled through cluster parameter groups without downtime.</li>
<li><a href="https://aws.amazon.com/ebs/?refid=ep_card_main_event_page">Amazon Elastic Block Store (Amazon EBS)</a> – You can now use <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-ebs-performance-monitoring-metrics-ebs-volumes/">new per-volume CloudWatch metrics</a>, <code>VolumeAvgIOPS</code> and <code>VolumeAvgThroughput</code>, to get minute-level visibility into average IOPS and throughput for EBS volumes on AWS Nitro-based instances. These metrics help monitor performance trends, troubleshoot bottlenecks, and optimize provisioned capacity.</li>
<li><a href="https://aws.amazon.com/kinesis/data-streams/?refid=ep_card_main_event_page">Amazon Kinesis Data Streams</a> – You can now send <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-kinesis-data-streams-10x-larger-record-sizes/">individual records up to 10 MiB, a tenfold increase</a> from the previous limit, helping support larger Internet of Things (IoT), change data capture (CDC), and AI-generated payloads.</li>
<li><a href="https://aws.amazon.com/sagemaker/?nc2=type_a">Amazon SageMaker</a> – Unified Studio search results now provide <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-sagemaker-adds-search-context/">additional search context</a>, showing matched metadata fields and ranking rationale to improve transparency and relevance in data discovery.</li>
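<li>A producer can guard against the record-size ceiling locally before calling <code>PutRecord</code>. A minimal sketch, assuming the 10 MiB limit applies to the record’s data blob:

```python
# Kinesis Data Streams records can now be up to 10 MiB (previously 1 MiB).
# Checking payload size locally lets oversized CDC or AI-generated blobs
# fail fast instead of being rejected by the service.
MAX_RECORD_BYTES = 10 * 1024 * 1024  # 10 MiB

def fits_in_record(payload: bytes) -> bool:
    """Return True if the payload fits within one Kinesis record."""
    return len(payload) <= MAX_RECORD_BYTES

ok = fits_in_record(b"x" * 1024)                        # small CDC event
too_big = fits_in_record(b"x" * (MAX_RECORD_BYTES + 1)) # over the limit
```

</li>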
</ul><p><strong class="c7">Additional updates</strong><br />Here are some additional projects, blog posts, and news items that I found interesting:</p><ul><li><a href="https://aws.amazon.com/blogs/spatial/building-production-ready-3d-pipelines-with-aws-visual-asset-management-system-vams-and-4d-pipeline/">Building production-ready 3D pipelines with AWS VAMS and 4D Pipeline</a> – A reference architecture for creating scalable, cloud-based 3D asset pipelines using AWS Visual Asset Management System (VAMS) and 4D Pipeline, supporting ingest, validation, collaborative review, and distribution across games, visual effects (VFX), and digital twins.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-location-service-new-api-key-security-restrictions/">Amazon Location Service introduces new API key restrictions</a> – You can now create granular security policies with bundle IDs to restrict API access to specific mobile applications, improving access control and strengthening application-level security across location-based workloads.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-clean-rooms-advanced-configurations-optimize-sql-performance/">AWS Clean Rooms launches advanced SQL configurations</a> – A performance enhancement for Spark SQL workloads that supports runtime customization of Spark properties and compute sizes, plus table caching for faster and more cost-efficient processing of large analytical queries.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-serverless-mcp-server-tools-lambda-esm/">AWS Serverless MCP Server adds event source mappings (ESM) tools</a> – A capability for event-driven serverless applications that supports configuration, performance tuning, and troubleshooting of AWS Lambda event source mappings, including AWS Serverless Application Model (AWS SAM) template generation and diagnostic insights.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/ai-agent-context-pack-iot-greengrass-developers/">AWS IoT Greengrass releases an AI agent context pack</a> – A development accelerator for cloud-connected edge applications that provides ready-to-use instructions, examples, and templates, helping teams integrate generative AI tools such as Amazon Q for faster software creation, testing, and fleet-wide deployment. It’s available as open source on the <a href="https://github.com/aws-greengrass/greengrass-agent-context-pack">GitHub repository</a>.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-step-functions-metrics-dashboard/">AWS Step Functions introduces a new metrics dashboard</a> – You can now view usage, billing, and performance metrics at the state-machine level for standard and express workflows in a single console view, improving visibility and troubleshooting for distributed applications.</li>
</ul><p><strong class="c7">Upcoming AWS events</strong><br />Check your calendars so that you can sign up for these upcoming events:</p><ul><li><a href="https://builder.aws.com/connect/events/builder-loft">AWS Builder Loft</a> – A community tech space in San Francisco where you can learn from expert sessions, join hands-on workshops, explore AI and emerging technologies, and collaborate with other builders to accelerate your ideas. Browse the <a href="https://luma.com/aws-builder-loft-events">upcoming sessions</a> and join the events that interest you.</li>
<li><a href="https://aws.amazon.com/events/community-day/">AWS Community Days</a> – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by experienced AWS users and industry leaders from around the world: <a href="https://awscommunity.hk/">Hong Kong</a> (November 2), <a href="https://tix.africa/discover/aws-women-user-group-abuja-community-day">Abuja</a> (November 8), <a href="https://tix.africa/discover/aws-women-user-group-abuja-community-day">Cameroon</a> (November 8), and <a href="https://tix.africa/discover/aws-women-user-group-abuja-community-day">Spain</a> (November 15).</li>
<li><a href="https://pulse.aws/survey/LOLZYMRD?p=0">AWS Skills Center Seattle 4th Anniversary Celebration</a> – A free, public event on November 20 with a keynote, expert panels, recruiter insights, raffles, and virtual participation options.</li>
</ul><p>Join the <a href="https://builder.aws.com/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">AWS Builder Center</a> to learn, build, and connect with builders in the AWS community. Browse here for <a href="https://aws.amazon.com/events/explore-aws-events/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">upcoming in-person events</a>, <a href="https://aws.amazon.com/developer/events/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">developer-focused events</a>, and <a href="https://aws.amazon.com/startups/events?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">events for startups</a>.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Weekly Roundup</a>!</p><p>– <a href="https://www.linkedin.com/in/zhengyubin714/">Betty</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="c870ae53-fbd7-40a3-a2e0-8a4bcc3ba321" data-title="AWS Weekly Roundup: Project Rainier online, Amazon Nova, Amazon Bedrock, and more (November 3, 2025)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-project-rainier-online-amazon-nova-amazon-bedrock-and-more-november-3-2025/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-project-rainier-online-amazon-nova-amazon-bedrock-and-more-november-3-2025/"/>
    <updated>2025-11-03T18:55:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/build-more-accurate-ai-applications-with-amazon-nova-web-grounding/</id>
    <title><![CDATA[Build more accurate AI applications with Amazon Nova Web Grounding]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Imagine building AI applications that deliver accurate, current information without the complexity of developing intricate data retrieval systems. Today, we’re excited to announce the general availability of Web Grounding, a new built-in tool for Nova models on <a href="https://aws.amazon.com/bedrock/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Bedrock</a>.</p><p>Web Grounding provides developers with a turnkey <a href="https://aws.amazon.com/what-is/retrieval-augmented-generation/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Retrieval Augmented Generation (RAG)</a> option that allows the Amazon Nova <a href="https://aws.amazon.com/what-is/foundation-models/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">foundation models</a> to intelligently decide when to retrieve and incorporate relevant up-to-date information based on the context of the prompt. This helps to ground the model output by incorporating cited public sources as context, aiming to reduce hallucinations and improve accuracy.</p><p><strong>When should developers use Web Grounding?<br /></strong></p><p>Developers should consider using Web Grounding when building applications that require access to current, factual information or need to provide well-cited responses. The capability is particularly valuable across a range of applications, from knowledge-based chat assistants providing up-to-date information about products and services, to content generation tools requiring fact-checking and source verification. It’s also ideal for research assistants that need to synthesize information from multiple current sources, as well as customer support applications where accuracy and verifiability are crucial.</p><p>Web Grounding is especially useful when you need to reduce hallucinations in your AI applications or when your use case requires transparent source attribution. 
Because it automatically handles the retrieval and integration of information, it’s an efficient solution for developers who want to focus on building their applications rather than managing complex RAG implementations.</p><p><strong>Getting started</strong><br />Web Grounding seamlessly integrates with supported Amazon Nova models to handle information retrieval and processing during inference. This eliminates the need to build and maintain complex RAG pipelines, while also providing source attributions that verify the origin of information.</p><p>Let’s see an example of asking a question to Nova Premier using Python to call the Amazon Bedrock Converse API with Web Grounding enabled.</p><p>First, I create an Amazon Bedrock client using the <a href="https://aws.amazon.com/sdk-for-python/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS SDK for Python (Boto3)</a> in the usual way. For good practice, I’m using a session, which helps to group configurations and make them reusable. I then create a Bedrock Runtime client.</p><pre class="lang-python">import boto3

# Create a session to group reusable configuration, then the runtime client
session = boto3.Session(region_name='us-east-1')
client = session.client('bedrock-runtime')</pre><p>I then prepare the Amazon Bedrock Converse API payload. It includes a “role” parameter set to “user”, indicating that the message comes from our application’s user (as opposed to “assistant” for AI-generated responses).</p><p>For this demo, I chose the question “What are the current AWS Regions and their locations?” This was selected intentionally because it requires current information, making it useful to demonstrate how Amazon Nova can automatically invoke searches using Web Grounding when it determines that up-to-date knowledge is needed.</p><pre class="lang-python"># Prepare the conversation in the format expected by Bedrock
question = "What are the current AWS regions and their locations?"
conversation = [
   {
     "role": "user",  # Indicates this message is from the user
     "content": [{"text": question}],  # The actual question text
      }
    ]</pre><p>First, let’s see what the output is without Web Grounding. I make a call to Amazon Bedrock Converse API.</p><pre class="lang-python"># Make the API call to Bedrock 
model_id = "us.amazon.nova-premier-v1:0" 
response = client.converse( 
    modelId=model_id, # Which AI model to use 
    messages=conversation, # The conversation history (just our question in this case) 
    )
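# (Illustrative aside, not part of the original post.) The Converse API
# response also carries a 'usage' map with token counts, which is handy for
# comparing consumption between runs with and without Web Grounding:
def summarize_usage(usage):
    # usage: the 'usage' dict from a Converse response
    return f"{usage.get('inputTokens', 0)} in / {usage.get('outputTokens', 0)} out tokens"
# For example: print(summarize_usage(response.get('usage', {})))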
print(response['output']['message']['content'][0]['text'])</pre><p>I get a list of all the current <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS Regions</a> and their locations.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/27/Screenshot-2025-10-27-at-22.26.50.png"><img class="aligncenter size-full wp-image-100151" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/27/Screenshot-2025-10-27-at-22.26.50.png" alt="" width="2338" height="1438" /></a></p><p>Now let’s use Web Grounding. I make a similar call to the Amazon Bedrock Converse API, but declare <code>nova_grounding</code> as one of the tools available to the model.</p><pre class="lang-python">model_id = "us.amazon.nova-premier-v1:0" 
response = client.converse( 
    modelId=model_id, 
    messages=conversation, 
    toolConfig= {
          "tools":[ 
              {
                "systemTool": {
                   "name": "nova_grounding" # Enables the model to search real-time information
                 }
              }
          ]
     }
)</pre><p>After processing the response, I can see that the model used Web Grounding to access up-to-date information. The output includes reasoning traces that I can use to follow its thought process and see where it automatically queried external sources. The content of the responses from these external calls appears as <code>[HIDDEN]</code> – a standard practice in AI systems that both protects sensitive information and helps manage output size.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/output-with-web-grounding-part-1-tool-calls-and-responses-marked.png"><img class="aligncenter size-full wp-image-100153" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/output-with-web-grounding-part-1-tool-calls-and-responses-marked.png" alt="" width="1502" height="1694" /></a></p><p>The output also includes <code>citationsContent</code> objects containing information about the sources queried by Web Grounding.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/output-citations.png"><img class="aligncenter size-full wp-image-100154" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/output-citations.png" alt="" width="1986" height="1524" /></a></p><p>Finally, I can see the list of AWS Regions. 
It finishes with a message stating that “These are the most current and active AWS regions globally.”</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/output-aws-regions-with-nova-grounding-marked.png"><img class="aligncenter size-full wp-image-100156" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/output-aws-regions-with-nova-grounding-marked.png" alt="" width="2688" height="352" /></a></p><p>Web Grounding represents a significant step forward in making AI applications more reliable and current with minimal effort. Whether you’re building customer service chat assistants that need to provide up-to-date, accurate information, developing research applications that analyze and synthesize information from multiple sources, or creating travel applications that deliver the latest details about destinations and accommodations, Web Grounding can help you deliver more accurate and relevant responses to your users with a convenient turnkey solution that is straightforward to configure and use.</p><p><strong>Things to know<br /></strong> Amazon Nova Web Grounding is available now in US East (N. Virginia), US East (Ohio), and US West (Oregon).</p><p>Web Grounding incurs additional cost. 
Refer to the <a href="https://aws.amazon.com/bedrock/pricing/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Bedrock pricing page</a> for more details.</p><p>Currently, you can only use Web Grounding with Nova Premier, but support for other Nova models will be added soon.</p><p>If you haven’t used Amazon Nova before or are looking to go deeper, try this self-paced online <a href="https://catalog.us-east-1.prod.workshops.aws/workshops/012d9c20-25dc-4065-bdb6-50e935e8bd9f/en-US?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">workshop where you can learn how to effectively use Amazon Nova foundation models</a> and related features for text, image, and video processing through hands-on exercises.</p><p>10/30/25: Updated to all available regions. Original launch only in US East (N. Virginia).</p><a href="https://link.codingmatheus.com/linkedin">Matheus Guimaraes | @codingmatheus</a></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="ea0dfa9c-5503-4cff-b6f6-f801813933de" data-title="Build more accurate AI applications with Amazon Nova Web Grounding" data-url="https://aws.amazon.com/blogs/aws/build-more-accurate-ai-applications-with-amazon-nova-web-grounding/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/build-more-accurate-ai-applications-with-amazon-nova-web-grounding/"/>
    <updated>2025-10-29T00:59:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/amazon-nova-multimodal-embeddings-now-available-in-amazon-bedrock/</id>
    <title><![CDATA[Amazon Nova Multimodal Embeddings: State-of-the-art embedding model for agentic RAG and semantic search]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re introducing <a href="https://aws.amazon.com/ai/generative-ai/nova/">Amazon Nova Multimodal Embeddings</a>, a state-of-the-art multimodal embedding model for agentic <a href="https://aws.amazon.com/what-is/retrieval-augmented-generation/">retrieval-augmented generation (RAG)</a> and semantic search applications, available in <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a>. It is the first unified embedding model that supports text, documents, images, video, and audio through a single model to enable crossmodal retrieval with leading accuracy.</p><p>Embedding models convert textual, visual, and audio inputs into numerical representations called <a href="https://aws.amazon.com/what-is/embeddings-in-machine-learning/">embeddings</a>. These embeddings capture the meaning of the input in a way that AI systems can compare, search, and analyze, powering use cases such as semantic search and RAG.</p><p>Organizations are increasingly seeking solutions to unlock insights from the growing volume of unstructured data that is spread across text, image, document, video, and audio content. For example, an organization might have product images, brochures that contain infographics and text, and user-uploaded video clips. Embedding models are able to unlock value from unstructured data; however, traditional models are typically specialized to handle one content type. This limitation drives customers to either build complex crossmodal embedding solutions or restrict themselves to use cases focused on a single content type. 
The problem also applies to mixed-modality content types, such as documents with interleaved text and images or video with visual, audio, and textual elements, where existing models struggle to capture crossmodal relationships effectively.</p><p>Nova Multimodal Embeddings supports a unified semantic space for text, documents, images, video, and audio for use cases such as crossmodal search across mixed-modality content, searching with a reference image, and retrieving visual documents.</p><p><strong>Evaluating Amazon Nova Multimodal Embeddings performance<br /></strong> We evaluated the model on a broad range of benchmarks, and it delivers leading accuracy out of the box, as described in the following table.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/nova-multimodal-embeddings-benchmarks-with-notes.png"><img class="aligncenter wp-image-100195 size-large" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/28/nova-multimodal-embeddings-benchmarks-with-notes-1024x642.png" alt="Amazon Nova Embeddings benchmarks" width="1024" height="642" /></a></p><p>Nova Multimodal Embeddings supports a context length of up to 8K tokens, text in up to 200 languages, and accepts inputs via synchronous and asynchronous APIs. Additionally, it supports segmentation (also known as “chunking”) to partition long-form text, video, or audio content into manageable segments, generating embeddings for each portion. Lastly, the model offers four output embedding dimensions, trained using <a href="https://arxiv.org/abs/2205.13147">Matryoshka Representation Learning (MRL)</a>, enabling low-latency end-to-end retrieval with minimal accuracy loss.</p><p>Nova Multimodal Embeddings supports batch inference, allowing users to convert large volumes of content into embeddings more efficiently. 
Instead of sending individual requests for each item, users can send multiple items in a single request, reducing API overhead.</p><p>Let’s see how the new model can be used in practice.</p><p><strong>Using Amazon Nova Multimodal Embeddings</strong><br />Getting started with Nova Multimodal Embeddings follows the same pattern as <a href="https://aws.amazon.com/bedrock/model-choice/">other models in Amazon Bedrock</a>. The model accepts text, documents, images, video, or audio as input and returns numerical embeddings that you can use for semantic search, similarity comparison, or RAG.</p><p>Here’s a practical example using the <a href="https://aws.amazon.com/sdk-for-python/">AWS SDK for Python (Boto3)</a> that shows how to create embeddings from different content types and store them for later retrieval. For simplicity, I’ll use <a href="https://aws.amazon.com/s3/features/vectors/">Amazon S3 Vectors</a>, cost-optimized storage with native support for storing and querying vectors at any scale, to store and search the embeddings.</p><p>Let’s start with the fundamentals: converting text into embeddings. This example shows how to transform a simple text description into a numerical representation that captures its semantic meaning. These embeddings can later be compared with embeddings from documents, images, videos, or audio to find related content.</p><p>To make the code easy to follow, I’ll show a section of the script at a time. The full script is included at the end of this walkthrough.</p><pre class="lang-python">import json
import base64
import time
import boto3
MODEL_ID = "amazon.nova-2-multimodal-embeddings-v1:0"
EMBEDDING_DIMENSION = 3072
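# (Illustrative aside, not part of the original post.) The model offers four
# output dimensions via the embeddingDimension request field; 3072 is the
# largest. Because it is trained with Matryoshka Representation Learning, a
# common MRL pattern is to truncate a vector and re-normalize it locally to
# trade a little accuracy for faster search (verify against the model docs):
import math

def truncate_embedding(vec, dim):
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]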
# Initialize Amazon Bedrock Runtime client
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
print(f"Generating text embedding with {MODEL_ID} ...")
# Text to embed
text = "Amazon Nova is a multimodal foundation model"
# Create embedding
request_body = {
    "taskType": "SINGLE_EMBEDDING",
    "singleEmbeddingParams": {
        "embeddingPurpose": "GENERIC_INDEX",
        "embeddingDimension": EMBEDDING_DIMENSION,
        "text": {"truncationMode": "END", "value": text},
    },
}
response = bedrock_runtime.invoke_model(
    body=json.dumps(request_body),
    modelId=MODEL_ID,
    contentType="application/json",
)
# Extract embedding
response_body = json.loads(response["body"].read())
embedding = response_body["embeddings"][0]["embedding"]
print(f"Generated embedding with {len(embedding)} dimensions")</pre><p>Now we’ll process visual content in the same embedding space, using a <code>photo.jpg</code> file in the same folder as the script. This demonstrates the power of multimodality: Nova Multimodal Embeddings is able to capture both textual and visual context into a single embedding that provides enhanced understanding of the document.</p><p>Nova Multimodal Embeddings can generate embeddings that are optimized for how they are being used. When indexing for a search or retrieval use case, <code>embeddingPurpose</code> can be set to <code>GENERIC_INDEX</code>. For the query step, <code>embeddingPurpose</code> can be set depending on the type of item to be retrieved. For example, when retrieving documents, <code>embeddingPurpose</code> can be set to <code>DOCUMENT_RETRIEVAL</code>.</p><pre class="lang-python"># Read and encode image
print(f"Generating image embedding with {MODEL_ID} ...")
with open("photo.jpg", "rb") as f:
    image_bytes = base64.b64encode(f.read()).decode("utf-8")
# Create embedding
request_body = {
    "taskType": "SINGLE_EMBEDDING",
    "singleEmbeddingParams": {
        "embeddingPurpose": "GENERIC_INDEX",
        "embeddingDimension": EMBEDDING_DIMENSION,
        "image": {
            "format": "jpeg",
            "source": {"bytes": image_bytes}
        },
    },
}
response = bedrock_runtime.invoke_model(
    body=json.dumps(request_body),
    modelId=MODEL_ID,
    contentType="application/json",
)
# Extract embedding
response_body = json.loads(response["body"].read())
embedding = response_body["embeddings"][0]["embedding"]
print(f"Generated embedding with {len(embedding)} dimensions")</pre><p>To process video content, I use the asynchronous API. That’s a requirement for videos that are larger than 25MB when encoded as <a href="https://en.wikipedia.org/wiki/Base64">Base64</a>. First, I upload a local video to an S3 bucket in the same <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Region</a>.</p><pre class="lang-bash">aws s3 cp presentation.mp4 s3://my-video-bucket/videos/</pre><p>This example shows how to extract embeddings from both visual and audio components of a video file. The segmentation feature breaks longer videos into manageable chunks, making it practical to search through hours of content efficiently.</p><pre class="lang-python"># Initialize Amazon S3 client
s3 = boto3.client("s3", region_name="us-east-1")
print(f"Generating video embedding with {MODEL_ID} ...")
# Amazon S3 URIs
S3_VIDEO_URI = "s3://my-video-bucket/videos/presentation.mp4"
S3_EMBEDDING_DESTINATION_URI = "s3://my-embedding-destination-bucket/embeddings-output/"
# Create async embedding job for video with audio
model_input = {
    "taskType": "SEGMENTED_EMBEDDING",
    "segmentedEmbeddingParams": {
        "embeddingPurpose": "GENERIC_INDEX",
        "embeddingDimension": EMBEDDING_DIMENSION,
        "video": {
            "format": "mp4",
            "embeddingMode": "AUDIO_VIDEO_COMBINED",
            "source": {
                "s3Location": {"uri": S3_VIDEO_URI}
            },
            "segmentationConfig": {
                "durationSeconds": 15  # Segment into 15-second chunks
            },
        },
    },
}
response = bedrock_runtime.start_async_invoke(
    modelId=MODEL_ID,
    modelInput=model_input,
    outputDataConfig={
        "s3OutputDataConfig": {
            "s3Uri": S3_EMBEDDING_DESTINATION_URI
        }
    },
)
invocation_arn = response["invocationArn"]
print(f"Async job started: {invocation_arn}")
# Poll until job completes
print("\nPolling for job completion...")
while True:
    job = bedrock_runtime.get_async_invoke(invocationArn=invocation_arn)
    status = job["status"]
    print(f"Status: {status}")
    if status != "InProgress":
        break
    time.sleep(15)
# Check if job completed successfully
if status == "Completed":
    output_s3_uri = job["outputDataConfig"]["s3OutputDataConfig"]["s3Uri"]
    print(f"\nSuccess! Embeddings at: {output_s3_uri}")
    # Parse S3 URI to get bucket and prefix
    s3_uri_parts = output_s3_uri[5:].split("/", 1)  # Remove "s3://" prefix
    bucket = s3_uri_parts[0]
    prefix = s3_uri_parts[1] if len(s3_uri_parts) &gt; 1 else ""
    # AUDIO_VIDEO_COMBINED mode outputs to embedding-audio-video.jsonl
    # The output_s3_uri already includes the job ID, so just append the filename
    embeddings_key = f"{prefix}/embedding-audio-video.jsonl".lstrip("/")
    print(f"Reading embeddings from: s3://{bucket}/{embeddings_key}")
    # Read and parse JSONL file
    response = s3.get_object(Bucket=bucket, Key=embeddings_key)
    content = response['Body'].read().decode('utf-8')
    embeddings = []
    for line in content.strip().split('\n'):
        if line:
            embeddings.append(json.loads(line))
    print(f"\nFound {len(embeddings)} video segments:")
    for i, segment in enumerate(embeddings):
        print(f"  Segment {i}: {segment.get('startTime', 0):.1f}s - {segment.get('endTime', 0):.1f}s")
        print(f"    Embedding dimension: {len(segment.get('embedding', []))}")
else:
    print(f"\nJob failed: {job.get('failureMessage', 'Unknown error')}")</pre><p>With our embeddings generated, we need a place to store and search them efficiently. This example demonstrates setting up a vector store using Amazon S3 Vectors, which provides the infrastructure needed for similarity search at scale. Think of this as creating a searchable index where semantically similar content naturally clusters together. When adding an embedding to the index, I use the metadata to specify the original format and the content being indexed.</p><pre class="lang-python"># Initialize Amazon S3 Vectors client
s3vectors = boto3.client("s3vectors", region_name="us-east-1")
# Configuration
VECTOR_BUCKET = "my-vector-store"
INDEX_NAME = "embeddings"
# Create vector bucket and index (if they don't exist)
try:
    s3vectors.get_vector_bucket(vectorBucketName=VECTOR_BUCKET)
    print(f"Vector bucket {VECTOR_BUCKET} already exists")
except s3vectors.exceptions.NotFoundException:
    s3vectors.create_vector_bucket(vectorBucketName=VECTOR_BUCKET)
    print(f"Created vector bucket: {VECTOR_BUCKET}")
try:
    s3vectors.get_index(vectorBucketName=VECTOR_BUCKET, indexName=INDEX_NAME)
    print(f"Vector index {INDEX_NAME} already exists")
except s3vectors.exceptions.NotFoundException:
    s3vectors.create_index(
        vectorBucketName=VECTOR_BUCKET,
        indexName=INDEX_NAME,
        dimension=EMBEDDING_DIMENSION,
        dataType="float32",
        distanceMetric="cosine"
    )
    print(f"Created index: {INDEX_NAME}")
texts = [
    "Machine learning on AWS",
    "Amazon Bedrock provides foundation models",
    "S3 Vectors enables semantic search"
]
print(f"\nGenerating embeddings for {len(texts)} texts...")
# Generate embeddings using Amazon Nova for each text
vectors = []
for text in texts:
    response = bedrock_runtime.invoke_model(
        body=json.dumps({
            "taskType": "SINGLE_EMBEDDING",
            "singleEmbeddingParams": {
                "embeddingDimension": EMBEDDING_DIMENSION,
                "text": {"truncationMode": "END", "value": text}
            }
        }),
        modelId=MODEL_ID,
        accept="application/json",
        contentType="application/json"
    )
    response_body = json.loads(response["body"].read())
    embedding = response_body["embeddings"][0]["embedding"]
    vectors.append({
        "key": f"text:{text[:50]}",  # Unique identifier
        "data": {"float32": embedding},
        "metadata": {"type": "text", "content": text}
    })
    print(f"  ✓ Generated embedding for: {text}")
# Add all vectors to store in a single call
s3vectors.put_vectors(
    vectorBucketName=VECTOR_BUCKET,
    indexName=INDEX_NAME,
    vectors=vectors
)
print(f"\nSuccessfully added {len(vectors)} vectors to the store in one put_vectors call!")</pre><p>This final example demonstrates the capability of searching across different content types with a single query, finding the most similar content regardless of whether it originated from text, images, videos, or audio. The distance scores help you understand how closely related the results are to your original query.</p><pre class="lang-python"># Text to query
query_text = "foundation models"  
print(f"\nGenerating embeddings for query '{query_text}' ...")
# Generate embeddings
response = bedrock_runtime.invoke_model(
    body=json.dumps({
        "taskType": "SINGLE_EMBEDDING",
        "singleEmbeddingParams": {
            "embeddingPurpose": "GENERIC_RETRIEVAL",
            "embeddingDimension": EMBEDDING_DIMENSION,
            "text": {"truncationMode": "END", "value": query_text}
        }
    }),
    modelId=MODEL_ID,
    accept="application/json",
    contentType="application/json"
)
response_body = json.loads(response["body"].read())
query_embedding = response_body["embeddings"][0]["embedding"]
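# (Illustrative aside, not part of the original post.) The index was created
# with distanceMetric="cosine", so the distance scores returned below are
# cosine distances (1 - cosine similarity): values near 0 mean near-identical
# vectors. The same score can be reproduced locally for any pair of vectors:
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)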
print(f"Searching for similar embeddings...\n")
# Search for top 5 most similar vectors
response = s3vectors.query_vectors(
    vectorBucketName=VECTOR_BUCKET,
    indexName=INDEX_NAME,
    queryVector={"float32": query_embedding},
    topK=5,
    returnDistance=True,
    returnMetadata=True
)
# Display results
print(f"Found {len(response['vectors'])} results:\n")
for i, result in enumerate(response["vectors"], 1):
    print(f"{i}. {result['key']}")
    print(f"   Distance: {result['distance']:.4f}")
    if result.get("metadata"):
        print(f"   Metadata: {result['metadata']}")
    print()</pre><p>Crossmodal search is one of the key advantages of multimodal embeddings. With crossmodal search, you can query with text and find relevant images. You can also search for videos using text descriptions, find audio clips that match certain topics, or discover documents based on their visual and textual content. For your reference, the full script with all previous examples merged together is here:</p><pre class="lang-python">import json
import base64
import time
import boto3
MODEL_ID = "amazon.nova-2-multimodal-embeddings-v1:0"
EMBEDDING_DIMENSION = 3072
# Initialize Amazon Bedrock Runtime client
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
print(f"Generating text embedding with {MODEL_ID} ...")
# Text to embed
text = "Amazon Nova is a multimodal foundation model"
# Create embedding
request_body = {
    "taskType": "SINGLE_EMBEDDING",
    "singleEmbeddingParams": {
        "embeddingPurpose": "GENERIC_INDEX",
        "embeddingDimension": EMBEDDING_DIMENSION,
        "text": {"truncationMode": "END", "value": text},
    },
}
response = bedrock_runtime.invoke_model(
    body=json.dumps(request_body),
    modelId=MODEL_ID,
    contentType="application/json",
)
# Extract embedding
response_body = json.loads(response["body"].read())
embedding = response_body["embeddings"][0]["embedding"]
print(f"Generated embedding with {len(embedding)} dimensions")
# Read and encode image
print(f"Generating image embedding with {MODEL_ID} ...")
with open("photo.jpg", "rb") as f:
    image_bytes = base64.b64encode(f.read()).decode("utf-8")
# Create embedding
request_body = {
    "taskType": "SINGLE_EMBEDDING",
    "singleEmbeddingParams": {
        "embeddingPurpose": "GENERIC_INDEX",
        "embeddingDimension": EMBEDDING_DIMENSION,
        "image": {
            "format": "jpeg",
            "source": {"bytes": image_bytes}
        },
    },
}
response = bedrock_runtime.invoke_model(
    body=json.dumps(request_body),
    modelId=MODEL_ID,
    contentType="application/json",
)
# Extract embedding
response_body = json.loads(response["body"].read())
embedding = response_body["embeddings"][0]["embedding"]
print(f"Generated embedding with {len(embedding)} dimensions")
# Initialize Amazon S3 client
s3 = boto3.client("s3", region_name="us-east-1")
print(f"Generating video embedding with {MODEL_ID} ...")
# Amazon S3 URIs
S3_VIDEO_URI = "s3://my-video-bucket/videos/presentation.mp4"
# Amazon S3 output bucket and location
S3_EMBEDDING_DESTINATION_URI = "s3://my-video-bucket/embeddings-output/"
# Create async embedding job for video with audio
model_input = {
    "taskType": "SEGMENTED_EMBEDDING",
    "segmentedEmbeddingParams": {
        "embeddingPurpose": "GENERIC_INDEX",
        "embeddingDimension": EMBEDDING_DIMENSION,
        "video": {
            "format": "mp4",
            "embeddingMode": "AUDIO_VIDEO_COMBINED",
            "source": {
                "s3Location": {"uri": S3_VIDEO_URI}
            },
            "segmentationConfig": {
                "durationSeconds": 15  # Segment into 15-second chunks
            },
        },
    },
}
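```python
# (Illustrative note, not part of the original script: with
# "durationSeconds": 15, a video of duration D seconds is split into
# ceil(D / 15) segments, each producing one embedding.)
import math
expected_segments = math.ceil(100 / 15)  # a 100-second video yields 7 segments
```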
response = bedrock_runtime.start_async_invoke(
    modelId=MODEL_ID,
    modelInput=model_input,
    outputDataConfig={
        "s3OutputDataConfig": {
            "s3Uri": S3_EMBEDDING_DESTINATION_URI
        }
    },
)
invocation_arn = response["invocationArn"]
print(f"Async job started: {invocation_arn}")
# Poll until job completes
print("\nPolling for job completion...")
while True:
    job = bedrock_runtime.get_async_invoke(invocationArn=invocation_arn)
    status = job["status"]
    print(f"Status: {status}")
    if status != "InProgress":
        break
    time.sleep(15)
# Check if job completed successfully
if status == "Completed":
    output_s3_uri = job["outputDataConfig"]["s3OutputDataConfig"]["s3Uri"]
    print(f"\nSuccess! Embeddings at: {output_s3_uri}")
    # Parse S3 URI to get bucket and prefix
    s3_uri_parts = output_s3_uri[5:].split("/", 1)  # Remove "s3://" prefix
    bucket = s3_uri_parts[0]
    prefix = s3_uri_parts[1] if len(s3_uri_parts) &gt; 1 else ""
    # AUDIO_VIDEO_COMBINED mode outputs to embedding-audio-video.jsonl
    # The output_s3_uri already includes the job ID, so just append the filename
    embeddings_key = f"{prefix}/embedding-audio-video.jsonl".lstrip("/")
    print(f"Reading embeddings from: s3://{bucket}/{embeddings_key}")
    # Read and parse JSONL file
    response = s3.get_object(Bucket=bucket, Key=embeddings_key)
    content = response['Body'].read().decode('utf-8')
    embeddings = []
    for line in content.strip().split('\n'):
        if line:
            embeddings.append(json.loads(line))
    print(f"\nFound {len(embeddings)} video segments:")
    for i, segment in enumerate(embeddings):
        print(f"  Segment {i}: {segment.get('startTime', 0):.1f}s - {segment.get('endTime', 0):.1f}s")
        print(f"    Embedding dimension: {len(segment.get('embedding', []))}")
else:
    print(f"\nJob failed: {job.get('failureMessage', 'Unknown error')}")
# Initialize Amazon S3 Vectors client
s3vectors = boto3.client("s3vectors", region_name="us-east-1")
# Configuration
VECTOR_BUCKET = "my-vector-store"
INDEX_NAME = "embeddings"
# Create vector bucket and index (if they don't exist)
try:
    s3vectors.get_vector_bucket(vectorBucketName=VECTOR_BUCKET)
    print(f"Vector bucket {VECTOR_BUCKET} already exists")
except s3vectors.exceptions.NotFoundException:
    s3vectors.create_vector_bucket(vectorBucketName=VECTOR_BUCKET)
    print(f"Created vector bucket: {VECTOR_BUCKET}")
try:
    s3vectors.get_index(vectorBucketName=VECTOR_BUCKET, indexName=INDEX_NAME)
    print(f"Vector index {INDEX_NAME} already exists")
except s3vectors.exceptions.NotFoundException:
    s3vectors.create_index(
        vectorBucketName=VECTOR_BUCKET,
        indexName=INDEX_NAME,
        dimension=EMBEDDING_DIMENSION,
        dataType="float32",
        distanceMetric="cosine"
    )
    print(f"Created index: {INDEX_NAME}")
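```python
# (Illustrative helper, not part of the original script: the index
# above uses distanceMetric="cosine", so the distances returned by
# query_vectors can be sanity-checked locally as 1 - cosine_similarity.
# Identical vectors give distance 0.0, orthogonal vectors give 1.0.)
def cosine_distance(a, b):
    # distance = 1 - (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return 1.0 - dot / (norm_a * norm_b)
```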
texts = [
    "Machine learning on AWS",
    "Amazon Bedrock provides foundation models",
    "S3 Vectors enables semantic search"
]
print(f"\nGenerating embeddings for {len(texts)} texts...")
# Generate embeddings using Amazon Nova for each text
vectors = []
for text in texts:
    response = bedrock_runtime.invoke_model(
        body=json.dumps({
            "taskType": "SINGLE_EMBEDDING",
            "singleEmbeddingParams": {
                "embeddingPurpose": "GENERIC_INDEX",
                "embeddingDimension": EMBEDDING_DIMENSION,
                "text": {"truncationMode": "END", "value": text}
            }
        }),
        modelId=MODEL_ID,
        accept="application/json",
        contentType="application/json"
    )
    response_body = json.loads(response["body"].read())
    embedding = response_body["embeddings"][0]["embedding"]
    vectors.append({
        "key": f"text:{text[:50]}",  # Unique identifier
        "data": {"float32": embedding},
        "metadata": {"type": "text", "content": text}
    })
    print(f"  ✓ Generated embedding for: {text}")
# Add all vectors to store in a single call
s3vectors.put_vectors(
    vectorBucketName=VECTOR_BUCKET,
    indexName=INDEX_NAME,
    vectors=vectors
)
print(f"\nSuccessfully added {len(vectors)} vectors to the store in one put_vectors call!")
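```python
# (Illustrative sketch, not part of the original script: put_vectors
# accepts a limited number of vectors per request, so larger datasets
# should be written in chunks. The batch size of 500 below is an
# assumption, not a documented limit; check the S3 Vectors quotas.)
def chunked(items, size=500):
    for i in range(0, len(items), size):
        yield items[i:i + size]
# Usage:
# for batch in chunked(vectors):
#     s3vectors.put_vectors(vectorBucketName=VECTOR_BUCKET,
#                           indexName=INDEX_NAME, vectors=batch)
```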
# Text to query
query_text = "foundation models"  
print(f"\nGenerating embeddings for query '{query_text}' ...")
# Generate embeddings
response = bedrock_runtime.invoke_model(
    body=json.dumps({
        "taskType": "SINGLE_EMBEDDING",
        "singleEmbeddingParams": {
            "embeddingPurpose": "GENERIC_RETRIEVAL",
            "embeddingDimension": EMBEDDING_DIMENSION,
            "text": {"truncationMode": "END", "value": query_text}
        }
    }),
    modelId=MODEL_ID,
    accept="application/json",
    contentType="application/json"
)
response_body = json.loads(response["body"].read())
query_embedding = response_body["embeddings"][0]["embedding"]
print(f"Searching for similar embeddings...\n")
# Search for top 5 most similar vectors
response = s3vectors.query_vectors(
    vectorBucketName=VECTOR_BUCKET,
    indexName=INDEX_NAME,
    queryVector={"float32": query_embedding},
    topK=5,
    returnDistance=True,
    returnMetadata=True
)
# Display results
print(f"Found {len(response['vectors'])} results:\n")
for i, result in enumerate(response["vectors"], 1):
    print(f"{i}. {result['key']}")
    print(f"   Distance: {result['distance']:.4f}")
    if result.get("metadata"):
        print(f"   Metadata: {result['metadata']}")
    print()</pre><p>For production applications, embeddings can be stored in any vector database. <a href="https://aws.amazon.com/opensearch-service/">Amazon OpenSearch Service</a> offers native integration with Nova Multimodal Embeddings at launch, making it straightforward to build scalable search applications. As shown in the examples before, <a href="https://aws.amazon.com/s3/features/vectors/">Amazon S3 Vectors</a> provides a simple way to store and query embeddings with your application data.</p><p><strong>Things to know</strong><br />Nova Multimodal Embeddings offers four output dimension options: 3,072, 1,024, 384, and 256. Larger dimensions provide more detailed representations but require more storage and computation. Smaller dimensions offer a practical balance between retrieval performance and resource efficiency. This flexibility helps you optimize for your specific application and cost requirements.</p><p>The model handles substantial context lengths. For text inputs, it can process up to 8,192 tokens at once. Video and audio inputs support segments of up to 30 seconds, and the model can segment longer files. This segmentation capability is particularly useful when working with large media files—the model splits them into manageable pieces and creates embeddings for each segment.</p><p>The model includes responsible AI features built into Amazon Bedrock. Content submitted for embedding goes through Amazon Bedrock content safety filters, and the model includes fairness measures to reduce bias.</p><p>As described in the code examples, the model can be invoked through both synchronous and asynchronous APIs. The synchronous API works well for real-time applications where you need immediate responses, such as processing user queries in a search interface. 
The asynchronous API handles latency-insensitive workloads more efficiently, making it suitable for processing large content such as videos.</p><p><strong>Availability and pricing</strong><br /><a href="https://aws.amazon.com/ai/generative-ai/nova/">Amazon Nova Multimodal Embeddings</a> is available today in Amazon Bedrock in the US East (N. Virginia) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Region</a>. For detailed pricing information, visit the <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock pricing page</a>.</p><p>To learn more, see the <a href="https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html">Amazon Nova User Guide</a> for comprehensive documentation and the <a href="https://github.com/aws-samples/amazon-nova-samples">Amazon Nova model cookbook on GitHub</a> for practical code examples.</p><p>If you’re using an AI-powered assistant for software development such as <a href="https://aws.amazon.com/q/developer/">Amazon Q Developer</a> or <a href="https://kiro.dev/">Kiro</a>, you can set up the <a href="https://awslabs.github.io/mcp/servers/aws-api-mcp-server">AWS API MCP Server</a> to help AI assistants interact with AWS services and resources, and the <a href="https://awslabs.github.io/mcp/servers/aws-knowledge-mcp-server">AWS Knowledge MCP Server</a> to provide up-to-date documentation, code samples, and knowledge about the regional availability of AWS APIs and CloudFormation resources.</p><p>Start building multimodal AI-powered applications with Nova Multimodal Embeddings today, and share your feedback through <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag/amazon-bedrock">AWS re:Post for Amazon Bedrock</a> or your usual AWS Support contacts.</p><p>Editor’s note (11/5/2025): Support for batch inference added.</p><p>— <a href="https://x.com/danilop">Danilo</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/amazon-nova-multimodal-embeddings-now-available-in-amazon-bedrock/"/>
    <updated>2025-10-28T16:12:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-rtb-fabric-aws-customer-carbon-footprint-tool-aws-secret-west-region-and-more-october-27-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: AWS RTB Fabric, AWS Customer Carbon Footprint Tool, AWS Secret-West Region, and more (October 27, 2025)]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>This week started with challenges for many using services in the N. Virginia (us-east-1) Region. On Monday, we experienced a service disruption affecting DynamoDB and several other services due to a DNS configuration problem. The issue has been fully resolved, and you can read the full details in our <a href="https://aws.amazon.com/message/101925/">official summary</a>. As someone who works closely with developers, I know how disruptive these incidents can be to your applications and your users. The teams are learning valuable lessons from this event that will help improve our services going forward.</p><p><strong>Last week’s launches</strong></p><p>On a brighter note, I’m excited to share some launches and updates from this past week that I think you’ll find interesting.</p><p><a href="https://aws.amazon.com/blogs/aws/introducing-aws-rtb-fabric-for-real-time-advertising-technology-workloads/">AWS RTB Fabric is now generally available</a> — If you’re working in advertising technology, you’ll be interested in AWS RTB Fabric, a fully managed service for real-time bidding workloads. It connects AdTech partners like SSPs, DSPs, and publishers through a private, high-performance network that delivers single-digit millisecond latency—critical for those split-second ad auctions. The service reduces networking costs by up to 80% compared to standard cloud solutions with no upfront commitments, and includes three built-in modules to optimize traffic, improve bid efficiency, and increase bid response rates. AWS RTB Fabric is available in US East (N. 
Virginia), US West (Oregon), Asia Pacific (Singapore and Tokyo), and Europe (Frankfurt and Ireland).</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-customer-carbon-footprint-tool-scope-3-emissions-data/">Customer Carbon Footprint Tool now includes Scope 3 emissions data</a> — Understanding the full environmental impact of your cloud usage just got more comprehensive. The AWS Customer Carbon Footprint Tool (CCFT) now covers all three industry-standard emission scopes as defined by the Greenhouse Gas Protocol. This update adds Scope 3 emissions—covering the lifecycle carbon impact from manufacturing servers, powering AWS facilities, and transporting equipment to data centers—plus Scope 1 natural gas and refrigerants. With historical data available back to January 2022, you can track your progress over time and make informed decisions about your cloud strategy to meet sustainability goals. Access the data through the CCFT dashboard or AWS Billing and Cost Management Data Exports.</p><p><strong>Additional updates</strong></p><p>I thought these projects, blog posts, and news items were also interesting:</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-secret-west-region-is-now-available">AWS Secret-West Region is now available</a> — AWS launched its second Secret Region in the western United States, capable of handling mission-critical workloads at the Secret U.S. security classification level. This new region provides enhanced performance for latency-sensitive workloads and offers multi-region resiliency with geographic separation for Intelligence Community and Department of Defense missions. 
The infrastructure features data centers and network architecture designed, built, accredited, and operated for security compliance with Intelligence Community Directive requirements.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-cloudwatch-incident-report/">Amazon CloudWatch now generates incident reports</a> — CloudWatch investigations can now automatically generate comprehensive incident reports that include executive summaries, timeline of events, impact assessments, and actionable recommendations. The feature collects and correlates telemetry data along with investigation actions to help teams identify patterns and implement preventive measures through structured post-incident analysis.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-connect-threaded-views-conversation-history/">Amazon Connect introduces threaded email views</a> — Amazon Connect email now displays exchanges in a threaded format and automatically includes prior conversation context when agents compose responses. These enhancements make it easier for both agents and customers to maintain context and continuity across interactions, delivering a more natural and familiar email experience.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-ec2-i8g-instances-available-in-additional-aws/">Amazon EC2 I8g instances expand to additional regions</a> — Storage Optimized I8g instances are now available in Europe (London), Asia Pacific (Singapore), and Asia Pacific (Tokyo). 
Powered by AWS Graviton4 processors and third-generation AWS Nitro SSDs, these instances deliver up to 60% better compute performance and 65% better real-time storage performance per TB compared to previous generation I4g instances, with storage I/O latency reduced by up to 50%.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-location-services-new-map-styling-enchanced-customization/">AWS Location Service adds enhanced map styling</a> — Developers can now incorporate terrain visualization, contour lines, real-time traffic overlays, and transportation-specific routing details through the GetStyleDescriptor API. The new styling parameters enable tailored maps for specific applications—from outdoor navigation to logistics planning.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-cloudwatch-synthetics-bundled-multi-check-canaries/">CloudWatch Synthetics introduces multi-check canaries</a> — You can now bundle up to 10 different monitoring steps in a single canary using JSON configuration without custom scripts. The multi-check blueprints support HTTP endpoints with authentication, DNS validation, SSL certificate monitoring, and TCP port checks, making API monitoring more cost-effective.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-s3-generates-aws-cloudtrail-events/">Amazon S3 Tables now generates CloudTrail events</a> — S3 Tables now logs AWS CloudTrail events for automatic maintenance operations, including compaction and snapshot expiration. 
This enables organizations to audit the maintenance activities that S3 Tables automatically performs to enhance query performance and reduce operational costs.</p><p><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-lambda-payload-size-256-kb-1-mb-invocations/">AWS Lambda increases asynchronous invocation payload size to 1 MB</a> — Lambda has quadrupled the maximum payload size for asynchronous invocations from 256 KB to 1 MB across all AWS Commercial and GovCloud (US) Regions. This expansion streamlines architectures by allowing comprehensive data to be included in a single event, eliminating the need for complex data chunking or external storage solutions. Use cases now better supported include large language model prompts, detailed telemetry signals, complex ML output structures, and complete user profiles. The update applies to asynchronous invocations through the Lambda API or push-based events from services like S3, CloudWatch, SNS, EventBridge, and Step Functions. Pricing remains at 1 request charge for the first 256 KB, with 1 additional charge per 64 KB chunk thereafter.</p><p><strong>Upcoming AWS events</strong></p><p>Keep a look out and be sure to sign up for these upcoming events:</p><p><a href="https://reinvent.awsevents.com/">AWS re:Invent 2025</a> (December 1-5, 2025, Las Vegas) — AWS flagship annual conference offering collaborative innovation through peer-to-peer learning, expert-led discussions, and invaluable networking opportunities. Registration is now open.</p><p>Join the <a href="https://builder.aws.com/">AWS Builder Center</a> to learn, build, and connect with builders in the AWS community. Browse for upcoming in-person and virtual developer-focused events in your area.</p><p>That’s all for this week. 
Check back next Monday for another Weekly Roundup!</p><p>~ micah</p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-rtb-fabric-aws-customer-carbon-footprint-tool-aws-secret-west-region-and-more-october-27-2025/"/>
    <updated>2025-10-27T17:37:00+01:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-aws-rtb-fabric-for-real-time-advertising-technology-workloads/</id>
    <title><![CDATA[Introducing AWS RTB Fabric for real-time advertising technology workloads]]></title>
<summary><![CDATA[<section class="blog-post-content lb-rtxt"><p>Today, we’re announcing AWS RTB Fabric, a fully managed service purpose-built for real-time bidding (RTB) advertising workloads. The service helps advertising technology (AdTech) companies seamlessly connect with their supply and demand partners, such as <a href="https://advertising.amazon.com/lp/build-your-business-with-amazon-advertising?tag=googhydr-20&amp;ref=pd_sl_32yvxwiyd_e_ps_gg_b_au_en_d_core_e_646005230145&amp;k_amazon%20ads&amp;group_145097256426">Amazon Ads</a>, <a href="https://gumgum.com/">GumGum</a>, <a href="https://www.kargo.com/">Kargo</a>, <a href="https://mobilefuse.com/">MobileFuse</a>, <a href="https://www.sovrn.com/">Sovrn</a>, <a href="https://triplelift.com/">TripleLift</a>, <a href="https://www.viantinc.com/">Viant</a>, <a href="https://yieldmo.com/">Yieldmo</a>, and more, to run high-volume, latency-sensitive RTB workloads on <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> with consistent single-digit millisecond performance and up to 80% lower networking costs compared to standard cloud networking.</p><p>AWS RTB Fabric provides a dedicated, high-performance network environment for RTB workloads and partner integrations without requiring colocated, on-premises infrastructure or upfront commitments. The following diagram shows the high-level architecture of RTB Fabric.</p><p><img class="aligncenter size-full wp-image-99960" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/20/Screenshot-2025-10-20-at-14.05.49.png" alt="" width="792" height="343" /></p><p>AWS RTB Fabric also includes modules, a capability that helps customers bring their own and partner applications securely into the compute environment used for real-time bidding. Modules support containerized applications and <a href="https://aws.amazon.com/what-is/foundation-models/">foundation models (FMs)</a> that can enhance transaction efficiency and bidding effectiveness. 
At launch, AWS RTB Fabric includes modules for optimizing traffic management, improving bid efficiency, and increasing bid response rates, all running inline within the service for consistent low-latency execution.</p><p>The growth of programmatic advertising has created a need for low-latency, cost-efficient infrastructure to support RTB workloads. AdTech companies process millions of bid requests per second across publishers, supply-side platforms (SSPs), and demand-side platforms (DSPs). These workloads are highly sensitive to latency because most RTB auctions must complete within 200–300 milliseconds and require reliable, high-speed exchange of OpenRTB requests and responses among multiple partners. Many companies have addressed this by deploying infrastructure in colocation data centers near key partners, which reduces latency but adds operational complexity, long provisioning cycles, and high costs. Others have turned to cloud infrastructure to gain elasticity and scale, but they often face complex provisioning, partner-specific connectivity, and long-term commitments to achieve cost efficiency. These gaps add operational overhead and limit agility. AWS RTB Fabric solves these challenges by providing a managed private network built for RTB workloads that delivers consistent performance, simplifies partner onboarding, and achieves predictable cost efficiency without the burden of maintaining colocation or custom networking setups.</p><p><strong class="c6">Key capabilities</strong><br />AWS RTB Fabric introduces a managed foundation for running RTB workloads at scale. The service provides the following key capabilities:</p><ul><li><strong>Simplified connectivity to AdTech partners</strong> – When you register an RTB Fabric gateway, the service automatically generates secure endpoints that can be shared with selected partners. 
Using the AWS RTB Fabric API, you can create optimized, private connections to exchange RTB traffic securely across different environments. External Links are also available to connect with partners who aren’t using RTB Fabric, such as those operating on premises or in third-party cloud environments. This approach shortens integration time and simplifies collaboration among AdTech participants.</li>
<li><strong>Dedicated network for low-latency advertising transactions –</strong> AWS RTB Fabric provides a managed, high-performance network layer optimized for OpenRTB communication. It connects AdTech participants such as SSPs, DSPs, and publishers through private, high-speed links that deliver consistent single-digit millisecond latency. The service automatically optimizes routing paths to maintain predictable performance and reduce networking costs, without requiring manual peering or configuration.</li>
<li><strong>Pricing model aligned with RTB economics –</strong> AWS RTB Fabric uses a transaction-based pricing model designed to align with programmatic advertising economics. Customers are billed per billion transactions, providing predictable infrastructure costs that align with how advertising exchanges, SSPs, and DSPs operate.</li>
<li><strong>Built-in traffic management modules</strong> – AWS RTB Fabric includes configurable modules that help AdTech workloads operate efficiently and reliably. Modules such as Rate Limiter, OpenRTB Filter, and Error Masking help you control request volume, validate message formats, and manage response handling directly in the network path. These modules execute inline within the AWS RTB Fabric environment, maintaining network-speed performance without adding application-level latency. All configurations are managed through the AWS RTB Fabric API, so you can define and update rules programmatically as your workloads scale.</li>
</ul><p><strong class="c6">Getting started</strong><br />Today, you can start building with AWS RTB Fabric using the <a href="https://aws.amazon.com/console/?nc2=type_a">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, or <a href="https://aws.amazon.com/what-is/iac/">infrastructure-as-code (IaC)</a> tools such as <a href="https://aws.amazon.com/cloudformation/?nc2=type_a">AWS CloudFormation</a> and <a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/choose-iac-tool/terraform.html">Terraform</a>.</p><p>The console provides a visual entry point to view and manage RTB gateways and links, as shown on the <strong>Dashboard</strong> of the <a href="https://console.aws.amazon.com/rtbfabric/home">AWS RTB Fabric console</a>.</p><p><img class="aligncenter wp-image-100076 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/23/2025-rtb-fabric-dashboard.png" alt="" width="2540" height="1404" /></p><p>You can also use the AWS CLI to configure gateways, create links, and manage traffic programmatically. When I started building with AWS RTB Fabric, I used the AWS CLI to configure everything from gateway creation to link setup and traffic monitoring. The setup ran inside my <a href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud (Amazon VPC)</a> endpoint while AWS managed the low-latency infrastructure that connected workloads.</p><p>To begin, I created a <strong>requester gateway</strong> to send bid requests and a <strong>responder gateway</strong> to receive and process bid responses. These gateways act as secure communication points within the AWS RTB Fabric.</p><pre class="lang-bash"># Create a requester gateway with required parameters
aws rtbfabric create-requester-gateway \
  --description "My RTB requester gateway" \
  --vpc-id vpc-12345678 \
  --subnet-ids subnet-abc12345 subnet-def67890 \
  --security-group-ids sg-12345678 \
  --client-token "unique-client-token-123"
</pre><pre class="lang-bash"># Create a responder gateway with required parameters
aws rtbfabric create-responder-gateway \
  --description "My RTB responder gateway" \
  --vpc-id vpc-01f345ad6524a6d7 \
  --subnet-ids subnet-abc12345 subnet-def67890 \
  --security-group-ids sg-12345678 \
  --dns-name responder.example.com \
  --port 443 \
  --protocol HTTPS
</pre><p>After both gateways were active, I created a link from the requester to the responder to establish a private, low-latency communication path for OpenRTB traffic. The link handled routing and load balancing automatically.</p><pre class="lang-bash"># Requester account creating a link from requester gateway to a responder gateway
aws rtbfabric create-link \
  --gateway-id rtb-gw-requester123 \
  --peer-gateway-id rtb-gw-responder456 \
  --log-settings '{"applicationLogs":{"sampling":{"errorLog":10.0,"filterLog":10.0}}}'</pre><pre class="lang-bash"># Responder account accepting a link from requester gateway to responder gateway
aws rtbfabric accept-link \
  --gateway-id rtb-gw-responder456 \
  --link-id link-reqtoresplink789 \
  --log-settings '{"applicationLogs":{"sampling":{"errorLog":10.0,"filterLog":10.0}}}'</pre><p>I also connected with external partners using <strong>External Links</strong>, which extended my RTB workloads to on-premises or third-party environments while maintaining the same latency and security characteristics.</p><pre class="lang-bash"># Create an inbound external link endpoint for an external partner to send bid requests to
aws rtbfabric create-inbound-external-link \
  --gateway-id rtb-gw-responder456</pre><pre class="lang-bash"># Create an outbound external link for sending bid requests to an external partner
aws rtbfabric create-outbound-external-link \
  --gateway-id rtb-gw-requester123 \
  --public-endpoint "https://my-external-partner-responder.com"
</pre><p>To manage traffic efficiently, I added modules directly into the data path. The Rate Limiter module controlled request volume, and the OpenRTB Filter validated message formats inline at network speed.</p><pre class="lang-bash"># Attach a rate limiting module
aws rtbfabric update-link-module-flow \
  --gateway-id rtb-gw-responder456 \
  --link-id link-toresponder789 \
  --modules '{"name":"RateLimiter","moduleParameters":{"rateLimiter":{"tps":10000}}}'</pre><p>Finally, I used <a href="https://aws.amazon.com/cloudwatch/?nc2=type_a">Amazon CloudWatch</a> to monitor throughput, latency, and module performance, and I exported logs to <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> for auditing and optimization.</p><p>All configurations can also be automated with AWS CloudFormation or Terraform, allowing consistent, repeatable deployment across multiple environments. With RTB Fabric, I could focus on optimizing bidding logic while AWS maintained predictable, single-digit millisecond performance across my AdTech partners.</p><p>For more details, refer to the <a href="https://docs.aws.amazon.com/rtb-fabric/latest/userguide/what-is-rtb-fabric.html">AWS RTB Fabric User Guide</a>.</p><p><strong class="c6">Now available</strong><br />AWS RTB Fabric is available today in the following <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a>: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).</p><p>AWS RTB Fabric is continually evolving to address the changing needs of the AdTech industry. The service expands its capabilities to support secure integration of advanced applications and AI-driven optimizations in real-time bidding workflows, helping customers simplify operations and improve performance on AWS. 
To learn more about AWS RTB Fabric, visit the <a href="http://aws.amazon.com/rtb-fabric">AWS RTB Fabric page</a>.</p><p>– <a href="https://www.linkedin.com/in/zhengyubin714/">Betty</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="dfebd13c-c19d-45bd-9c12-159eda5df7d3" data-title="Introducing AWS RTB Fabric for real-time advertising technology workloads" data-url="https://aws.amazon.com/blogs/aws/introducing-aws-rtb-fabric-for-real-time-advertising-technology-workloads/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-aws-rtb-fabric-for-real-time-advertising-technology-workloads/"/>
    <updated>2025-10-23T10:32:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-customer-carbon-footprint-tool-now-includes-scope-3-emissions/</id>
    <title><![CDATA[Customer Carbon Footprint Tool Expands: Additional emissions categories including Scope 3 are now available]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Since it <a href="https://aws.amazon.com/blogs/aws/new-customer-carbon-footprint-tool/">launched</a> in 2022, the <a href="https://aws.amazon.com/aws-cost-management/aws-customer-carbon-footprint-tool/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Customer Carbon Footprint Tool (CCFT)</a> has supported our customers’ sustainability journey to track, measure, and review their carbon emissions by providing the estimated carbon emissions associated with their usage of <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> services.</p><p>In April, we made <a href="https://aws.amazon.com/blogs/aws-cloud-financial-management/updated-carbon-methodology-for-the-aws-customer-carbon-footprint-tool/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">major updates to the CCFT</a>, including easier access to carbon emissions data, visibility into emissions by <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Region</a>, inclusion of location-based emissions (LBM), an updated, independently verified methodology, and a move to a <a href="https://aws.amazon.com/about-aws/whats-new/2025/01/customer-carbon-footprint-tool-dedicated-page/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">dedicated page in the AWS Billing console</a>.</p><p>The CCFT is informed by the <a href="https://ghgprotocol.org/">Greenhouse Gas (GHG) Protocol</a>’s classification of a company’s emissions. Today, we’re announcing the inclusion of Scope 3 emissions data and an update to Scope 1 emissions in the CCFT. 
The new emission categories complement the existing Scope 1 and 2 data, giving our customers a comprehensive look into their carbon emissions.</p><p><img class="aligncenter size-full wp-image-99814" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/14/2025-ccft-scope3-ghg-protocol.jpg" alt="" width="1896" height="1314" /></p><p>In this updated methodology, we incorporate new emissions categories. We’ve added Scope 1 refrigerants and natural gas, alongside the existing Scope 1 emissions from fuel combustion in emergency backup generators (diesel). Although Scope 1 emissions represent a small share of overall emissions, including them gives our customers a complete picture of their carbon emissions.</p><p class="jss375" data-pm-slice="1 1 []">To decide which categories of Scope 3 to include in our model, we looked at how material each of them was to the overall carbon impact and confirmed that the vast majority of emissions were represented. With that in mind, the methodology now includes:</p><ul><li>
<p class="jss375" data-pm-slice="1 1 []"><strong>Fuel- and energy-related activities (“FERA” under the GHG Protocol)</strong> – This includes upstream emissions from purchased fuels, upstream emissions of purchased electricity, and transmission and distribution (T&amp;D) losses. AWS calculates these emissions using both LBM and the market-based method (MBM).</p>
</li>
<li>
<p class="jss375" data-pm-slice="1 1 []"><strong>IT hardware</strong> – AWS uses a comprehensive cradle-to-gate approach that tracks emissions from raw material extraction through manufacturing and transportation to AWS data centers. We use four calculation pathways: process-based life cycle assessment (LCA) with engineering attributes, extrapolation, representative category average LCA, and economic input-output LCA. AWS prioritizes the most detailed and accurate methods for components that contribute significantly to overall emissions.</p>
</li>
<li>
<p class="jss375" data-pm-slice="1 1 []"><strong>Buildings and equipment</strong> – AWS follows established whole building life cycle assessment (wbLCA) standards, considering emissions from construction, use, and end-of-life phases. The analysis covers data center shells, rooms, and long-lead equipment such as air handling units and generators. The methodology uses both process-based life cycle assessment models and economic input-output analysis to provide comprehensive coverage.</p>
</li>
</ul><p class="jss375" data-pm-slice="1 1 []">The Scope 3 emissions are then amortized over the assets’ service life (6 years for IT hardware, 50 years for buildings) to calculate monthly emissions that can be allocated to customers. This amortization means that we fairly distribute the total embodied carbon of each asset across its operational lifetime, accounting for scenarios such as early retirement or extended use.</p><p data-pm-slice="1 1 []">All these updates are part of methodology version 3.0.0 and are explained in detail in <a href="https://sustainability.aboutamazon.com/aws-customer-carbon-footprint-tool-methodology.pdf">our methodology document</a>, which has been independently verified by a third party.</p><p><strong class="c6">How to access the CCFT</strong><br />To get started, go to the <a href="https://console.aws.amazon.com/costmanagement/home?#/customer-carbon-footprint-tool?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Billing and Cost Management console</a> and choose <strong>Customer Carbon Footprint Tool</strong> under <strong>Cost and Usage Analysis</strong>. You can access your carbon emissions data in the dashboard, download a csv file, or export all data using basic SQL and visualize your data by integrating with <a href="https://aws.amazon.com/aws-cost-management/aws-data-exports/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Data Exports</a> and <a href="https://aws.amazon.com/quicksuite/quicksight/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Quick Sight</a>.</p><p>To ensure you can make meaningful year-over-year comparisons, we’ve recalculated historical data back to January 2022 using version 3 of the methodology. All the data displayed in the CCFT now uses version 3. To see historical data using v3, choose <strong>Create custom data export</strong>. 
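</p><p>The amortization approach described earlier reduces to simple arithmetic, sketched below in Python. Only the service lives (6 years for IT hardware, 50 years for buildings) come from the methodology; the asset totals are invented for illustration:</p>

```python
# Sketch of the amortization described in the post: an asset's embodied
# (Scope 3) carbon is spread evenly over its service life to yield monthly
# emissions. The asset totals below are hypothetical.
SERVICE_LIFE_MONTHS = {"it_hardware": 6 * 12, "building": 50 * 12}

def monthly_embodied_emissions(total_mtco2e: float, asset_class: str) -> float:
    """Monthly share of an asset's cradle-to-gate emissions, in MTCO2e."""
    return total_mtco2e / SERVICE_LIFE_MONTHS[asset_class]

server_monthly = monthly_embodied_emissions(1.8, "it_hardware")  # hypothetical server
shell_monthly = monthly_embodied_emissions(600.0, "building")    # hypothetical building shell
```

<p>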
A new data export now includes new columns breaking down emissions by Scope 1, 2, and 3.</p><p><img class="aligncenter wp-image-100026 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/2025-ccft-scope3-dashboard-1-1.png" alt="" width="2474" height="1320" /></p><p>You can see estimated AWS emissions and estimated emissions savings. The tool shows emissions calculated using the MBM for 38 months of data by default. You can find your emissions calculated using the LBM by choosing <strong>LBM</strong> in the <strong>Calculation method</strong> filter on the dashboard. The unit of measurement for carbon emissions is metric tons of carbon dioxide equivalent (MTCO2e), an industry-standard measure.</p><p><img class="aligncenter wp-image-100028 size-full c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/22/2025-ccft-scope3-dashboard-2-1.png" alt="" width="2552" height="2408" /></p><p>The <strong>Carbon emissions summary</strong> shows trends of your carbon emissions over time. You can also find emissions resulting from your usage of AWS services and across all AWS Regions. To learn more, visit <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/what-is-ccft.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Viewing your carbon footprint</a> in the AWS documentation.</p><p><strong class="c6">Voice of the customer</strong><br />Some of our customers had early access to these updates. This is what they shared with us:</p><p>Sunya Norman, senior vice president, Impact at Salesforce, shared, “Effective decarbonization begins with visibility into our carbon footprint, especially in Scope 3 emissions. Industry averages are only a starting point. 
The granular carbon data we get from cloud providers like AWS are critical to helping us better understand the actual emissions associated with our cloud infrastructure and focus reductions where they matter most.”</p><p>Gerhard Loske, Head of Environmental Management at SAP, said, “The latest updates to the CCFT are a big step forward in helping us manage SAP’s sustainability goals. With new Region-specific data, we can now see better where emissions are coming from and take targeted action. The upcoming addition of Scope 3 emissions will give us a much fuller picture of our carbon footprint across AWS workloads. These improvements make it easier for us to turn data into meaningful climate action.”</p><p>Pinterest’s Global Sustainability Lead, Mia Ketterling, highlighted the benefits of the Scope 3 emission data, saying, “By including Scope 3 emissions data in their CCFT, AWS empowers customers like Pinterest to more accurately measure and report the full carbon footprint of our digital operations. Enhanced transparency helps us drive meaningful climate action across our value chain.”</p><p>If you’re attending <a href="https://reinvent.awsevents.com/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Invent</a> in person in December, join technical leaders from <a href="https://registration.awsevents.com/flow/awsevents/reinvent2025/eventcatalog/page/eventcatalog?trk=registration.awsevents.com&amp;search=AIM332&amp;trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS, Adobe, and Salesforce</a> as they reveal how the Customer Carbon Footprint Tool supports their environmental initiatives.</p><p><strong class="c6">Now available</strong><br />With Scope 1, 2, and 3 coverage in the CCFT, you can track your emissions over time to understand how you’re trending towards your sustainability goals and see the impact of any carbon reduction projects you’ve implemented. 
To learn more, visit the <a href="https://aws.amazon.com/aws-cost-management/aws-customer-carbon-footprint-tool/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Customer Carbon Footprint Tool (CCFT) page</a>.</p><p>Give these new features a try in the <a href="https://console.aws.amazon.com/costmanagement/home?#/customer-carbon-footprint-tool?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS Billing and Cost Management console</a> and send feedback to <a href="https://repost.aws/tags/TAjDoYksr1R5imySsYgWbsEQ/aws-customer-carbon-footprint-tool?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">AWS re:Post for the CCFT</a> or through your usual AWS Support contacts.</p><p>— <a href="https://twitter.com/channyun">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="f21db48e-fca1-4e4b-bba6-6b6e0dcac14a" data-title="Customer Carbon Footprint Tool Expands: Additional emissions categories including Scope 3 are now available" data-url="https://aws.amazon.com/blogs/aws/aws-customer-carbon-footprint-tool-now-includes-scope-3-emissions/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-customer-carbon-footprint-tool-now-includes-scope-3-emissions/"/>
    <updated>2025-10-22T19:48:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-kiro-waitlist-ebs-volume-clones-ec2-capacity-manager-and-more-october-20-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: Kiro waitlist, EBS Volume Clones, EC2 Capacity Manager, and more (October 20, 2025)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>I’ve been inspired by all the activities that tech communities around the world have been hosting and participating in throughout the year. Here in the southern hemisphere we’re starting to dream about our upcoming summer breaks and closing out on some of the activities we’ve initiated this year. The tech community in South Africa is participating in <a href="https://www.linkedin.com/posts/veliswa-boya_south-africa-read-til-the-end-activity-7383492800132182016-2zHQ?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACfTl5IBUDxU_AXvLsUBjqE61lr2YXVFW6k">Amazon Q Developer coding challenges</a> that my colleagues and I are hosting throughout this month as a fun way to wind down activities for the year. The first one was hosted in Johannesburg last Friday with Durban and Cape Town coming up next.<br /><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/20/IMG-20251020-WA0005.jpg"><img class="size-medium wp-image-99971 alignnone" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/20/IMG-20251020-WA0005-300x169.jpg" alt="" width="300" height="169" /></a> <a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/20/IMG-20251020-WA0006.jpg"><img class="size-medium wp-image-99972 alignnone" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/20/IMG-20251020-WA0006-300x169.jpg" alt="" width="300" height="169" /></a></p><p><strong>Last week’s launches</strong><br />These are the launches from last week that caught my attention:</p><ul><li><strong><a href="https://kiro.dev/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Kiro</a> is now available for every developer</strong> — Since its launch more than 90 days ago, more than 100,000 developers have joined the waitlist to try Kiro out. 
The waitlist is gone, so if you want to try out this spec-driven approach to coding with AI, <a href="https://kiro.dev/downloads/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">sign up now</a>.</li>
<li><strong>Amazon EC2 Capacity Manager</strong> — If you’re using <a href="https://aws.amazon.com/ec2/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2)</a> at scale, operating hundreds of instance types across multiple Availability Zones and accounts with On-Demand Instances, Spot Instances, and Capacity Reservations, you’ll be pleased to learn that <a href="https://aws.amazon.com/blogs/aws/monitor-analyze-and-manage-capacity-usage-from-a-single-interface-with-amazon-ec2-capacity-manager/">EC2 Capacity Manager is now available to provide you with a centralized solution to monitor, analyze, and manage capacity usage across all accounts and AWS Regions from a single interface</a>.</li>
<li><strong>Amazon EBS Volume Clones</strong> — Sometimes you need production data to test a fix in a non-production environment before implementing it in production. Usually you’d take an EBS snapshot of this data and then create a new volume from that snapshot, meanwhile dealing with the operational overhead of this process. <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-ebs-volume-clones-create-instant-copies-of-your-ebs-volumes/">Learn about the availability of Amazon EBS Volume Clones</a>, a new capability for you to create instant point-in-time copies of your EBS volumes within the same Availability Zone.</li>
</ul><p><strong>Additional updates</strong><br />I thought these projects, blog posts, and news items were also interesting:</p><ul><li><strong>AWS Transfer Family SFTP connectors now support VPC-based connectivity</strong> — <a href="https://aws.amazon.com/blogs/aws/aws-transfer-family-sftp-connectors-now-support-vpc-based-connectivity/">AWS Transfer Family SFTP connectors now support</a> connectivity to remote SFTP servers through Amazon Virtual Private Cloud (Amazon VPC) environments.</li>
<li>As your business evolves, you might need to migrate workloads between AWS Regions. Perhaps you’re looking to reduce latency for users in new geographic areas, meet Region-specific compliance requirements, or you’re an ISV expanding your product’s availability. Whatever your reason, cross-Region migration needs careful planning, especially when dealing with encrypted resources. <a href="https://aws.amazon.com/blogs/compute/migrate-encrypted-amazon-ec2-instances-across-aws-regions-without-sharing-aws-kms-keys/">Read how to migrate encrypted Amazon EC2 instances across AWS Regions without sharing AWS KMS keys</a>.</li>
<li>Internet of Things (IoT) devices have transformed how we interact with our environments, from homes to industrial settings. However, as the number of connected devices grows, so does the complexity of managing them. Learn <a href="https://aws.amazon.com/blogs/machine-learning/build-a-device-management-agent-with-amazon-bedrock-agentcore/">how to build a device management agent with Amazon Bedrock AgentCore</a>.</li>
</ul><p><strong>Upcoming AWS events</strong><br />Keep a look out and be sure to sign up for these upcoming events:</p><p><a href="https://reinvent.awsevents.com/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS re:Invent 2025</a> (December 1-5, 2025, Las Vegas) — AWS flagship annual conference offering collaborative innovation through peer-to-peer learning, expert-led discussions, and invaluable networking opportunities.</p><p>Join the <a href="https://builder.aws.com/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">AWS Builder Center</a> to learn, build, and connect with builders in the AWS community. Browse here for <a href="https://aws.amazon.com/events/explore-aws-events/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">upcoming in-person</a> and <a href="https://aws.amazon.com/developer/events/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">virtual developer-focused events</a>.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Weekly Roundup</a>!</p><p>– <a href="https://www.linkedin.com/in/veliswa-boya/">Veliswa</a>.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="a8b06a44-3364-4919-90ef-da9b7456f0e2" data-title="AWS Weekly Roundup: Kiro waitlist, EBS Volume Clones, EC2 Capacity Manager, and more (October 20, 2025)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-kiro-waitlist-ebs-volume-clones-ec2-capacity-manager-and-more-october-20-2025/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-kiro-waitlist-ebs-volume-clones-ec2-capacity-manager-and-more-october-20-2025/"/>
    <updated>2025-10-20T18:00:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/monitor-analyze-and-manage-capacity-usage-from-a-single-interface-with-amazon-ec2-capacity-manager/</id>
    <title><![CDATA[Monitor, analyze, and manage capacity usage from a single interface with Amazon EC2 Capacity Manager]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, I’m happy to announce Amazon EC2 Capacity Manager, a centralized solution to monitor, analyze, and manage capacity usage across all accounts and AWS Regions from a single interface. This service aggregates capacity information with hourly refresh rates and provides prioritized optimization opportunities, streamlining capacity management workflows that previously required custom automation or manual data collection from multiple AWS services.</p><p>Organizations using <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> at scale operate hundreds of instance types across multiple Availability Zones and accounts, using On-Demand Instances, Spot Instances, and Capacity Reservations. This complexity means customers currently access capacity data through various AWS services including the <a href="https://aws.amazon.com/console/">AWS Management Console</a>, <a href="https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/">Cost and Usage Reports</a>, <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a>, and EC2 <code>describe</code> APIs. This distributed approach can create operational overhead through manual data collection, context switching between tools, and the need for custom automation to aggregate information for capacity optimization analysis.</p><p>EC2 Capacity Manager helps you overcome these operational complexities by consolidating all capacity data into a unified dashboard. 
You can now view cross-account and cross-Region capacity metrics for On-Demand Instances, Spot Instances, and Capacity Reservations across all commercial AWS Regions from a single location, eliminating the need to build custom data collection tools or navigate between multiple AWS services.</p><p>This consolidated visibility can help you discover cost savings by highlighting underutilized Capacity Reservations, analyzing usage patterns across instance types, and providing insights into Spot Instance interruption patterns. By having access to comprehensive capacity data in one place, you can make more informed decisions about rightsizing your infrastructure and optimizing your EC2 spending.</p><p>Let me show you the capabilities of EC2 Capacity Manager in detail.</p><p><strong>Getting started with EC2 Capacity Manager<br /></strong> On the AWS Management Console, I navigate to Amazon EC2 and select <strong>Capacity Manager</strong> from the navigation pane. I enable EC2 Capacity Manager through the service settings. The service aggregates historical data from the previous 14 days during initial setup.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/12/AN2274-0.png"><img class="alignnone size-full wp-image-99780" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/12/AN2274-0.png" alt="" width="1924" height="872" /></a></p><p>The main <strong>Dashboard</strong> displays capacity utilization across all instance types through a comprehensive overview section that presents key metrics at a glance. The capacity overview cards for <strong>Reservations</strong>, <strong>Usage</strong>, and <strong>Spot</strong> show trend indicators and percentage changes to help you identify capacity patterns quickly. 
You can apply filtering through the date filter controls, which include date range selection, time zone configuration, and interval settings.</p><p>You can select different units to analyze data by vCPUs, instance counts, or estimated costs to understand resource consumption patterns. Estimated costs are based on published On-Demand rates and do not include Savings Plans or other discounts. This pricing reference helps you compare the relative impact of underutilized capacity across different instance types—for example, 100 vCPU hours of unused p5 reservations represents a larger cost impact than 100 vCPU hours of unused t3 reservations.</p><p>The dashboard includes detailed <strong>Usage metrics</strong> with both total usage visualization and usage over time charts. The total usage section shows the breakdown between reserved usage, unreserved usage, and Spot usage. The usage over time chart provides visualization that tracks capacity trends over time, helping you identify usage patterns and peak demand periods.</p><p>Under <strong>Reservation metrics,</strong> <strong>Reserved capacity trends</strong> visualizes used and unused reserved capacity across the selected period, showing the proportion of reserved vCPU hours that remain unutilized compared with those actively consumed, helping you track reservation efficiency patterns and identify periods of consistent low utilization. This visibility can help you reduce costs by identifying underutilized reservations and helping you to make informed decisions about capacity adjustments.</p><p>The <strong>Unused capacity</strong> section lists underutilized capacity reservations by instance type and Availability Zone combinations, displaying specific utilization percentages and instance types across different Availability Zones. 
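</p><p>To see why the choice of unit matters when ranking unused capacity, here is a toy Python comparison. The per-vCPU-hour rates are placeholders, not actual AWS prices; Capacity Manager performs the equivalent comparison with published On-Demand rates:</p>

```python
# Toy comparison of unused-capacity cost impact across instance families.
# Rates are hypothetical placeholders, not real AWS On-Demand prices.
ASSUMED_RATE_PER_VCPU_HOUR = {"p5": 0.60, "t3": 0.02}  # USD, invented

def unused_cost_impact(instance_family: str, unused_vcpu_hours: float) -> float:
    """Estimated cost of unused reserved capacity for one instance family."""
    return ASSUMED_RATE_PER_VCPU_HOUR[instance_family] * unused_vcpu_hours

p5_impact = unused_cost_impact("p5", 100)  # same 100 unused vCPU hours...
t3_impact = unused_cost_impact("t3", 100)  # ...very different cost impact
```

<p>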
This prioritized list helps you identify potential savings with direct visibility into unused capacity costs.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/16/AN2274-1f.png"><img class="alignnone size-full wp-image-99900" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/16/AN2274-1f.png" alt="" width="1924" height="2003" /></a></p><p>The <strong>Usage</strong> tab provides detailed historical trends and usage statistics across all AWS Regions for Spot Instances, On-Demand Instances, Capacity Reservations, Reserved Instances, and Savings Plans. Dedicated Hosts usage is not included. The <strong>Dimension filter</strong> helps you group by and filter capacity data by Account ID, Region, Instance Family, Availability Zone, and Instance Type, creating custom views that reveal usage patterns across your accounts and AWS Organizations. This helps you analyze specific configurations and compare performance across accounts or Regions.</p><p>The <strong>Aggregations</strong> section provides a comprehensive usage table across EC2 and Spot Instances. You can select different units to analyze data by vCPUs, instance counts, or estimated costs to understand resource consumption patterns. The table shows instance family breakdowns with total usage statistics, reserved usage hours, unreserved usage hours, and Spot usage data. Each row includes a <strong>View breakdown</strong> action for a detailed analysis.</p><p>The <strong>Capacity usage or estimated cost trends</strong> section visualizes usage trends, reserved usage, unreserved usage, and Spot usage. You can filter the displayed data and adjust the unit of measurement to view historical patterns. 
These filtering and analysis tools help you identify usage trends, compare costs across dimensions, and make informed decisions for capacity planning and optimization.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/16/AN2274-2c.png"><img class="alignnone size-full wp-image-99901" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/16/AN2274-2c.png" alt="" width="1924" height="2157" /></a></p><p>When you choose <strong>View breakdown</strong> from the <strong>Aggregations</strong> table, you access detailed <strong>Usage breakdown</strong> based on the dimension filters you selected. This breakdown view shows usage patterns for individual instance types within the selected family and Availability Zone combinations, helping you identify specific optimization opportunities.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/16/AN2274-3b.png"><img class="alignnone size-full wp-image-99902" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/16/AN2274-3b.png" alt="" width="1924" height="1795" /></a></p><p>The <strong>Reservations</strong> tab displays capacity reservation utilization with automated analysis capabilities that generate prioritized lists of optimization opportunities. Similar to the <strong>Usage</strong> tab, you can apply dimension filters by Account ID, Region, Instance Family, Availability Zone, and Instance Type along with additional options related to the reservation details. On each of the tabs you can drill down to see data for individual line items. For reservations specifically, you can view specific reservations and access detailed information about On-Demand Capacity Reservations (ODCRs), including utilization history, configuration parameters, and current status. 
When the ODCR exists in the same account as Capacity Manager, you can modify reservation parameters directly from this interface, eliminating the need to navigate to separate EC2 console sections for reservation management.</p><p>The <strong>Statistics</strong> section provides summary metrics, including total reservations count, overall utilization percentage, reserved capacity totals, used and unused capacity volumes, average scheduled reservations, and counts of accounts, instance families, and Regions with reservations.</p><p>This consolidated view helps you understand reservation distribution and utilization patterns across your infrastructure. For example, you might discover that your development accounts consistently show 30% reservation utilization while production accounts exceed 95%, indicating an opportunity to redistribute or modify reservations. Similarly, you could identify that specific instance families in certain Regions have sustained low utilization rates, suggesting candidates for reservation adjustments or workload optimization. These insights help you make data-driven decisions about reservation purchases, modifications, or cancellations to better align your reserved capacity with actual usage patterns.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/16/AN2274-3c.png"><img class="alignnone size-full wp-image-99903" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/16/AN2274-3c.png" alt="" width="1924" height="2171" /></a></p><p>The <strong>Spot</strong> tab focuses on Spot Instance usage and displays the amount of time your Spot instances run before being interrupted. This analysis of Spot Instance usage patterns helps you identify optimization opportunities for Spot Instance workloads. 
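</p><p>As a toy illustration of the kind of summary this view provides, the following Python sketch computes runtime-before-interruption statistics from invented durations:</p>

```python
# Toy illustration of the Spot view: summarizing how long Spot Instances ran
# before interruption. Durations are invented; Capacity Manager derives the
# real figures from your account's usage history.
durations_minutes = [45, 120, 30, 240, 75]  # hypothetical per-instance runtimes

mean_runtime = sum(durations_minutes) / len(durations_minutes)
short_lived = [d for d in durations_minutes if d < 60]  # interrupted within an hour
```

<p>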
You can use Spot placement score recommendations to improve workload flexibility.</p><p>For organizations requiring data export capabilities, Capacity Manager includes data exports to <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> buckets for capacity analysis. You can view and manage your data exports through the <strong>Data exports</strong> tab, which helps you create new exports, monitor delivery status, and configure export schedules to analyze capacity data outside the AWS Management Console.</p><p>Data exports extend your analytical capabilities by storing capacity data beyond the 90-day retention period available through the console and APIs. This extended retention enables long-term trend analysis and historical capacity planning. You can also integrate exported data with existing analytics workflows, business intelligence tools, or custom reporting systems to incorporate EC2 capacity metrics into broader infrastructure analysis and decision-making processes.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/14/AN2274-4a.png"><img class="alignnone size-full wp-image-99831" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/14/AN2274-4a.png" alt="" width="1924" height="851" /></a></p><p>The <strong>Settings</strong> section provides configuration options for AWS Organizations integration, enabling centralized capacity management across multiple accounts. Organization administrators can enable enterprise-wide capacity visibility or delegate access to specific accounts while maintaining appropriate permissions and access controls.</p><p><strong>Now available</strong><br />EC2 Capacity Manager eliminates the operational overhead of collecting and analyzing capacity data from multiple sources. 
The service provides automated optimization opportunities, centralized multi-account visibility, and direct access to capacity management tools. You can reduce manual analysis time while improving capacity utilization and cost optimization across your EC2 infrastructure.</p><p>Amazon EC2 Capacity Manager is available at no additional cost. To begin using Amazon EC2 Capacity Manager, visit the <a href="https://console.aws.amazon.com/ec2/">Amazon EC2 console</a> or access the service APIs. The service is available in all commercial AWS Regions.</p><p>To learn more, visit the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-manager.html">EC2 Capacity Manager documentation</a>.</p><a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="0dd92c18-c897-42a1-b9d9-880bc415a8d9" data-title="Monitor, analyze, and manage capacity usage from a single interface with Amazon EC2 Capacity Manager" data-url="https://aws.amazon.com/blogs/aws/monitor-analyze-and-manage-capacity-usage-from-a-single-interface-with-amazon-ec2-capacity-manager/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/monitor-analyze-and-manage-capacity-usage-from-a-single-interface-with-amazon-ec2-capacity-manager/"/>
    <updated>2025-10-16T17:48:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-amazon-ebs-volume-clones-create-instant-copies-of-your-ebs-volumes/</id>
    <title><![CDATA[Introducing Amazon EBS Volume Clones: Create instant copies of your EBS volumes]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>As someone who used to work at <a href="https://en.wikipedia.org/wiki/Sun_Microsystems">Sun Microsystems</a>, where <a href="https://en.wikipedia.org/wiki/ZFS">ZFS</a> was invented, I’ve always loved working with storage systems that offer instant volume copies for my development and testing needs.</p><p>Today, I’m excited to share that AWS is bringing similar capabilities to <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a> with the launch of Amazon EBS Volume Clones, a new capability that lets you create instant point-in-time copies of your EBS volumes within the same Availability Zone.</p><p>Many customers need to create copies of their production data to support development and testing activities in a separate nonproduction environment. Until now, this process required taking an EBS snapshot (stored in <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a>) and then creating a new volume from that snapshot. Although this approach works, the process creates operational overhead due to multiple steps.</p><p>With Amazon EBS Volume Clones, you can now create copies of your EBS volumes with a single API call or console click. The copied volumes are available within seconds and provide immediate access to your data with single-digit millisecond latency. 
This makes Volume Clones particularly useful for quickly setting up test environments with production data or creating temporary copies of databases for development purposes.</p><p><strong>Let me show you how Volume Clones works<br /></strong> For this post, I created a small <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> instance, with an attached volume. I created a file on the root file system with the command <code>echo "Hello CopyVolumes" &gt; hello.txt</code>.</p><p>To initiate the copy, I open a browser on the <a href="https://console.aws.amazon.com">AWS Management Console</a> and I navigate to <strong>EC2</strong>, <strong>Elastic Block Store</strong>, <strong>Volumes</strong>. I select the volume I want to copy.</p><p>Note that, at the time of publication of this post, only encrypted volumes can be copied.</p><p>On the <strong>Actions</strong> menu, I choose the <strong>Copy Volume</strong> option.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025-10-06_15-35-57.png"><img class="aligncenter size-full wp-image-99703" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025-10-06_15-35-57.png" alt="Copy Volume - initiate" width="800" height="433" /></a></p><p>Next, I choose the details of the target volume. I can change the <strong>Volume type</strong> and adjust the <strong>Size</strong>, <strong>IOPS</strong>, and <strong>Throughput</strong> parameters. 
I choose <strong>Copy volume</strong> to start the Volume Clone operation.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025-10-06_15-36-22.png"><img class="aligncenter size-full wp-image-99707" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025-10-06_15-36-22.png" alt="Copy Volume - Parameters" width="800" height="807" /></a></p><p>The copied volume enters the <strong>Creating</strong> state and becomes available within seconds. I can then attach it to an EC2 instance and start using it immediately.</p><p>Data blocks are copied from the source volume and written to the volume copy in the background. The volume remains in the <strong>Initializing</strong> state until the process is complete. I can monitor its progress with the <a href="https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVolumeStatus.html"><code>describe-volume-status</code> API</a>. The initializing operation doesn’t affect the performance of the source volume. I can continue using it normally during the copy process.</p><p>I love that the copied volume is available immediately. I don’t need to wait for its initialization to complete. 
During the initialization phase, my copied volume delivers performance based on the lowest of: a baseline of 3,000 IOPS and 125 MiB/s, the source volume’s provisioned performance, or the copied volume’s provisioned performance.</p><p>After initialization is completed, the copied volume becomes fully independent of the source volume and delivers its full provisioned performance.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025-10-07_11-12-41.png"><img class="aligncenter size-full wp-image-99710" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025-10-07_11-12-41.png" alt="Copy Volume - Initializing" width="800" height="310" /></a>Alternatively, I can use the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a> to initiate the copy:</p><pre class="lang-bash">aws ec2 copy-volumes                          \
     --source-volume-id vol-1234567890abcdef0 \
     --size 500                               \
--volume-type gp3</pre><p>After the volume copy is created, I attach it to my EC2 instance and mount it. I can check that the file I created at the start is present.</p><p>First, I attach the volume from my laptop, using the <code>attach-volume</code> command:</p><pre class="lang-bash">aws ec2 attach-volume \
         --volume-id 'vol-09b700e3a23a9b4ad' \
         --instance-id 'i-079e6504ad25b029e'   \
         --device '/dev/sdb'</pre><p>Then, I connect to the instance, and I type these commands:</p><pre class="lang-bash">$ sudo lsblk -f
NAME          FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
nvme0n1                                                                              
├─nvme0n1p1   xfs          /     49e26d9d-0a9d-4667-b93e-a23d1de8eacd    6.2G    22% /
└─nvme0n1p128 vfat   FAT16       3105-2F44                               8.6M    14% /boot/efi
nvme1n1                                                                              
├─nvme1n1p1   xfs          /     49e26d9d-0a9d-4667-b93e-a23d1de8eacd                
└─nvme1n1p128 vfat   FAT16       3105-2F44     
$ sudo mount -t xfs /dev/nvme1n1p1 /data
$ df -h
Filesystem        Size  Used Avail Use% Mounted on
devtmpfs          4.0M     0  4.0M   0% /dev
tmpfs             924M     0  924M   0% /dev/shm
tmpfs             370M  476K  369M   1% /run
/dev/nvme0n1p1    8.0G  1.8G  6.2G  22% /
tmpfs             924M     0  924M   0% /tmp
/dev/nvme0n1p128   10M  1.4M  8.7M  14% /boot/efi
tmpfs             185M     0  185M   0% /run/user/1000
/dev/nvme1n1p1    8.0G  1.8G  6.2G  22% /data
$ cat /data/home/ec2-user/hello.txt 
Hello CopyVolumes</pre><p><strong>Things to know<br /></strong> Volume Clones creates copies within the same Availability Zone as your source volume. You can create copies from encrypted volumes only, and the size of your copy must be equal to or greater than the source volume.</p><p>Volume Clones creates crash-consistent copies of your volumes, exactly like snapshots. For application consistency, you need to pause application I/O operations before creating the copy. For example, with PostgreSQL databases, you can use the <code>pg_start_backup()</code> and <code>pg_stop_backup()</code> functions to pause writes and create a consistent copy. At the operating system level on Linux with XFS, you can use the <code>xfs_freeze</code> command to temporarily suspend and resume access to the file system and ensure all cached updates are written to disk.</p><p>Although Volume Clones creates point-in-time copies, it complements rather than replaces EBS snapshots for backup purposes. EBS snapshots remain the recommended solution for data backup and protection against AZ-level and volume failures. Snapshots provide incremental backups to Amazon S3 with 11 nines of durability, compared to Volume Clones which maintains EBS volume durability (99.999% for io2, 99.9% for other volume types). Consider using Volume Clones specifically for test and development environment scenarios where you need instant access to volume copies.</p><p>Copied volumes exist independently of their source volumes and continue to incur standard EBS volume charges until you delete them. To manage costs effectively, implement governance rules to identify and remove copied volumes that are no longer needed for your development or testing activities.</p><p><strong>Pricing and availability<br /></strong> Volume Clones supports all EBS volume types and works with volumes in the same AWS account and Availability Zone. 
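</p><p>The initialization performance rule described earlier reduces to taking the minimum of three limits. A short illustrative sketch, with made-up provisioned values:</p>

```python
# Illustrative only: effective IOPS while a copied volume initializes is the
# minimum of the stated baseline, the source volume's provisioned IOPS, and
# the copied volume's provisioned IOPS. The same logic applies to throughput
# with the 125 MiB/s baseline. The example values below are invented.
BASELINE_IOPS = 3_000

def initializing_iops(source_iops: int, copy_iops: int) -> int:
    return min(BASELINE_IOPS, source_iops, copy_iops)

# A 16,000 IOPS source copied to a 10,000 IOPS volume is held to the
# 3,000 IOPS baseline until initialization completes.
print(initializing_iops(16_000, 10_000))  # 3000
```

<p>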
This new capability is available in all AWS commercial <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">Regions</a>, selected <a href="https://aws.amazon.com/about-aws/global-infrastructure/localzones/locations/">Local Zones</a>, and in the <a href="https://aws.amazon.com/govcloud-us/">AWS GovCloud (US)</a>.</p><p>For pricing, you’re charged a one-time fee per GiB of data on the source volume at initiation and standard EBS pricing for the new volume.</p><p>I find Volume Clones particularly valuable for database workloads and continuous integration (CI) scenarios. For instance, you can quickly create a copy of your production database for testing new features or troubleshooting issues without impacting your production environment or waiting for data to hydrate from Amazon S3.</p><p>To get started with Amazon EBS Volume Clones, visit the <a href="https://console.aws.amazon.com/ec2/home#Volumes:">Amazon EBS section on the console</a> or check out the <a href="https://docs.aws.amazon.com/ebs/latest/userguide/ebs-copying-volume.html">EBS documentation</a>. I look forward to hearing how you use this capability to improve your development workflows.</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="cb430914-91d2-4e69-b65e-b09f93b46939" data-title="Introducing Amazon EBS Volume Clones: Create instant copies of your EBS volumes" data-url="https://aws.amazon.com/blogs/aws/introducing-amazon-ebs-volume-clones-create-instant-copies-of-your-ebs-volumes/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-amazon-ebs-volume-clones-create-instant-copies-of-your-ebs-volumes/"/>
    <updated>2025-10-14T23:35:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-transfer-family-sftp-connectors-now-support-vpc-based-connectivity/</id>
    <title><![CDATA[AWS Transfer Family SFTP connectors now support VPC-based connectivity]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Many organizations rely on the <a href="https://aws.amazon.com/what-is/sftp/">Secure File Transfer Protocol (SFTP)</a> as the industry standard for exchanging critical business data. Traditionally, securely connecting to private SFTP servers required custom infrastructure, manual scripting, or exposing endpoints to the public internet.</p><p>Today, <a href="https://aws.amazon.com/aws-transfer-family/">AWS Transfer Family</a> <a href="https://docs.aws.amazon.com/transfer/latest/userguide/creating-connectors.html">SFTP connectors</a> now support connectivity to remote SFTP servers through <a href="https://aws.amazon.com/vpc/?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">Amazon Virtual Private Cloud (Amazon VPC)</a> environments. You can transfer files between <a href="https://aws.amazon.com/s3/?nc2=type_a&amp;?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a> and private or public SFTP servers while applying the security controls and network configurations already defined in your VPC. This capability helps you integrate data sources across on-premises environments, partner-hosted private servers, or internet-facing endpoints, with the operational simplicity of a fully managed <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> service.</p><p><strong class="c6">New capabilities with SFTP connectors<br /></strong> The following are the key enhancements:</p><ul><li><strong>Connect to private SFTP servers</strong> – SFTP connectors can now reach endpoints that are only accessible within your AWS VPC connection. These include servers hosted in your VPC or a shared VPC, on-premises systems connected over <a href="https://aws.amazon.com/directconnect/?nc2=type_a&amp;?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">AWS Direct Connect</a>, and partner-hosted servers connected through VPN tunnels.</li>
<li><strong>Security and compliance</strong> – All file transfers are routed through the security controls already applied in your VPC, such as <a href="https://aws.amazon.com/network-firewall/?nc2=h_prod_se_netf&amp;?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">AWS Network Firewall</a> or centralized ingress and egress inspection. Private SFTP servers remain private and don’t need to be exposed to the internet. You can also present static Elastic IP or bring your own IP (BYOIP) addresses to meet partner allowlist requirements.</li>
<li><strong>Performance and simplicity</strong> – By using your own network resources such as NAT gateways, AWS Direct Connect, or VPN connections, connectors can take advantage of higher bandwidth capacity for large-scale transfers. You can configure connectors in minutes through the <a href="https://console.aws.amazon.com/?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">AWS Command Line Interface (AWS CLI)</a>, or <a href="https://aws.amazon.com/tools/?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">AWS SDKs</a> without building custom scripts or third-party tools.</li>
</ul><p><strong class="c6">How VPC-based SFTP connections work<br /></strong> SFTP connectors use <a href="https://aws.amazon.com/vpc/lattice/">Amazon VPC Lattice</a> resources to establish secure connectivity through your VPC. Key constructs include a <strong>resource configuration</strong> and a <strong>resource gateway</strong>. The resource configuration represents the target SFTP server, which you specify using a private IP address or public DNS name. The resource gateway provides SFTP connector access to these configurations, enabling file transfers to flow through your VPC and its security controls.</p><p>The following architecture diagram illustrates how traffic flows between Amazon S3 and remote SFTP servers. <a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/Screenshot-2025-10-02-at-22.53.51.png"><img class="aligncenter size-full wp-image-99633" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/Screenshot-2025-10-02-at-22.53.51.png" alt="" width="1968" height="926" /></a>As shown in the architecture, traffic flows from Amazon S3 through the SFTP connector into your VPC. A resource gateway is the entry point that handles inbound connections from the connector to your VPC resources. Outbound traffic is routed through your configured egress path, using Amazon VPC NAT gateways with Elastic IPs for public servers or AWS Direct Connect and VPN connections for private servers. You can use existing IP addresses from your VPC CIDR range, simplifying partner server allowlists. 
Centralized firewalls in the VPC enforce security policies, and customer-owned NAT gateways provide higher bandwidth for large-scale transfers.</p><p><strong class="c6">When to use this feature<br /></strong> With this capability, developers and IT administrators can simplify workflows while meeting security and compliance requirements across a range of scenarios:</p><ul><li><strong>Hybrid environments</strong> – Transfer files between Amazon S3 and on-premises SFTP servers using AWS Direct Connect or <a href="https://aws.amazon.com/vpn/site-to-site-vpn/">AWS Site-to-Site VPN</a>, without exposing endpoints to the internet.</li>
<li><strong>Partner integrations</strong> – Connect with business partners’ SFTP servers that are only accessible through private VPN tunnels or shared VPCs. This avoids building custom scripts or managing third-party tools, reducing operational complexity.</li>
<li><strong>Regulated industries</strong> – Route file transfers through centralized firewalls and inspection points in VPCs to comply with financial services, government, or healthcare security requirements.</li>
<li><strong>High-throughput transfers</strong> – Use your own network configurations such as NAT gateways, AWS Direct Connect, or VPN connections with Elastic IP or BYOIP to handle large-scale, high-bandwidth transfers while retaining IP addresses already on partner allowlists.</li>
<li><strong>Unified file transfer solution</strong> – Standardize on Transfer Family for both internal and external SFTP connectivity, reducing fragmentation across file transfer tools.</li>
</ul><p><strong class="c6">Start building with SFTP connectors<br /></strong> To begin transferring files with SFTP connectors through my VPC environment, I follow these steps:</p><p>First, I configure my VPC Lattice resources. In the <a href="https://us-east-1.console.aws.amazon.com/vpcconsole/home/?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">Amazon VPC console</a>, under <strong>PrivateLink and Lattice</strong> in the navigation pane, I choose <strong>Resource gateways</strong>, then choose <strong>Create resource gateway</strong> to create one that acts as the ingress point into my VPC. <a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/Create-or-select-a-resource-gateway-1.png"><img class="aligncenter size-full wp-image-99586" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/Create-or-select-a-resource-gateway-1.png" alt="" width="3836" height="1074" /></a>Next, under <strong>PrivateLink and Lattice</strong> in the navigation pane, I choose <strong>Resource configuration</strong> and choose <strong>Create resource configuration</strong> to create a resource configuration for my target SFTP server. I specify the private IP address or public DNS name and the port (typically 22). <a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/create-resouce-configurations.png"><img class="aligncenter size-full wp-image-99587" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/create-resouce-configurations.png" alt="" width="3838" height="990" /></a></p><p>Then, I configure <a href="https://aws.amazon.com/iam/?nc2=type_a&amp;?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">AWS Identity and Access Management (IAM)</a> permissions. 
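</p><p>One piece of this configuration is the role’s trust policy, which lets AWS Transfer Family assume the role. It follows the standard service-principal pattern; a minimal sketch:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "transfer.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

<p>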
I ensure that the IAM role used for connector creation has <code>transfer:*</code> permissions and VPC Lattice permissions (<code>vpc-lattice:CreateServiceNetworkResourceAssociation</code>, <code>vpc-lattice:GetResourceConfiguration</code>, <code>vpc-lattice:AssociateViaAWSService</code>). I update the trust policy on the IAM role to specify <code>transfer.amazonaws.com</code> as a trusted principal. This enables AWS Transfer Family to assume the role when creating and managing my SFTP connectors.</p><p>After that, I create an SFTP connector through the <a href="https://console.aws.amazon.com/transfer/home?refid=30641bb5-5f59-4f87-9a27-a89f5ad26ab6">AWS Transfer Family console</a>. I choose <strong>SFTP Connectors</strong> and then choose <strong>Create SFTP connector</strong>. <a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/create-SFTP-connector-1.png"><img class="aligncenter size-full wp-image-99583" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/create-SFTP-connector-1.png" alt="" width="1457" height="331" /></a>In the <strong>Connector configuration</strong> section, I select <strong>VPC Lattice</strong> as the egress type, then provide the Amazon Resource Name (ARN) of the <strong>Resource Configuration</strong>, <strong>Access role</strong>, and <strong>Connector credentials</strong>. Optionally, I can include a trusted host key for enhanced security or override the default port if my SFTP server uses a nonstandard port.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/configure-SFTP-connector-1.png"><img class="aligncenter size-full wp-image-99549" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/configure-SFTP-connector-1.png" alt="" width="2515" height="971" /></a>Next, I test the connection. 
On the <strong>Actions</strong> menu, I choose <strong>Test connection</strong> to confirm that the connector can reach the target SFTP server.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/03/test-SFTP-connector-2.png"><img class="aligncenter size-full wp-image-99641" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/03/test-SFTP-connector-2.png" alt="" width="3006" height="608" /></a>Finally, after the connector status is <strong>ACTIVE</strong>, I can begin file operations with my remote SFTP server programmatically by calling Transfer Family APIs such as <code>StartDirectoryListing</code>, <code>StartFileTransfer</code>, <code>StartRemoteDelete</code>, or <code>StartRemoteMove</code>. All traffic is routed through my VPC using my configured resources such as NAT gateways, AWS Direct Connect, or VPN connections together with my IP addresses and security controls.</p><p>For the complete set of options and advanced workflows, refer to the <a href="https://docs.aws.amazon.com/transfer/">AWS Transfer Family documentation</a>.</p><p><strong class="c6">Now available</strong></p><p>SFTP connectors with VPC-based connectivity are now available in 21 <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a>. Check the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el">AWS Services by Region</a> for the latest supported AWS Regions. 
You can now securely connect AWS Transfer Family SFTP connectors to private, on-premises, or internet-facing servers using your own VPC resources such as NAT gateways, Elastic IPs, and network firewalls.</p><p>— <a href="https://www.linkedin.com/in/zhengyubin714/">Betty</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="e8345089-4614-4b42-b43b-c6a94dfc2e10" data-title="AWS Transfer Family SFTP connectors now support VPC-based connectivity" data-url="https://aws.amazon.com/blogs/aws/aws-transfer-family-sftp-connectors-now-support-vpc-based-connectivity/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-transfer-family-sftp-connectors-now-support-vpc-based-connectivity/"/>
    <updated>2025-10-14T21:27:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-quick-suite-amazon-ec2-amazon-eks-and-more-october-13-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: Amazon Quick Suite, Amazon EC2, Amazon EKS, and more (October 13, 2025)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>This week I was at the inaugural <a href="https://www.meetup.com/awsuguk-ai-in-practice/">AWS AI in Practice meetup from the AWS User Group UK</a>. AI-assisted software development and agents were the focus of the evening! Next week I’ll be in Italy for <a href="https://conferences.codemotion.com/milan2025/agenda/">Codemotion</a> (Milan) and an <a href="https://www.meetup.com/amazon-web-services-rome/events/311302816">AWS User Group meetup</a> (Rome). I am also excited to <a href="https://aws.amazon.com/blogs/aws/reimagine-the-way-you-work-with-ai-agents-in-amazon-quick-suite/">try the new Amazon Quick Suite</a> that brings AI-powered research, business intelligence, and automation capabilities into a single workspace.</p><p><strong>Last week’s launches</strong><br />Here are the launches that got my attention this week:</p><ul><li><a href="https://aws.amazon.com/quicksuite/">Amazon Quick Suite</a> – A new agentic teammate that quickly answers your questions at work and turns those insights into actions for you. <a href="https://aws.amazon.com/blogs/aws/reimagine-the-way-you-work-with-ai-agents-in-amazon-quick-suite/">Read more in Esra’s launch post</a>.</li>
<li><a href="https://aws.amazon.com/ec2/">Amazon EC2</a> – General-purpose <a href="https://aws.amazon.com/blogs/aws/new-general-purpose-amazon-ec2-m8a-instances-are-now-available/">M8a instances</a> powered by the 5th Generation AMD EPYC (codename Turin) processors and compute-optimized <a href="https://aws.amazon.com/blogs/aws/introducing-new-compute-optimized-amazon-ec2-c8i-and-c8i-flex-instances/">C8i and C8i-flex instances</a> powered by custom Intel Xeon 6 processors are now available.</li>
<li><a href="https://aws.amazon.com/eks/">Amazon EKS</a> – EKS and <a href="https://aws.amazon.com/eks/eks-distro/">EKS Distro</a> <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-eks-distro-kubernetes-version-1-34/">now support Kubernetes version 1.34</a> with several improvements.</li>
<li><a href="https://aws.amazon.com/iam/identity-center/">AWS IAM Identity Center</a> – AWS Key Management Service keys can now be used to <a href="https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-customer-managed-kms-keys-for-encryption-at-rest/">encrypt identity data stored in IAM Identity Center organization instances</a>.</li>
<li><a href="https://aws.amazon.com/vpc/lattice/">Amazon VPC Lattice</a> – You can now <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-vpc-lattice-configurable-ip-resource-gateway/">configure the number of IPv4 addresses assigned to resource gateway elastic network interfaces (ENIs)</a>. The IPv4 addresses are used for network address translation and determine the maximum number of concurrent IPv4 connections to a resource.</li>
<li><a href="https://aws.amazon.com/q/developer/">Amazon Q Developer</a> – Amazon Q Developer can help you <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-q-developer-understand-service-prices-estimate-workload-costs/">get information about AWS product and service pricing, availability, and attributes</a>, making it easier to select the right resources and estimate workload costs using natural language. <a href="https://aws.amazon.com/blogs/aws-cloud-financial-management/introducing-aws-pricing-capabilities-in-amazon-q-developer-ask-questions-get-instant-cost-insights/">More info in this blog post</a>.</li>
<li><a href="https://aws.amazon.com/rds/db2/">Amazon RDS for Db2</a> – You can now <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-rds-for-db2-native-database-backup/">perform native database-level backups</a>, offering greater flexibility in database management and migration.</li>
<li><a href="https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html">AWS Service Quotas</a> – Get notified of your quota usage with <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/automatic-quota-management-service-quotas/">automatic quota management</a>. Configure your preferred notification channels, such as email, SMS, or Slack. Notifications are also available in <a href="https://docs.aws.amazon.com/health/latest/ug/what-is-aws-health.html">AWS Health</a>, and you can subscribe to related <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a> events for automation workflows.</li>
<li><a href="https://aws.amazon.com/connect/">Amazon Connect</a> – You can now <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-connect-cases-api-link-search/">programmatically enrich case data with the new case APIs</a> to link related cases, add custom related items, and search across them. You can now also <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-connect-enables-service-level-calculation-configuration/">customize service level calculations</a> to your specific needs. New capabilities that have just been introduced include <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-connect-copy-bulk-edit-agent-scheduling/">copy and bulk edit of agent scheduling configuration</a> and <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-connect-agent-adherence-notifications/">agent schedule adherence notifications</a>.</li>
<li><a href="https://aws.amazon.com/vpn/">AWS Client VPN</a> – Now <a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-client-vpn-macos-tahoe/">supports macOS Tahoe</a>.</li>
</ul><p><strong>Additional updates</strong><br />Here are some additional projects, blog posts, and news items that I found interesting:</p><p><strong>Upcoming AWS events</strong><br />Check your calendars so that you can sign up for these upcoming events:</p><ul><li><a href="https://info.devpost.com/blog/aws-ai-agent-global-hackathon?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS AI Agent Global Hackathon</a> – This is your chance to dive deep into our powerful generative AI stack and create something truly awesome. From September 8th to October 20th, you have the opportunity to create AI agents using the AWS suite of AI services, competing for over $45,000 in prizes and exclusive go-to-market opportunities.</li>
<li><a href="https://aws.amazon.com/startups/lp/aws-gen-ai-lofts?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Gen AI Lofts</a> – You can learn about AWS AI products and services in exclusive sessions, meet industry-leading experts, and have valuable networking opportunities with investors and peers. Register in your nearest city: <a href="https://aws.amazon.com/startups/lp/aws-gen-ai-loft-paris?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Paris</a> (October 7–21), <a href="https://aws.amazon.com/startups/lp/aws-gen-ai-loft-london?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">London</a> (October 13–21), and <a href="https://aws.amazon.com/startups/lp/aws-gen-ai-loft-tel-aviv?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Tel Aviv</a> (November 11–19).</li>
<li><a href="https://aws.amazon.com/events/community-day/">AWS Community Days</a> – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: <a href="https://awscommunity.eu/">Budapest</a> (October 16).</li>
</ul><p>Join the <a href="https://builder.aws.com/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">AWS Builder Center</a> to learn, build, and connect with builders in the AWS community. Browse <a href="https://aws.amazon.com/events/explore-aws-events/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">upcoming in-person events</a>, <a href="https://aws.amazon.com/developer/events/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el">developer-focused events</a>, and <a href="https://aws.amazon.com/startups/events?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">events for startups</a>.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Weekly Roundup</a>!</p><p>– <a href="https://x.com/danilop">Danilo</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-quick-suite-amazon-ec2-amazon-eks-and-more-october-13-2025/"/>
    <updated>2025-10-13T18:24:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/reimagine-the-way-you-work-with-ai-agents-in-amazon-quick-suite/</id>
    <title><![CDATA[Announcing Amazon Quick Suite: your agentic teammate for answering questions and taking action]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing <a href="https://aws.amazon.com/quicksuite/">Amazon Quick Suite</a>, a new agentic teammate that quickly answers your questions at work and turns those insights into actions for you. Instead of switching between multiple applications to gather data, find important signals and trends, and complete manual tasks, Quick Suite brings AI-powered research, business intelligence, and automation capabilities into a single workspace. You can now analyze data through natural language queries, find critical information across enterprise and external sources in minutes, and automate processes from simple tasks to complex multi-department workflows.</p><p>Here’s a look into Quick Suite.</p><p><img class="aligncenter size-full wp-image-99599" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/01/2025_quick-suite_1.png" alt="" width="1920" height="974" /></p><p>Business users often need to gather data across multiple applications—pulling customer details, checking performance metrics, reviewing internal product information, and performing competitive intelligence. This fragmented process often requires consultation with specialized teams to analyze advanced datasets, and in some cases, must be repeated regularly, reducing efficiency and leading to incomplete insights for decision-making.</p><p>Quick Suite helps you overcome these challenges by combining agentic teammates for research, business intelligence, and automation into a unified digital workspace for your day-to-day work.</p><p><strong>Integrated capabilities that power productivity </strong><br />Quick Suite includes the following integrated capabilities:</p><ul><li><strong>Research –</strong> Quick Research accelerates complex research by combining enterprise knowledge, premium third-party data, and data from the internet for more comprehensive insights.</li>
<li><strong>Business intelligence –</strong> Quick Sight provides AI-powered business intelligence capabilities that transform data into actionable insights through natural language queries and interactive visualizations, helping everyone make faster decisions and achieve better business outcomes.</li>
<li><strong>Automation –</strong> Quick Flows and Quick Automate help users and technical teams to automate any business process from simple, routine tasks to complex multi-department workflows, enabling faster execution and reducing manual work across the organization.</li>
</ul><p>Let’s dive into some of these key capabilities.</p><p><strong>Quick Index: Your unified knowledge foundation</strong><br />Quick Index creates a secure, searchable repository that consolidates documents, files, and application data to power AI-driven insights and responses across your organization.</p><p>As a foundational component of Quick Suite, Quick Index operates in the background to bring together all your data—from databases and data warehouses to documents and email. This creates a single, intelligent knowledge base that makes AI responses more accurate and reduces time spent searching for information.</p><p>Quick Index automatically indexes and prepares any uploaded files or unstructured data you add to your Quick Suite, enabling efficient searching, sorting, and data access. For example, when you search for a specific project update, Quick Index instantly returns results from uploaded documents, meeting notes, project files, and reference materials—all from one unified search instead of checking different repositories and file systems.</p><p>To learn more, visit the <a href="https://aws.amazon.com/quicksuite/index/">Quick Index overview page</a>.</p><p><strong>Quick Research: From complex business challenges to expert-level insights<br /></strong> Quick Research is a powerful agent that conducts comprehensive research across your enterprise data and external sources to deliver contextualized, actionable insights in minutes or hours, work that previously could take far longer.</p><p><img class="aligncenter size-full wp-image-99699" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025_quick-suite_research_1-1.png" alt="" width="1920" height="968" /></p><p>Quick Research systematically breaks down complex questions into organized research plans. 
Starting with a simple prompt, it automatically creates detailed research frameworks that outline the approach and data sources needed for comprehensive analysis.</p><p><img class="aligncenter size-full wp-image-99700" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025_quick-suite_research_1-2.png" alt="" width="1920" height="968" /></p><p>After Quick Research creates the plan, you can easily refine it through natural language conversations. When you are happy with the plan, it works in the background to gather information from multiple sources, using advanced reasoning to validate findings and provide thorough analysis with citations.</p><p><img class="aligncenter size-full wp-image-99718" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/07/2025_quick-suite_research_1-3-1.png" alt="" width="1516" height="969" /></p><p><img class="aligncenter size-full wp-image-99615" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_research_5.png" alt="" width="1920" height="975" /></p><p>Quick Research integrates with your enterprise data through Quick Index, the unified knowledge foundation that connects to your dashboards, documents, databases, and external sources, including Amazon S3, Snowflake, Google Drive, and Microsoft SharePoint. Quick Research grounds key insights in their original sources and reveals clear reasoning paths, helping you verify accuracy, understand the logic behind recommendations, and present findings with confidence. You can trace findings back to their original sources and validate conclusions through the citations. 
This makes it ideal for complex topics requiring in-depth analysis.</p><p>To learn more, visit the <a href="https://aws.amazon.com/quicksuite/research/">Quick Research overview page</a>.</p><p><strong>Quick Sight: AI-powered business intelligence<br /></strong> Quick Sight provides AI-powered business intelligence capabilities that transform data into actionable insights through natural language queries and interactive visualizations.</p><p><img class="aligncenter size-full wp-image-99630" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_quicksight-1.png" alt="" width="3024" height="1716" /></p><p>You can create dashboards and executive summaries using conversational prompts, reducing dashboard development time while making advanced analytics accessible without specialized skills.</p><p><img class="aligncenter size-full wp-image-99671" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/05/2025_quicksuite_quicksight_0.gif" alt="" width="1915" height="1080" /></p><p>Quick Sight helps you ask questions about your data in natural language and receive instant visualizations, executive summaries, and insights. 
This generative AI integration provides you with answers from your dashboards and datasets without requiring technical expertise.</p><p><img class="aligncenter wp-image-99672 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/05/2025_quicksuite_quicksight_1-1.gif" alt="" width="1300" height="731" /></p><p>Using the scenarios capability, you can perform what-if analysis in natural language with step-by-step guidance, exploring complex business scenarios and finding answers faster than before.</p><p><img class="aligncenter wp-image-99674 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/05/2025_quicksuite_quicksight_2-1.gif" alt="" width="1230" height="691" /></p><p>Additionally, you can respond to insights with one-click actions by creating tickets, sending alerts, updating records, or triggering automated workflows directly from your dashboards without switching applications.</p><p><img class="aligncenter size-full wp-image-99675" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/05/2025_quicksuite_quicksight_3-1.gif" alt="" width="1700" height="958" /></p><p>To learn more, visit the <a href="https://aws.amazon.com/quicksuite/quicksight">Quick Sight overview page</a>.</p><p><strong>Quick Flows: Automation for everyone<br /></strong> With Quick Flows, any user can automate repetitive tasks by describing their workflow using natural language without requiring any technical knowledge. 
Quick Flows fetches information from internal and external sources, takes action in business applications, generates content, and handles process-specific requirements.</p><p><img class="aligncenter size-full wp-image-99617" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_flows_3.png" alt="" width="1920" height="914" /></p><p>Starting with straightforward business requirements, it creates a multi-step flow including input steps for gathering information, reasoning groups for AI-powered processing, and output steps for generating and presenting results.</p><p><img class="aligncenter size-full wp-image-99618" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_flows_4.png" alt="" width="1920" height="969" /></p><p>After the flow is configured, you can share it with your coworkers and other teams in a single click. To execute the flow, users can open it from the library or invoke it from chat, provide the necessary inputs, and then chat with the agent to refine the outputs and further customize the results.</p><p><img class="aligncenter size-full wp-image-99619" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_flows_5.png" alt="" width="1920" height="975" /></p><p>To learn more, visit the <a href="https://aws.amazon.com/quicksuite/flows/">Quick Flows overview page</a>.</p><p><strong>Quick Automate: Enterprise-scale process automation<br /></strong> Quick Automate helps technical teams build and deploy sophisticated automation for complex, multistep processes that span departments, systems, and third-party integrations. 
Using AI-powered natural language processing, Quick Automate transforms complex business processes into multi-agent workflows that can be created merely by describing what you want to automate or uploading process documentation.</p><p><img class="aligncenter size-full wp-image-99623" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_automate_1.png" alt="" width="1920" height="968" /></p><p>While Quick Flows handles straightforward workflows, Quick Automate is designed for comprehensive and complex business processes like customer onboarding, procurement automations, or compliance procedures that involve multiple approval steps, system integrations, and cross-departmental coordination. Quick Automate offers advanced orchestration capabilities with extensive monitoring, debugging, versioning, and deployment features.</p><p><img class="aligncenter size-full wp-image-99624" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_automate_4.png" alt="" width="1920" height="969" /></p><p>Quick Automate then generates a comprehensive automation plan with detailed steps and actions. 
You will find a UI agent that understands natural language instructions to autonomously navigate websites, complete form inputs, extract data, and produce structured outputs for downstream automation steps.</p><p>Additionally, you can define a custom agent, complete with instructions, knowledge, and tools, to handle process-specific tasks using the visual building experience – no code required.</p><p><img class="aligncenter size-full wp-image-99739" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/09/2025_quicksuite_quickautomate-1-1.gif" alt="" width="1000" height="485" /></p><p>Quick Automate includes enterprise-grade features such as user role management and human-in-the-loop capabilities that route specific tasks to users or groups for review and approval before continuing workflows. The service provides comprehensive observability with real-time monitoring, success rate tracking, and audit trails for compliance and governance.</p><p>To learn more, visit the <a href="https://aws.amazon.com/quicksuite/automate/">Quick Automate overview page</a>.</p><p><strong>Additional foundational capabilities<br /></strong> Quick Suite includes other foundational capabilities that deliver seamless data organization and contextual AI interactions across your enterprise.</p><p><strong>Spaces</strong> – Spaces provide a straightforward way for every business user to add their own context by uploading files or connecting to datasets and repositories specific to their work or to a particular function. For example, you might create a space for quarterly planning that includes budget spreadsheets, market research reports, and strategic planning documents. Or you could set up a product launch space that connects to your project management system and customer feedback databases. 
Spaces can scale from personal use to enterprise-wide deployment while maintaining access permissions and seamless integration with Quick Suite capabilities.</p><p><img class="aligncenter size-full wp-image-99627" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_space-1.png" alt="" width="2834" height="1634" /></p><p><strong>Chat agents</strong> – Quick Suite includes insights agents that you can use to interact with your data and workflows through natural language. Quick Suite includes a built-in agent to answer questions across all of your data and custom chat agents that you can configure with specific expertise and business context. Custom chat agents can be tailored for particular departments or use cases—such as a sales agent connected to your product catalog data and pricing information stored in a space or a compliance agent configured with your regulatory requirements and actions to request approvals.</p><p><img class="aligncenter size-full wp-image-99628" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/10/02/2025_quick-suite_chat-agent-2.png" alt="" width="1920" height="969" /></p><p><strong>Additional things to know<br /></strong> <strong>If you’re an existing Amazon QuickSight customer –</strong> Amazon QuickSight customers will be upgraded to Quick Suite, a unified digital workspace that includes all your existing QuickSight business intelligence capabilities (now called “Quick Sight”) plus new agentic AI capabilities. This is an interface and capability change—your data connectivity, user access, content, security controls, user permissions, and privacy settings remain exactly the same. No data is moved, migrated, or changed.</p><p>Quick Suite offers per-user subscription-based pricing with consumption-based charges for the Quick Index and other optional features. 
You can find more details on the <a href="https://aws.amazon.com/quicksuite/pricing/">Quick Suite pricing page</a>.</p><p><strong>Now available</strong><br />Amazon Quick Suite gives you a set of agentic teammates that help you get the answers you need from all your data and move instantly from answers to action, so you can focus on high-value activities that drive better business and customer outcomes.</p><p>Visit the <a href="https://aws.amazon.com/quicksuite/getting-started/">getting started page</a> to start using Amazon Quick Suite today.</p><p>Happy building<br />— Esra and Donnie</p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/reimagine-the-way-you-work-with-ai-agents-in-amazon-quick-suite/"/>
    <updated>2025-10-09T17:42:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/new-general-purpose-amazon-ec2-m8a-instances-are-now-available/</id>
    <title><![CDATA[New general-purpose Amazon EC2 M8a instances are now available]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing the availability of <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> M8a instances, the latest addition to the general-purpose M instance family. These instances are powered by the <a href="https://www.amd.com/en/products/processors/server/epyc/9005-series.html">5th Generation AMD EPYC (codename Turin) processors</a> with a maximum frequency of 4.5 GHz. Customers can expect up to 30% higher performance and up to 19% better price performance compared to M7a instances. They also provide higher memory bandwidth, improved networking and storage throughput, and flexible configuration options for a broad set of general-purpose workloads.</p><p><strong class="c6">Improvements in M8a<br /></strong> M8a instances deliver up to 30% better performance per vCPU compared to M7a instances, making them ideal for applications that benefit from high performance and high throughput, such as financial applications, gaming, rendering, application servers, simulation modeling, midsize data stores, application development environments, and caching fleets.</p><p>They provide 45% more memory bandwidth compared to M7a instances, accelerating in-memory databases, distributed caches, and real-time analytics.</p><p>For workloads with high I/O requirements, M8a instances provide up to 75 Gbps of networking bandwidth and 60 Gbps of <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a> bandwidth, a 50% improvement over the previous generation. These enhancements support modern applications that rely on rapid data transfer and low-latency network communication.</p><p>Each vCPU on an M8a instance corresponds to a physical CPU core, meaning there is no simultaneous multithreading (SMT). 
In application benchmarks, M8a instances delivered up to 60% faster performance for <a href="https://groovy-lang.org/">GroovyJVM</a> and up to 39% faster performance for <a href="https://cassandra.apache.org/_/index.html">Cassandra</a> compared to M7a instances.</p><p>M8a instances support <a href="https://docs.aws.amazon.com/ebs/latest/userguide/instance-bandwidth-configuration.html">instance bandwidth configuration (IBC)</a>, which lets you allocate resources between network and EBS bandwidth. This gives customers the flexibility to scale network or EBS bandwidth by up to 25% and improve database performance, query processing, and logging speeds.</p><p>M8a is available in ten virtualized sizes and two bare metal options (<strong>metal-24xl</strong> and <strong>metal-48xl</strong>), providing deployment choices that scale from small applications to large enterprise workloads. All of these improvements are built on the <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a>, which delivers low virtualization overhead, consistent performance, and advanced security across all instance sizes. These instances are built using the latest sixth generation AWS Nitro Cards, which offload and accelerate I/O for functions, increasing overall system performance.</p><p>M8a instances feature sizes of up to 192 vCPUs with 768 GiB of memory. Here are the detailed specs:</p><table class="c11"><tbody><tr class="c9"><td class="c7"><strong>M8a</strong></td>
<td class="c8"><strong>vCPUs</strong></td>
<td class="c8"><strong>Memory (GiB)</strong></td>
<td class="c8"><strong>Network bandwidth (Gbps)</strong></td>
<td><strong>EBS bandwidth (Gbps)</strong></td>
</tr><tr class="c10"><td class="c7"><strong>medium</strong></td>
<td class="c7">1</td>
<td class="c7">4</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c10"><td class="c7"><strong>large</strong></td>
<td class="c7">2</td>
<td class="c7">8</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c10"><td class="c7"><strong>xlarge</strong></td>
<td class="c7">4</td>
<td class="c7">16</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c10"><td class="c7"><strong>2xlarge</strong></td>
<td class="c7">8</td>
<td class="c7">32</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c10"><td class="c7"><strong>4xlarge</strong></td>
<td class="c7">16</td>
<td class="c7">64</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c10"><td class="c7"><strong>8xlarge</strong></td>
<td class="c7">32</td>
<td class="c7">128</td>
<td class="c7">15</td>
<td class="c7">10</td>
</tr><tr class="c10"><td class="c7"><strong>12xlarge</strong></td>
<td class="c7">48</td>
<td class="c7">192</td>
<td class="c7">22.5</td>
<td class="c7">15</td>
</tr><tr class="c10"><td class="c7"><strong>16xlarge</strong></td>
<td class="c7">64</td>
<td class="c7">256</td>
<td class="c7">30</td>
<td class="c7">20</td>
</tr><tr class="c10"><td class="c7"><strong>24xlarge</strong></td>
<td class="c7">96</td>
<td class="c7">384</td>
<td class="c7">40</td>
<td class="c7">30</td>
</tr><tr class="c10"><td class="c7"><strong>48xlarge</strong></td>
<td class="c7">192</td>
<td class="c7">768</td>
<td class="c7">75</td>
<td class="c7">60</td>
</tr><tr class="c10"><td class="c7"><strong>metal-24xl</strong></td>
<td class="c7">96</td>
<td class="c7">384</td>
<td class="c7">40</td>
<td class="c7">30</td>
</tr><tr class="c10"><td class="c7"><strong>metal-48xl</strong></td>
<td class="c7">192</td>
<td class="c7">768</td>
<td class="c7">75</td>
<td class="c7">60</td>
</tr></tbody></table><p>For a complete list of instance sizes and specifications, refer to the <a href="https://aws.amazon.com/ec2/instance-types/m8a">Amazon EC2 M8a instances page</a>.</p><p><strong class="c6">When to use M8a instances<br /></strong> M8a is a strong fit for general-purpose applications that need a balance of compute, memory, and networking. M8a instances are ideal for web and application hosting, microservices architectures, and databases where predictable performance and efficient scaling are important.</p><p>These instances are SAP certified and also well suited for enterprise workloads such as financial applications and enterprise resource planning (ERP) systems. They’re equally effective for in-memory caching and customer relationship management (CRM), in addition to development and test environments that require cost efficiency and flexibility. With this versatility, M8a supports a wide spectrum of workloads while helping customers improve price performance.</p><p><strong class="c6">Now available<br /></strong> Amazon EC2 M8a instances are available today in the US East (Ohio), US West (Oregon), and Europe (Spain) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a>. M8a instances can be purchased as <a href="https://aws.amazon.com/ec2/pricing/on-demand/">On-Demand Instances</a>, with <a href="https://aws.amazon.com/savingsplans/">Savings Plans</a>, or as <a href="https://aws.amazon.com/ec2/spot/pricing/">Spot Instances</a>. M8a instances are also available on <a href="https://aws.amazon.com/ec2/dedicated-hosts/pricing/">Dedicated Hosts</a>. 
To learn more, visit the <a href="https://aws.amazon.com/ec2/pricing">Amazon EC2 Pricing page</a>.</p><p>For more details, visit the <a href="https://aws.amazon.com/ec2/instance-types/m8a">Amazon EC2 M8a instances page</a> and send feedback to <a href="https://repost.aws/tags/TAO-wqN9fYRoyrpdULLa5y7g/amazon-ec-2/">AWS re:Post for EC2</a> or through your usual AWS support contacts.</p><p>— <a href="https://www.linkedin.com/in/zhengyubin714/">Betty</a></p></section>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/new-general-purpose-amazon-ec2-m8a-instances-are-now-available/"/>
    <updated>2025-10-08T21:03:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-new-compute-optimized-amazon-ec2-c8i-and-c8i-flex-instances/</id>
    <title><![CDATA[Introducing new compute-optimized Amazon EC2 C8i and C8i-flex instances]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>After launching <a href="https://aws.amazon.com/pm/ec2/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Amazon Elastic Compute Cloud (Amazon EC2)</a> memory-optimized <a href="https://aws.amazon.com/blogs/aws/best-performance-and-fastest-memory-with-the-new-amazon-ec2-r8i-and-r8i-flex-instances/">R8i and R8i-flex instances</a> and general-purpose <a href="https://aws.amazon.com/blogs/aws/new-general-purpose-amazon-ec2-m8i-and-m8i-flex-instances-are-now-available/">M8i and M8i-flex instances</a>, I am happy to announce the general availability of compute-optimized <a href="https://aws.amazon.com/ec2/instance-types/c8i/">C8i and C8i-flex instances</a>. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, with a sustained all-core turbo frequency of 3.9 GHz and a 2:1 ratio of memory to vCPU. They deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud.</p><p>The C8i and C8i-flex instances offer up to 15 percent better price performance and 2.5 times more memory bandwidth compared to <a href="https://aws.amazon.com/ec2/instance-types/c7i/">C7i and C7i-flex instances</a>. 
The C8i and C8i-flex instances are up to 60 percent faster for NGINX web applications, up to 40 percent faster for AI deep learning recommendation models, and 35 percent faster for Memcached stores compared to C7i and C7i-flex instances.</p><p>C8i and C8i-flex instances are ideal for running compute-intensive workloads, such as web servers, caching, Apache Kafka, Elasticsearch, batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.</p><p>Like other 8th generation instances, these instances use the new sixth generation <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro Cards</a>, delivering up to two times more network and <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (Amazon EBS)</a> bandwidth compared to the previous generation instances. They also support bandwidth configuration with 25 percent allocation adjustments between network and Amazon EBS bandwidth, enabling better database performance, query processing, and logging speeds.</p><p><strong class="c6">C8i instances</strong><br />C8i instances provide up to 384 vCPUs and 768 GiB of memory, including bare metal instances that provide dedicated access to the underlying physical hardware. These instances help you run compute-intensive workloads, such as CPU-based inference and video streaming, that need the largest instance sizes or sustained high CPU usage.</p><p>Here are the specs for C8i instances:</p><table class="c10"><tbody><tr class="c8"><td class="c7"><strong>Instance size</strong></td>
<td class="c7"><strong>vCPUs</strong></td>
<td class="c7"><strong>Memory (GiB)</strong></td>
<td class="c7"><strong>Network bandwidth (Gbps)</strong></td>
<td class="c7"><strong>EBS bandwidth (Gbps)</strong></td>
</tr><tr class="c9"><td class="c7"><strong>c8i.large</strong></td>
<td class="c7">2</td>
<td class="c7">4</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.xlarge</strong></td>
<td class="c7">4</td>
<td class="c7">8</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.2xlarge</strong></td>
<td class="c7">8</td>
<td class="c7">16</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.4xlarge</strong></td>
<td class="c7">16</td>
<td class="c7">32</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.8xlarge</strong></td>
<td class="c7">32</td>
<td class="c7">64</td>
<td class="c7">15</td>
<td class="c7">10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.12xlarge</strong></td>
<td class="c7">48</td>
<td class="c7">96</td>
<td class="c7">22.5</td>
<td class="c7">15</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.16xlarge</strong></td>
<td class="c7">64</td>
<td class="c7">128</td>
<td class="c7">30</td>
<td class="c7">20</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.24xlarge</strong></td>
<td class="c7">96</td>
<td class="c7">192</td>
<td class="c7">40</td>
<td class="c7">30</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.32xlarge</strong></td>
<td class="c7">128</td>
<td class="c7">256</td>
<td class="c7">50</td>
<td class="c7">40</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.48xlarge</strong></td>
<td class="c7">192</td>
<td class="c7">384</td>
<td class="c7">75</td>
<td class="c7">60</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.96xlarge</strong></td>
<td class="c7">384</td>
<td class="c7">768</td>
<td class="c7">100</td>
<td class="c7">80</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.metal-48xl</strong></td>
<td class="c7">192</td>
<td class="c7">384</td>
<td class="c7">75</td>
<td class="c7">60</td>
</tr><tr class="c9"><td class="c7"><strong>c8i.metal-96xl</strong></td>
<td class="c7">384</td>
<td class="c7">768</td>
<td class="c7">100</td>
<td class="c7">80</td>
</tr></tbody></table><p><strong class="c6">C8i-flex instances</strong><br />C8i-flex instances are a lower-cost variant of the C8i instances, with 5 percent better price performance at 5 percent lower prices. These instances are designed for workloads that benefit from the latest generation performance but don’t fully utilize all compute resources. These instances can reach up to the full CPU performance 95 percent of the time.</p><p>Here are the specs for the C8i-flex instances:</p><table class="c10"><tbody><tr class="c8"><td class="c7"><strong>Instance size</strong></td>
<td class="c7"><strong>vCPUs</strong></td>
<td class="c7"><strong>Memory (GiB)</strong></td>
<td class="c7"><strong>Network bandwidth (Gbps)</strong></td>
<td class="c7"><strong>EBS bandwidth (Gbps)</strong></td>
</tr><tr class="c9"><td class="c7"><strong>c8i-flex.large</strong></td>
<td class="c7">2</td>
<td class="c7">4</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i-flex.xlarge</strong></td>
<td class="c7">4</td>
<td class="c7">8</td>
<td class="c7">Up to 12.5</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i-flex.2xlarge</strong></td>
<td class="c7">8</td>
<td class="c7">16</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i-flex.4xlarge</strong></td>
<td class="c7">16</td>
<td class="c7">32</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i-flex.8xlarge</strong></td>
<td class="c7">32</td>
<td class="c7">64</td>
<td class="c7">Up to 15</td>
<td class="c7">Up to 10</td>
</tr><tr class="c9"><td class="c7"><strong>c8i-flex.12xlarge</strong></td>
<td class="c7">48</td>
<td class="c7">96</td>
<td class="c7">Up to 22.5</td>
<td class="c7">Up to 15</td>
</tr><tr class="c9"><td class="c7"><strong>c8i-flex.16xlarge</strong></td>
<td class="c7">64</td>
<td class="c7">128</td>
<td class="c7">Up to 30</td>
<td class="c7">Up to 20</td>
</tr></tbody></table><p>If you’re currently using earlier generations of compute-optimized instances, you can adopt C8i-flex instances without making changes to your application or your workload.</p><p><strong class="c6">Now available</strong><br />Amazon EC2 C8i and C8i-flex instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Spain) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a>. C8i and C8i-flex instances can be purchased as <a href="https://aws.amazon.com/ec2/pricing/on-demand/?trk=cf96f8ec-de40-4ee0-8b64-3f7cf7660da2&amp;sc_channel=el">On-Demand Instances</a>, with <a href="https://aws.amazon.com/savingsplans/?trk=cc9e0036-98c5-4fa8-8df0-5281f75284ca&amp;sc_channel=el">Savings Plans</a>, or as <a href="https://aws.amazon.com/ec2/spot/pricing/?trk=307341f6-3463-47d5-ba81-0957847a9b73&amp;sc_channel=el">Spot Instances</a>. C8i instances are also available as <a href="https://aws.amazon.com/ec2/pricing/dedicated-instances/">Dedicated Instances</a> and on <a href="https://aws.amazon.com/ec2/dedicated-hosts/pricing/">Dedicated Hosts</a>. To learn more, visit the <a href="https://aws.amazon.com/ec2/pricing">Amazon EC2 Pricing page</a>.</p><p>Give C8i and C8i-flex instances a try in the <a href="https://console.aws.amazon.com/ec2/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon EC2 console</a>. 
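</p><p>As a rough illustration of working with the spec tables above, here is a minimal Python sketch (the figures are copied from the tables in this post; the helper function is illustrative, not an AWS tool) that picks the smallest C8i size covering a vCPU and memory requirement:</p>

```python
# vCPU and memory (GiB) per C8i size, copied from the spec table above.
# Every size follows the 2:1 memory-to-vCPU ratio.
C8I_SPECS = {
    "c8i.large": (2, 4),
    "c8i.xlarge": (4, 8),
    "c8i.2xlarge": (8, 16),
    "c8i.4xlarge": (16, 32),
    "c8i.8xlarge": (32, 64),
    "c8i.12xlarge": (48, 96),
    "c8i.16xlarge": (64, 128),
    "c8i.24xlarge": (96, 192),
    "c8i.32xlarge": (128, 256),
    "c8i.48xlarge": (192, 384),
    "c8i.96xlarge": (384, 768),
}

def smallest_c8i(vcpus_needed, memory_gib_needed):
    """Return the smallest C8i size meeting both requirements."""
    # Dicts preserve insertion order, so sizes are scanned smallest first.
    for size, (vcpus, mem) in C8I_SPECS.items():
        if vcpus >= vcpus_needed and mem >= memory_gib_needed:
            return size
    raise ValueError("no single C8i instance is large enough")

print(smallest_c8i(12, 24))  # -> c8i.4xlarge
```

<p>The same approach works for the C8i-flex table if your workload fits the flex usage profile.</p><p>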
To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/c8i/">Amazon EC2 C8i instances page</a> and send feedback to <a href="https://repost.aws/tags/TAO-wqN9fYRoyrpdULLa5y7g/amazon-ec-2">AWS re:Post for EC2</a> or through your usual AWS Support contacts.</p><p>— <a href="https://linkedin.com/in/channy/">Channy</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="c4381084-d9b8-446d-bdc9-0aadf2f308c7" data-title="Introducing new compute-optimized Amazon EC2 C8i and C8i-flex instances" data-url="https://aws.amazon.com/blogs/aws/introducing-new-compute-optimized-amazon-ec2-c8i-and-c8i-flex-instances/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-new-compute-optimized-amazon-ec2-c8i-and-c8i-flex-instances/"/>
    <updated>2025-10-06T20:33:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-customer-managed-kms-keys-for-encryption-at-rest/</id>
    <title><![CDATA[AWS IAM Identity Center now supports customer-managed KMS keys for encryption at rest]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Starting today, you can use your own <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> keys to encrypt identity data, such as user and group attributes, stored in <a href="https://aws.amazon.com/iam/identity-center/">AWS IAM Identity Center</a> organization instances.</p><p>Many organizations operating in regulated industries need complete control over encryption key management. While Identity Center already encrypts data at rest using AWS-owned keys, some customers require the ability to manage their own encryption keys for audit and compliance purposes.</p><p>With this launch, you can now use <a href="https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html">customer-managed KMS keys</a> (CMKs) to encrypt Identity Center identity data at rest. CMKs provide you with full control over the key lifecycle, including creation, rotation, and deletion. You can configure granular access controls to keys with <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> key policies and IAM policies, helping to ensure that only authorized principals can access your encrypted data. At launch time, the CMK must reside in the same AWS account and Region as your IAM Identity Center instance. The integration between Identity Center and KMS provides detailed <a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html">AWS CloudTrail</a> logs for auditing key usage and helps meet regulatory compliance requirements.</p><p>Identity Center supports both single-Region and multi-Region keys to match your deployment needs. While Identity Center instances can currently only be deployed in a single Region, we recommend using multi-Region AWS KMS keys unless your company policies restrict you to single-Region keys. Multi-Region keys provide consistent key material across Regions while maintaining independent key infrastructure in each Region. 
This gives you more flexibility in your encryption strategy and helps future-proof your deployment.</p><p><strong>Let’s get started<br /></strong> Let’s imagine I want to use a CMK to encrypt the identity data of my Identity Center organization instance. My organization uses Identity Center to give employees access to <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/awsapps.html">AWS managed applications</a>, such as <a href="https://aws.amazon.com/q/business/">Amazon Q Business</a> or <a href="https://aws.amazon.com/athena">Amazon Athena</a>.</p><p>As of today, some AWS managed applications cannot be used with Identity Center configured with a customer-managed KMS key. See <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/awsapps-that-work-with-identity-center.html">AWS managed applications that you can use with Identity Center</a> to stay up to date with the evolving list of compatible applications.</p><p>The high-level process starts with creating a symmetric customer-managed key (CMK) in AWS KMS. The key must be configured for encrypt and decrypt operations. Next, I configure the key policies to grant access to Identity Center, AWS managed applications, administrators, and other principals who need access to the Identity Center and Identity Store service APIs. Depending on your usage of Identity Center, you’ll have to define different policies for the key and IAM policies for IAM principals. <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/identity-center-customer-managed-keys.html">The service documentation has more details to help you cover the most common use cases</a>.</p><p>This demo is in three parts. I first create a customer-managed key in AWS KMS and configure it with permissions that authorize Identity Center and AWS managed applications to use it. 
Second, I update the IAM policies for the principals that will use the key from another AWS account, such as AWS application administrators. Finally, I configure Identity Center to use the key.</p><p><strong>Part 1: Create the key and define permissions</strong></p><p>First, let’s create a new CMK in AWS KMS.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/04/2025-07-04_11-01-01.png"><img class="aligncenter wp-image-97759" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/04/2025-07-04_11-01-01.png" alt="AWS KMS, create key, part 1" width="800" height="550" /></a></p><p>The key must be in the same AWS Region and AWS account as the Identity Center instance. You must create the Identity Center instance and the key in the management account of your organization within AWS Organizations.</p><p>I navigate to the AWS Key Management Service (AWS KMS) console in the same Region as my Identity Center instance, then I choose <strong>Create a key</strong>. This launches the key creation wizard.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/04/2025-07-04_11-01-52.png"><img class="aligncenter wp-image-97760" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/04/2025-07-04_11-01-52.png" alt="AWS KMS, create key, part 2" width="800" height="511" /></a></p><p>Under <strong>Step 1–Configure key</strong>, I select the key type–either Symmetric (a single key used for both encryption and decryption) or Asymmetric (a public-private key pair for encryption/decryption and signing/verification). Identity Center requires symmetric keys for encryption at rest. 
I select <strong>Symmetric</strong>.</p><p>For key usage, I select <strong>Encrypt and decrypt</strong>, which allows the key to be used only for encrypting and decrypting data.</p><p>Under <strong>Advanced options</strong>, I select <strong>KMS – recommended</strong> for <strong>Key material origin,</strong> so AWS KMS creates and manages the key material.</p><p>For <strong>Regionality</strong>, I choose between a Single-Region and a Multi-Region key. I select <strong>Multi-Region key</strong> to allow key administrators to replicate the key to other Regions. As explained earlier, Identity Center doesn’t require this today, but it helps future-proof your configuration. Remember that you cannot convert a single-Region key to a multi-Region key after creation (but you can change the key used by Identity Center at any time).</p><p>Then, I choose <strong>Next</strong> to proceed with additional configuration steps, such as adding labels, defining administrative permissions, setting usage permissions, and reviewing the final configuration before creating the key.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/04/2025-07-04_11-11-35.png"><img class="aligncenter wp-image-97761" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/04/2025-07-04_11-11-35.png" alt="AWS KMS, create key, part 3" width="800" height="484" /></a></p><p>Under <strong>Step 2–Add Labels</strong>, I enter an <strong>Alias</strong> name for my key and select <strong>Next</strong>.</p><p>In this demo, I am editing the key policy by adding policy statements using templates provided <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/identity-center-customer-managed-keys.html#choose-kms-key-policy-statements">in the documentation</a>. 
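</p><p>For reference, the Step 1 console choices above (symmetric, encrypt and decrypt, KMS key material, multi-Region) map to the parameters of the AWS KMS <code>CreateKey</code> API. Here is a minimal sketch of the request parameters, assuming you supply your own key policy document; the boto3 call itself is shown only as a comment because it requires credentials:</p>

```python
import json

# Placeholder policy document; in practice, use the policy statements
# shown in this post.
key_policy = {"Version": "2012-10-17", "Statement": []}

# Parameters mirroring the console choices described above. Pass them to
# boto3, for example: boto3.client("kms").create_key(**create_key_kwargs)
create_key_kwargs = {
    "KeySpec": "SYMMETRIC_DEFAULT",  # Symmetric key type
    "KeyUsage": "ENCRYPT_DECRYPT",   # Encrypt and decrypt
    "Origin": "AWS_KMS",             # KMS creates and manages the key material
    "MultiRegion": True,             # replicable to other Regions later
    "Policy": json.dumps(key_policy),
}
print(sorted(create_key_kwargs))
```

<p>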
I skip Step 3 and Step 4 and navigate to <strong>Step 5–Edit key policy</strong>.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/04/2025-07-04_11-47-58.png"><img class="aligncenter wp-image-97786" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/04/2025-07-04_11-47-58.png" alt="AWS KMS, create key, part 5" width="800" height="517" /></a></p><p>Identity Center requires, at a minimum, permissions allowing Identity Center and its administrators to use the key. Therefore, I add four policy statements: the first and second authorize the administrators of the service, and the third and fourth authorize the Identity Center and Identity Store services themselves.</p><pre class="lang-json">{
        "Version": "2012-10-17",
        "Id": "key-consolepolicy-3",
        "Statement": [
                {
                        "Sid": "Allow_IAMIdentityCenter_Admin_to_use_the_KMS_key_via_IdentityCenter_and_IdentityStore",
                        "Effect": "Allow",
                        "Principal": {
                                "AWS": "ARN_OF_YOUR_IDENTITY_CENTER_ADMIN_IAM_ROLE"
                        },
                        "Action": [
                                "kms:Decrypt",
                                "kms:Encrypt",
                                "kms:GenerateDataKeyWithoutPlaintext"
                        ],
                        "Resource": "*",
                        "Condition": {
                                "StringLike": {
                                        "kms:ViaService": [
                                                "sso.*.amazonaws.com",
                                                "identitystore.*.amazonaws.com"
                                        ]
                                }
                        }
                },
                {
                        "Sid": "Allow_IdentityCenter_admin_to_describe_the_KMS_key",
                        "Effect": "Allow",
                        "Principal": {
                                "AWS": "ARN_OF_YOUR_IDENTITY_CENTER_ADMIN_IAM_ROLE"
                        },
                        "Action": "kms:DescribeKey",
                        "Resource": "*"
                },
                {
                        "Sid": "Allow_IdentityCenter_and_IdentityStore_to_use_the_KMS_key",
                        "Effect": "Allow",
                        "Principal": {
                                "Service": [
                                        "sso.amazonaws.com",
                                        "identitystore.amazonaws.com"
                                ]
                        },
                        "Action": [
                                "kms:Decrypt",
                                "kms:ReEncryptTo",
                                "kms:ReEncryptFrom",
                                "kms:GenerateDataKeyWithoutPlaintext"
                        ],
                        "Resource": "*",
            "Condition": {
               "StringEquals": { 
                      "aws:SourceAccount": "&lt;Identity Center Account ID&gt;" 
                   }
            }           
                },
                {
                        "Sid": "Allow_IdentityCenter_and_IdentityStore_to_describe_the_KMS_key",
                        "Effect": "Allow",
                        "Principal": {
                                "Service": [
                                        "sso.amazonaws.com",
                                        "identitystore.amazonaws.com"
                                ]
                        },
                        "Action": [
                                "kms:DescribeKey"
                        ],
                        "Resource": "*"
                }               
        ]
}</pre><p>I also have to add policy statements for my use case: the use of AWS managed applications. I add these two policy statements to authorize AWS managed applications and their administrators to use the KMS key. <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/identity-center-customer-managed-keys.html#choose-kms-key-policy-statements">The documentation lists additional use cases and their respective policies</a>.</p><pre class="lang-json">{
    "Sid": "Allow_AWS_app_admins_in_the_same_AWS_organization_to_use_the_KMS_key",
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals" : {
           "aws:PrincipalOrgID": "MY_ORG_ID (format: o-xxxxxxxx)"
        },
        "StringLike": {
            "kms:ViaService": [
                "sso.*.amazonaws.com", "identitystore.*.amazonaws.com"
            ]
        }
    }
},
{
   "Sid": "Allow_managed_apps_to_use_the_KMS_Key",
   "Effect": "Allow",
   "Principal": "*",
   "Action": [
      "kms:Decrypt"
    ],
   "Resource": "*",
   "Condition": {
      "Bool": { "aws:PrincipalIsAWSService": "true" },
      "StringLike": {
         "kms:ViaService": [
             "sso.*.amazonaws.com", "identitystore.*.amazonaws.com"
         ]
      },
      "StringEquals": { "aws:SourceOrgID": "MY_ORG_ID (format: o-xxxxxxxx)" }
   }
}</pre><p>You can further restrict the key usage to a specific Identity Center instance, specific application instances, or specific application administrators. <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/advanced-kms-policy.html">The documentation contains examples of advanced key policies for your use cases</a>.</p><p>To help protect against IAM role name changes when permission sets are recreated, use the approach described in the Custom trust policy example.</p><p><strong>Part 2: Update IAM policies to allow use of the KMS key from another AWS account</strong></p><p>Any IAM principal that uses the Identity Center service APIs from another AWS account, such as an Identity Center delegated administrator or an AWS application administrator, needs an IAM policy statement that allows use of the KMS key via these APIs.</p><p>I grant permissions to access the key by creating a new policy and attaching the policy to the IAM role relevant for my use case. You can also add these statements to the existing identity-based policies of the IAM role.</p><p>To do so, after the key is created, I locate its ARN and replace the <code>key_ARN</code> in the template below. Then, I attach the policy to the managed application administrator IAM principal. The documentation also covers <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/baseline-KMS-key-policy.html#baseline-kms-key-policy-statements-for-use-of-iam-identity-center-mandatory">IAM policies that grant Identity Center delegated administrators permissions to access the key</a>.</p><p>Here is an example for managed application administrators:</p><pre class="lang-json">{
      "Sid": "Allow_app_admins_to_use_the_KMS_key_via_IdentityCenter_and_IdentityStore",
      "Effect": "Allow",
      "Action": 
        "kms:Decrypt",
      "Resource": "&lt;key_ARN&gt;",
      "Condition": {
        "StringLike": {
          "kms:ViaService": [
            "sso.*.amazonaws.com",
            "identitystore.*.amazonaws.com"
          ]
        }
      }
}</pre><p><a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/baseline-KMS-key-policy.html">The documentation shares IAM policy templates for the most common use cases</a>.</p><p><strong>Part 3: Configure IAM Identity Center to use the key</strong></p><p>I can configure a CMK either during the enablement of an Identity Center organization instance or on an existing instance, and I can change the encryption configuration at any time by switching between CMKs or reverting to AWS-owned keys.</p><p>Please note that an incorrect configuration of KMS key permissions can disrupt Identity Center operations and access to AWS managed applications and accounts through Identity Center. Proceed carefully with this final step and make sure you have read and understood the documentation.</p><p>After I have created and configured my CMK, I can select it under <strong>Advanced configuration</strong> when enabling Identity Center.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/06/25/2025-06-25_10-39-53.png"><img class="aligncenter size-full wp-image-97502" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/06/25/2025-06-25_10-39-53.png" alt="IDC with CMK configuration" width="800" height="583" /></a></p><p>To configure a CMK on an existing Identity Center instance using the AWS Management Console, I start by navigating to the Identity Center section of the <a href="https://console.aws.amazon.com">AWS Management Console</a>. 
From there, I select <strong>Settings</strong> from the navigation pane, then I select the <strong>Management</strong> tab, and select <strong>Manage encryption</strong> in the <strong>Key for encrypting IAM Identity Center data at rest</strong> section.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/06/25/2025-06-25_15-04-27.png"><img class="aligncenter size-full wp-image-97503" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/06/25/2025-06-25_15-04-27.png" alt="Change key on existing IDC" width="800" height="545" /></a></p><p>At any time, I can select another CMK from the same AWS account, or switch back to an AWS-owned key.</p><p>After choosing <strong>Save</strong>, the key change process takes a few seconds to complete. All service functionality continues uninterrupted during the transition. If, for any reason, Identity Center cannot access the new key, an error message is returned and Identity Center continues to use the current key, so your identity data remains encrypted as before.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/06/25/2025-06-25_15-04-43.png"><img class="aligncenter size-full wp-image-97504" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/06/25/2025-06-25_15-04-43.png" alt="CMK on IDC, select a new key" width="400" height="246" /></a></p><p><strong>Things to keep in mind<br /></strong> The encryption key you create becomes a crucial component of your Identity Center. When you choose to use your own managed key to encrypt identity attributes at rest, verify the following points.</p><ul><li>Have you configured <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/baseline-KMS-key-policy.html">the necessary permissions</a> to use the KMS key? 
Without proper permissions, enabling the CMK may fail or disrupt IAM Identity Center administration and AWS managed applications.</li>
<li>Have you verified that your AWS managed applications are compatible with CMKs? For a list of compatible applications, see <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/awsapps-that-work-with-identity-center.html">AWS managed applications that you can use with IAM Identity Center</a>. Enabling a CMK on an Identity Center instance that is used by AWS managed applications incompatible with CMKs will disrupt those applications. If you have incompatible applications, do not proceed.</li>
<li>Is your organization using AWS managed applications that require additional IAM role configuration to use the Identity Center and Identity Store APIs? For each such AWS managed application that’s already deployed, check the managed application’s User Guide for updated KMS key permissions for IAM Identity Center usage and update them as instructed to prevent application disruption.</li>
<li>For brevity, the KMS key policy statements in this post omit the <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/encryption-at-rest.html#iam-identity-center-encryption-context">encryption context</a>, which allows you <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/advanced-kms-policy.html#using-encryption-context-to-restrict-access">to restrict the use of the KMS key to Identity Center, including to a specific instance</a>. For your production scenarios, you can add a condition like this for Identity Center:
<pre class="lang-json">"Condition": {
   "StringLike": {
      "kms:EncryptionContext:aws:sso:instance-arn": "${identity_center_arn}",
      "kms:ViaService": "sso.*.amazonaws.com"
    }
}</pre>
<p>or this for Identity Store:</p>
<pre class="lang-json">"Condition": {
   "StringLike": {
      "kms:EncryptionContext:aws:identitystore:identitystore-arn": "${identity_store_arn}",
      "kms:ViaService": "identitystore.*.amazonaws.com"
    }
}</pre></li>
</ul><p><strong>Pricing and availability<br /></strong> Standard AWS KMS charges apply for key storage and API usage. Identity Center remains available at no additional cost.</p><p>This capability is now available in all AWS commercial Regions, AWS GovCloud (US), and AWS China Regions. To learn more, visit the <a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/encryption-at-rest.html">IAM Identity Center User Guide</a>.</p><p>We look forward to learning how you use this new capability to meet your security and compliance requirements.</p><a href="https://linktr.ee/sebsto">— seb</a></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="514a1b55-6150-4b6e-8a96-f2463fc24c2b" data-title="AWS IAM Identity Center now supports customer-managed KMS keys for encryption at rest" data-url="https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-customer-managed-kms-keys-for-encryption-at-rest/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
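<p>To tie the policy pieces of this post together, here is a small stdlib-only Python sketch (the instance ARN is a made-up placeholder, and a real key policy must also keep the administrator and DescribeKey statements shown earlier) that builds the service statement with the encryption-context condition and sanity-checks it before you paste it into the KMS console:</p>

```python
import json

# Hypothetical placeholder; substitute your own Identity Center instance ARN.
IDENTITY_CENTER_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE1111111111"

# The service statement from this post, narrowed with the encryption-context
# condition shown under "Things to keep in mind".
service_statement = {
    "Sid": "Allow_IdentityCenter_and_IdentityStore_to_use_the_KMS_key",
    "Effect": "Allow",
    "Principal": {"Service": ["sso.amazonaws.com", "identitystore.amazonaws.com"]},
    "Action": [
        "kms:Decrypt",
        "kms:ReEncryptTo",
        "kms:ReEncryptFrom",
        "kms:GenerateDataKeyWithoutPlaintext",
    ],
    "Resource": "*",
    "Condition": {
        "StringLike": {
            "kms:EncryptionContext:aws:sso:instance-arn": IDENTITY_CENTER_ARN,
            "kms:ViaService": "sso.*.amazonaws.com",
        }
    },
}

key_policy = {
    "Version": "2012-10-17",
    "Id": "key-consolepolicy-3",
    "Statement": [service_statement],
}

# Sanity checks: the policy serializes to valid JSON, and every statement
# carries the fields a KMS key policy statement requires.
serialized = json.dumps(key_policy, indent=4)
for stmt in key_policy["Statement"]:
    assert {"Sid", "Effect", "Principal", "Action", "Resource"}.issubset(stmt.keys())
```

<p>Checks like these catch obvious mistakes, such as malformed JSON or a statement missing a required field, before an incorrect policy reaches Identity Center.</p>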
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-customer-managed-kms-keys-for-encryption-at-rest/"/>
    <updated>2025-10-06T19:52:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-bedrock-aws-outposts-amazon-ecs-managed-instances-aws-builder-id-and-more-october-6-2025/</id>
    <title><![CDATA[AWS Weekly Roundup: Amazon Bedrock, AWS Outposts, Amazon ECS Managed Instances, AWS Builder ID, and more (October 6, 2025)]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Last week, Anthropic’s Claude Sonnet 4.5, the world’s best coding model according to SWE-bench, became available in <a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line.html">Amazon Q command line interface (CLI)</a> and <a href="https://kiro.dev/">Kiro</a>. I’m excited about this for two reasons:</p><p>First, a few weeks ago I spent four intensive days with a global customer delivering an AI-assisted development workshop, where I experienced firsthand how <a href="https://aws.amazon.com/q/">Amazon Q</a> CLI boosts developer productivity. During the workshop, the customer was able to add a new feature to their application within a day using Amazon Q CLI, something that would traditionally have taken them at least a couple of weeks. With Anthropic’s Claude Sonnet 4.5 in Amazon Q CLI, I know developer productivity will be enhanced further.</p><p>Second, I’ve started preparing for my code talk at <a href="https://reinvent.awsevents.com/">AWS re:Invent 2025</a>, where my co-speaker and I will show live coding to modernize a legacy codebase using Kiro. I can’t wait to use Anthropic’s Claude Sonnet 4.5 in Kiro to create a live demo. If you want to see this demo and over a thousand other sessions on cloud and AI, join us at <a href="https://reinvent.awsevents.com/">AWS re:Invent 2025</a> in Las Vegas from December 1–5.</p><p><strong>Last week’s launches</strong><br />Here are some launches that got my attention:</p><ul><li><a href="https://aws.amazon.com/blogs/aws/introducing-claude-sonnet-4-5-in-amazon-bedrock-anthropics-most-intelligent-model-best-for-coding-and-complex-agents/">Availability of Claude Sonnet 4.5 in Amazon Bedrock</a> – Anthropic’s most intelligent model, best for coding and complex agents, is now available in Amazon Bedrock. 
By using Claude Sonnet 4.5 in Amazon Bedrock, developers gain access to a fully managed service that not only provides a unified API for foundation models (FMs) but keeps their data under complete control with enterprise-grade tools for security, and optimization.</li>
<li><a href="https://aws.amazon.com/blogs/aws/announcing-aws-outposts-third-party-storage-integration-with-dell-and-hpe/">AWS Outposts supports third-party storage integration with Dell and HPE</a> – AWS Outposts third-party storage integration now includes <a href="https://www.dell.com/en-us/shop/storage-servers-and-networking-for-business/sf/power-store">Dell PowerStore</a> and <a href="https://www.hpe.com/us/en/storage/alletra.html">HPE Alletra Storage MP B10000</a> systems, joining the list of existing integrations with <a href="https://aws.amazon.com/blogs/aws/announcing-aws-outposts-third-party-storage-integration-with-dell-and-hpe/#:~:text=NetApp%20on%2Dpremises%20enterprise%20storage%20arrays">NetApp on-premises enterprise storage arrays</a> and <a href="https://aws.amazon.com/blogs/aws/announcing-aws-outposts-third-party-storage-integration-with-dell-and-hpe/#:~:text=Pure%20Storage%20FlashArray">Pure Storage FlashArray</a>. This integration serves three key purposes. First, it helps you maintain your existing storage infrastructure while migrating VMware workloads to AWS. Second, it helps you meet strict data residency requirements by keeping your data on premises while using AWS services. Third, it means you can use AWS Outposts with third-party storage arrays through AWS tooling.</li>
<li><a href="https://aws.amazon.com/blogs/aws/announcing-amazon-ecs-managed-instances-for-containerized-applications/">Amazon ECS Managed Instances now available</a> – Amazon ECS Managed Instances for containerized applications is a new fully managed compute option for Amazon ECS designed to eliminate infrastructure management overhead while giving you access to the full capabilities of Amazon EC2. ECS Managed Instances helps you quickly launch and scale your workloads while enhancing performance and reducing your total cost of ownership.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/application-map-generally-available-amazon-cloudwatch/">Application map is now generally available for Amazon CloudWatch</a> – Amazon CloudWatch now helps you monitor large-scale distributed applications by automatically discovering and organizing services into groups based on configurations and their relationships. With this new application performance monitoring (APM) capability, you can quickly visualize which applications and dependencies to focus on while troubleshooting your distributed applications.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/open-source-mcp-server-amazon-bedrock-agentcore/">Amazon Bedrock AgentCore Model Context Protocol (MCP) server now available</a> – With built-in support for runtime, gateway integration, identity management, and agent memory, the AgentCore MCP server is purpose-built to speed up creation of components compatible with Bedrock AgentCore. You can use the AgentCore MCP server for rapid prototyping, production AI solutions, or to scale your agent infrastructure.</li>
</ul><p><strong>Additional updates<br /></strong> Here are some additional news items and blog posts that I found interesting:</p><ul><li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-builder-id-sign-in-google/">AWS Builder ID now supports Sign in with Google</a> – You can now create an AWS Builder ID using Sign in with Google. AWS Builder ID is a personal profile that provides access to AWS applications including Kiro, AWS Builder Center, AWS Training and Certification, AWS re:Post, and AWS Startups.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-api-mcp-server-v1-0-0-release/">AWS API MCP Server v1.0.0 release</a> – The AWS API MCP Server acts as a bridge between AI assistants and AWS services, enabling foundation models to interact with any AWS API through natural language by creating and executing syntactically correct CLI commands. The AWS API MCP Server is open source and available now in the <a href="https://github.com/awslabs/mcp/tree/main/src/aws-api-mcp-server">AWS Labs GitHub repository</a>.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/10/aws-knowledge-mcp-server-generally-available/">AWS Knowledge MCP Server now generally available</a> – The AWS Knowledge server gives AI agents and MCP clients access to authoritative knowledge, including documentation, blog posts, What’s New announcements, and Well-Architected best practices, in an LLM-compatible format. With this release, the server also includes knowledge about the regional availability of AWS APIs and CloudFormation resources.</li>
<li><a href="https://aws.amazon.com/about-aws/whats-new/2025/09/aws-transform-terraform-vmware-network-automation/">AWS Transform now enables Terraform for VMware network automation</a> – AWS Transform now offers Terraform as an additional option to generate network infrastructure code automatically from VMware environments. The service converts your source network definitions into reusable Terraform modules, complementing current AWS CloudFormation and AWS Cloud Development Kit (CDK) support.</li>
</ul><p><strong>Upcoming AWS events</strong><br />Check your calendar and sign up for upcoming AWS events:</p><ul><li><a href="https://info.devpost.com/blog/aws-ai-agent-global-hackathon?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS AI Agent Global Hackathon</a> – This is your chance to dive deep into our powerful generative AI stack and create something truly awesome. From September 8 to October 20, you have the opportunity to create AI agents using the AWS suite of AI services, competing for over $45,000 in prizes and exclusive go-to-market opportunities.</li>
<li><a href="https://aws.amazon.com/startups/lp/aws-gen-ai-lofts?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Gen AI Lofts</a> – You can learn about AWS AI products and services in exclusive sessions, meet industry-leading experts, and network with investors and peers. Register in your nearest city: <a href="https://aws.amazon.com/startups/lp/aws-gen-ai-loft-paris?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Paris</a> (October 7–21), <a href="https://aws.amazon.com/startups/lp/aws-gen-ai-loft-london?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">London</a> (October 13–21), and <a href="https://aws.amazon.com/startups/lp/aws-gen-ai-loft-tel-aviv?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">Tel Aviv</a> (November 11–19).</li>
<li><a href="https://aws.amazon.com/events/community-day/?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS Community Days</a> – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: <a href="https://www.aws-community-day.de/">Munich</a> (October 7), <a href="https://awscommunity.eu/">Budapest</a> (October 16).</li>
</ul><p>You can browse <a href="https://aws.amazon.com/events/explore-aws-events?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">all upcoming AWS events</a> and <a href="https://aws.amazon.com/startups/events?trk=c4ea046f-18ad-4d23-a1ac-cdd1267f942c&amp;sc_channel=el">AWS startup events</a>.</p><p>That’s all for this week. Check back next Monday for another <a href="https://aws.amazon.com/blogs/aws/tag/week-in-review/?trk=7c8639c6-87c6-47d6-9bd0-a5812eecb848&amp;sc_channel=el">Weekly Roundup</a>!</p><p>— <a href="https://www.linkedin.com/in/kprasadrao/">Prasad</a></p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="18f081b8-2200-45b1-9420-7cea5eb880df" data-title="AWS Weekly Roundup: Amazon Bedrock, AWS Outposts, Amazon ECS Managed Instances, AWS Builder ID, and more (October 6, 2025)" data-url="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-bedrock-aws-outposts-amazon-ecs-managed-instances-aws-builder-id-and-more-october-6-2025/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-bedrock-aws-outposts-amazon-ecs-managed-instances-aws-builder-id-and-more-october-6-2025/"/>
    <updated>2025-10-06T15:42:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/announcing-amazon-ecs-managed-instances-for-containerized-applications/</id>
    <title><![CDATA[Announcing Amazon ECS Managed Instances for containerized applications]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table id="amazon-polly-audio-table"><tbody><tr><td id="amazon-polly-audio-tab">
<p></p></td></tr></tbody>


</table><p>Today, we’re announcing Amazon ECS Managed Instances, a new compute option for <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a> that enables developers to use the full range of <a href="https://aws.amazon.com/ec2">Amazon Elastic Compute Cloud (Amazon EC2)</a> capabilities while offloading infrastructure management responsibilities to <a href="https://aws.amazon.com">Amazon Web Services (AWS)</a>. This new offering combines the operational simplicity of offloading infrastructure with the flexibility and control of Amazon EC2, which means customers can focus on building applications that drive innovation, while reducing total cost of ownership (TCO) and maintaining AWS best practices.</p><p>Amazon ECS Managed Instances provides a fully managed container compute environment that supports a broad range of EC2 instance types and deep integration with AWS services. By default, it automatically selects the most cost-optimized EC2 instances for your workloads, but you can specify particular instance attributes or types when needed. AWS handles all aspects of infrastructure management, including provisioning, scaling, security patching, and cost optimization, enabling you to concentrate on building and running your applications.</p><p><strong>Let’s try it out</strong></p><p>Looking at the <a href="https://aws.amazon.com/console/">AWS Management Console</a> experience for creating a new Amazon ECS cluster, I can see the new option for using ECS Managed Instances. Let’s take a quick tour of all the new options.</p><p><img class="alignnone size-large wp-image-99478" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/24/Screenshot-2025-09-24-at-10.51.19%E2%80%AFAM-1024x502.png" alt="Creating an ECS cluster with Managed Instances" width="1024" height="502" /></p><p>After I’ve selected <strong>Fargate and Managed Instances</strong>, I’m presented with two options. 
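Conceptually, the attribute-based, cost-optimized selection described above amounts to filtering the EC2 instance-type catalog by requirements such as vCPU and memory, then ranking the matches by cost. Here is a minimal, hypothetical Python sketch of that idea; the instance data, prices, and function names are illustrative, not the actual ECS selection algorithm:

```python
# Hypothetical sketch: filter an instance-type catalog by required attributes,
# then rank matches by hourly cost, similar in spirit to how ECS Managed
# Instances narrows instance types from attribute criteria. Data is illustrative.
from dataclasses import dataclass


@dataclass
class InstanceType:
    name: str
    vcpus: int
    memory_gib: int
    hourly_usd: float  # illustrative prices, not real quotes


CATALOG = [
    InstanceType("m5.large", 2, 8, 0.096),
    InstanceType("m5.xlarge", 4, 16, 0.192),
    InstanceType("c5.xlarge", 4, 8, 0.170),
    InstanceType("r5.large", 2, 16, 0.126),
]


def match_instance_types(min_vcpus: int, min_memory_gib: int) -> list[InstanceType]:
    """Return catalog entries meeting the attribute criteria, cheapest first."""
    matches = [
        t for t in CATALOG
        if t.vcpus >= min_vcpus and t.memory_gib >= min_memory_gib
    ]
    return sorted(matches, key=lambda t: t.hourly_usd)


# Example: tasks needing at least 4 vCPUs and 8 GiB of memory.
for t in match_instance_types(4, 8):
    print(t.name)
```

In the real service this filtering happens server-side over the full EC2 catalog, and ECS also weighs resilience metrics, not just price.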
If I select <strong>Use ECS default</strong>, Amazon ECS will choose general purpose instance types by grouping pending tasks together and picking the optimal instance type based on cost and resilience metrics. This is the most straightforward and recommended way to get started. Selecting <strong>Use custom – advanced</strong> opens up additional configuration parameters, where I can fine-tune the attributes of instances Amazon ECS will use.</p><p><img class="alignnone size-large wp-image-99479" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/24/Screenshot-2025-09-24-at-12.59.44%E2%80%AFPM-1024x593.png" alt="Creating an ECS cluster with Managed Instances" width="1024" height="593" /></p><p>By default, I see <strong>CPU</strong> and <strong>Memory</strong> as attributes, but I can select from 20 additional attributes to further filter the list of available instance types Amazon ECS can access.</p><p><img class="alignnone size-large wp-image-99577" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/30/Screenshot-2025-09-30-at-4.05.55%E2%80%AFPM-1024x735.png" alt="ECS Managed Instances" width="1024" height="735" /></p><p>After I’ve made my attribute selections, I see a list of all the instance types that match my choices.</p><p><img class="alignnone size-large wp-image-99484" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/24/Screenshot-2025-09-24-at-12.59.57%E2%80%AFPM-1-1024x466.png" alt="Creating an ECS cluster with Managed Instances" width="1024" height="466" /></p><p>From here, I can create my ECS cluster as usual, and Amazon ECS will provision instances on my behalf based on the attributes and criteria I’ve defined in the previous steps.</p><p><strong>Key features of Amazon ECS Managed Instances</strong></p><p>With Amazon ECS Managed Instances, AWS takes full responsibility for infrastructure management, handling all 
aspects of instance provisioning, scaling, and maintenance. This includes applying security patches on a regular 14-day cycle (due to instance connection draining, the actual lifetime of an instance may be longer), with the ability to schedule maintenance windows using Amazon EC2 event windows to minimize disruption to your applications.</p><p>The service provides exceptional flexibility in instance type selection. Although it automatically selects cost-optimized instance types by default, you maintain the power to specify desired instance attributes when your workloads require specific capabilities. This includes options for GPU acceleration, CPU architecture, and network performance requirements, giving you precise control over your compute environment.</p><p>To help optimize costs, Amazon ECS Managed Instances intelligently manages resource utilization by automatically placing multiple tasks on larger instances when appropriate. The service continually monitors and optimizes task placement, consolidating workloads onto fewer instances so that idle (empty) instances can be drained and terminated, providing both high availability and cost efficiency for your containerized applications.</p><p>Integration with existing AWS services is seamless, particularly with Amazon EC2 features such as EC2 pricing options. This deep integration means that you can maximize existing capacity investments while maintaining the operational simplicity of a fully managed service.</p><p>Security remains a top priority with Amazon ECS Managed Instances. The service runs on Bottlerocket, a purpose-built container operating system, and maintains your security posture through automated security patches and updates. You can see all the updates and patches applied to the Bottlerocket OS image on the <a href="https://bottlerocket.dev/en/os/">Bottlerocket website</a>. 
This comprehensive approach to security keeps your containerized applications running in a secure, maintained environment.</p><p><strong>Available now</strong></p><p>Amazon ECS Managed Instances is available today in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Africa (Cape Town), Asia Pacific (Singapore), and Asia Pacific (Tokyo) AWS Regions. You can start using Managed Instances through the AWS Management Console, AWS Command Line Interface (AWS CLI), or infrastructure as code (IaC) tools such as AWS Cloud Development Kit (AWS CDK) and AWS CloudFormation. You pay for the EC2 instances you use plus a management fee for the service.</p><p>To learn more about Amazon ECS Managed Instances, visit the <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ManagedInstances.html">documentation</a> and get started simplifying your container infrastructure today.</p></section><aside id="Comments" class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="66bc96b8-a739-4b29-8410-937e412fd6fd" data-title="Announcing Amazon ECS Managed Instances for containerized applications" data-url="https://aws.amazon.com/blogs/aws/announcing-amazon-ecs-managed-instances-for-containerized-applications/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/announcing-amazon-ecs-managed-instances-for-containerized-applications/"/>
    <updated>2025-09-30T18:46:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/announcing-aws-outposts-third-party-storage-integration-with-dell-and-hpe/</id>
    <title><![CDATA[Announcing AWS Outposts third-party storage integration with Dell and HPE]]></title>
    <summary><![CDATA[<section class="blog-post-content lb-rtxt"><table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Since <a href="https://aws.amazon.com/blogs/aws/announcing-second-generation-aws-outposts-racks-with-breakthrough-performance-and-scalability-on-premises/">announcing second-generation AWS Outposts racks</a> in April with breakthrough performance and scalability, we’ve continued to innovate on behalf of our customers at the edge of the cloud. Today, we’re expanding the <a href="https://aws.amazon.com/outposts/">AWS Outposts</a> third-party storage integration program to include <a href="https://www.dell.com/en-us/shop/storage-servers-and-networking-for-business/sf/power-store">Dell PowerStore</a> and <a href="https://www.hpe.com/us/en/storage/alletra.html">HPE Alletra Storage MP B10000</a> systems, joining our list of existing integrations with <a href="https://www.netapp.com/data-management/ontap-data-management-software/">NetApp on-premises enterprise storage arrays</a> and <a href="https://www.purestorage.com/products/nvme/flasharray-x.html">Pure Storage FlashArray</a>. This program makes it easy for customers to use AWS Outposts with third-party storage arrays through AWS-native tooling. This integration is particularly important for organizations migrating VMware workloads to AWS who need to maintain their existing storage infrastructure during the transition, and for those who must meet strict data residency requirements by keeping their data on premises while using AWS services.</p><p><img class="alignright wp-image-99472" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/24/Outposts-compute-rack_Gen2_front_45.png" alt="Outposts compute rack_Gen2_front_45" width="165" height="343" />This announcement builds upon two significant storage integration milestones we achieved in the past year. 
In December 2024, we introduced <a href="https://aws.amazon.com/blogs/compute/new-simplifying-the-use-of-third-party-block-storage-with-aws-outposts/">the ability to attach block data volumes from third-party storage</a> arrays to Amazon EC2 instances on Outposts directly through the AWS Management Console. Then in July 2025, we enabled <a href="https://aws.amazon.com/blogs/compute/deploying-external-boot-volumes-with-aws-outposts/">booting Amazon EC2 instances directly</a> from these external storage arrays. Now, with the addition of Dell and HPE, customers have even more choice in how they integrate their on-premises storage investments with <a href="https://aws.amazon.com/outposts/">AWS Outposts</a>.</p><p><strong>Enhanced storage integration capabilities</strong></p><p>Our third-party storage integration supports both data and boot volumes, offering two boot methods: iSCSI SANboot and Localboot. The iSCSI SANboot option enables both read-only and read-write boot volumes, while Localboot supports read-only boot volumes using either iSCSI or NVMe-over-TCP protocols. With this comprehensive approach, customers can centrally manage their storage resources while maintaining the consistent hybrid experience that Outposts provides.</p><p>Through the <a href="https://aws.amazon.com/ec2/">Amazon EC2</a> Launch Instance Wizard in the AWS Management Console, customers can configure their instances to use external storage from any of our supported partners. 
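The boot-method matrix described above (iSCSI SANboot supporting read-only and read-write boot volumes; Localboot supporting read-only boot volumes over iSCSI or NVMe-over-TCP) can be captured in a small validation helper. This is an illustrative Python sketch of that matrix, not an AWS API; all names are made up:

```python
# Illustrative model of the boot-method support described in the post:
# iSCSI SANboot allows read-only and read-write boot volumes, while
# Localboot allows read-only boot volumes over iSCSI or NVMe-over-TCP.
# These names mirror the blog's description, not any actual AWS API.

SUPPORTED = {
    # (boot_method, protocol) -> allowed boot-volume access modes
    ("sanboot", "iscsi"): {"read-only", "read-write"},
    ("localboot", "iscsi"): {"read-only"},
    ("localboot", "nvme-over-tcp"): {"read-only"},
}


def boot_config_valid(boot_method: str, protocol: str, access_mode: str) -> bool:
    """Check whether a boot method/protocol combination allows the access mode."""
    return access_mode in SUPPORTED.get((boot_method, protocol), set())


print(boot_config_valid("sanboot", "iscsi", "read-write"))            # True
print(boot_config_valid("localboot", "nvme-over-tcp", "read-write"))  # False
```

A pre-flight check like this can catch an unsupported combination (for example, a read-write Localboot volume) before an instance launch is attempted.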
For boot volumes, we provide AWS-verified <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html">AMIs</a> for <a href="https://www.microsoft.com/en-us/windows-server">Windows Server 2022</a> and <a href="https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux">Red Hat Enterprise Linux 9</a>, with automation scripts available through <a href="https://github.com/aws-samples/sample-outposts-third-party-storage-integration">AWS Samples</a> to simplify the setup process.</p><p><strong>Support for various Outposts configurations</strong></p><p>All third-party storage integration features are supported on Outposts 2U servers and both generations of Outposts racks. Support for second-generation Outposts racks means customers can combine the enhanced performance of our latest EC2 instances on Outposts—including twice the vCPU, memory, and network bandwidth—with their preferred storage solutions. The integration works seamlessly with both our new simplified network scaling capabilities and specialized Amazon EC2 instances designed for ultra-low latency and high throughput workloads.</p><p><strong>Things to know</strong></p><p>Customers can begin using these capabilities today with their existing Outposts deployments or when ordering new Outposts through the <a href="https://aws.amazon.com/console/">AWS Management Console</a>. If you are using third-party storage integration with Outposts servers, you can have either your onsite personnel or a third-party IT provider install the servers for you. After the Outposts servers are connected to your network, AWS will remotely provision compute and storage resources so you can start launching applications. For Outposts rack deployments, the process involves a setup where AWS technicians verify site conditions and network connectivity before the rack installation and activation. 
Storage partners assist with the implementation of the third-party storage components.</p><p>Third-party storage integration for Outposts with all compatible storage vendors is available at no additional charge in all AWS Regions where Outposts is supported. See the FAQs for <a href="https://aws.amazon.com/outposts/servers/faqs/">Outposts servers</a> and <a href="https://aws.amazon.com/outposts/rack/faqs/">Outposts racks</a> for the latest list of supported Regions.</p><p>This expansion of our Outposts third-party storage integration program demonstrates our continued commitment to providing flexible, enterprise-grade hybrid cloud solutions, meeting customers where they are in their cloud migration journey. To learn more about this capability and our supported storage vendors, visit the <a href="https://aws.amazon.com/outposts/partners">AWS Outposts partner page</a> and our technical documentation for <a href="https://docs.aws.amazon.com/outposts/latest/server-userguide/outpost-third-party-block-storage.html">Outposts servers</a>, <a href="https://docs.aws.amazon.com/outposts/latest/network-userguide/outpost-third-party-block-storage.html">second-generation Outposts racks</a>, and <a href="https://docs.aws.amazon.com/outposts/latest/userguide/outpost-third-party-block-storage.html">first-generation Outposts racks.</a> To learn more about partner solutions, check out <a href="https://www.dell.com/en-us/blog/unleashing-hybrid-cloud-power-dell-powerstore-now-validated-for-aws-outposts/">Dell PowerStore integration with AWS Outposts</a> and <a href="https://community.hpe.com/t5/around-the-storage-block/hpe-and-aws-extend-the-value-of-aws-outposts-with-hpe-alletra-mp/ba-p/7255845">HPE Alletra Storage MP B10000 integration with AWS Outposts</a>.</p></section><aside class="blog-comments"><div data-lb-comp="aws-blog:cosmic-comments" data-env="prod" data-content-id="1e58f9eb-5997-48e6-8900-96ac4701d396" data-title="Announcing AWS Outposts third-party storage integration 
with Dell and HPE" data-url="https://aws.amazon.com/blogs/aws/announcing-aws-outposts-third-party-storage-integration-with-dell-and-hpe/"><p data-failed-message="Comments cannot be loaded… Please refresh and try again.">Loading comments…</p></div>
</aside>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/announcing-aws-outposts-third-party-storage-integration-with-dell-and-hpe/"/>
    <updated>2025-09-30T16:39:00+02:00</updated>
  </entry>
  <entry>
    <id>https://aws.amazon.com/blogs/aws/introducing-claude-sonnet-4-5-in-amazon-bedrock-anthropics-most-intelligent-model-best-for-coding-and-complex-agents/</id>
    <title><![CDATA[Introducing Claude Sonnet 4.5 in Amazon Bedrock: Anthropic’s most intelligent model, best for coding and complex agents]]></title>
    <summary><![CDATA[<table><tbody><tr><td>
<p></p></td></tr></tbody>


</table><p>Today, we’re excited to announce that <a href="https://www.anthropic.com/news/claude-sonnet-4-5">Claude Sonnet 4.5</a>, powered by Anthropic, is now available in <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a>, a fully managed service that offers a choice of high-performing foundation models from leading AI companies. This new model builds upon Claude 4’s foundation to achieve state-of-the-art performance in coding and complex agentic applications.</p><p>Claude Sonnet 4.5 demonstrates advancements in agent capabilities, with enhanced performance in tool handling, memory management, and context processing. The model shows marked improvements in code generation and analysis, from identifying optimal improvements to exercising stronger judgment in refactoring decisions. It particularly excels at autonomous long-horizon coding tasks, where it can effectively plan and execute complex software projects spanning hours or days while maintaining consistent performance and reliability throughout the development cycle.</p><div id="attachment_99519" class="wp-caption aligncenter c6"><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/29/Sonnet_4-5_Eval_Blog.png"><img aria-describedby="caption-attachment-99519" class="wp-image-99519 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/29/Sonnet_4-5_Eval_Blog.png" alt="" width="2600" height="2288" /></a><p id="caption-attachment-99519" class="wp-caption-text">Source: <a href="https://www.anthropic.com/news/claude-sonnet-4-5">https://www.anthropic.com/news/claude-sonnet-4-5</a></p></div><p>By using Claude Sonnet 4.5 in Amazon Bedrock, developers gain access to a fully managed service that not only provides a unified API for foundation models but also ensures their data stays under complete control, with enterprise-grade tools for security and optimization.</p><p>Claude Sonnet 4.5 also seamlessly integrates with <a 
href="https://aws.amazon.com/bedrock/agentcore/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Bedrock AgentCore</a>, enabling developers to maximize the model’s capabilities for building complex agents. AgentCore’s purpose-built infrastructure complements the model’s enhanced abilities in tool handling, memory management, and context understanding. Developers can leverage complete session isolation, 8-hour long-running support, and comprehensive observability features to deploy and monitor production-ready agents from autonomous security operations to complex enterprise workflows.</p><p><strong>Business applications and use cases<br /></strong> Beyond its technical capabilities, Sonnet 4.5 delivers practical business value through consistent performance and advanced problem-solving abilities. The model excels at producing and editing business documents while maintaining reliable performance across complex workflows.</p><p>The model demonstrates strength in several key industries:</p><ul><li>Cybersecurity – Claude Sonnet 4.5 can be used to deploy agents that autonomously patch vulnerabilities before exploitation, shifting from reactive detection to proactive defense.</li>
<li>Finance – Sonnet 4.5 handles everything from entry-level financial analysis to advanced predictive analysis, helping transform manual audit preparation into intelligent risk management.</li>
<li>Research – Sonnet 4.5 is better at handling tools and context, and can deliver ready-to-go office files that turn expert analysis into final deliverables and actionable insights.</li>
</ul><p><strong>Sonnet 4.5 features in the Amazon Bedrock API</strong><br />Here are some highlights of Sonnet 4.5 in the Amazon Bedrock API:</p><p><strong>Smart Context Window Management</strong> – The new API introduces intelligent handling when AI models reach their maximum capacity. Instead of returning errors when conversations get too long, Claude Sonnet 4.5 will now generate responses up to the available limit and clearly indicate why it stopped. This eliminates frustrating interruptions and allows users to maximize their available context window.</p><p><strong>Tool Use Clearing for Efficiency</strong> – Claude Sonnet 4.5 enables automatic cleanup of tool interaction history during long conversations. When conversations involve multiple tool calls, the system can automatically remove older tool results while preserving recent ones. This keeps conversations efficient and prevents unnecessary token consumption, reducing costs while maintaining conversation quality.</p><p><strong>Cross-Conversation Memory</strong> – A new memory capability enables Sonnet 4.5 to remember information across different conversations through the use of a local memory file. Users can explicitly ask the model to remember preferences, context, or important information that persists beyond a single chat session. This creates more personalized and contextually aware interactions while keeping the information safe within the local file.</p><p>With these new capabilities for managing context, developers can build AI agents capable of handling long-running tasks at higher intelligence without hitting context limits or losing critical information as frequently.</p><p><strong>Getting started<br /></strong> To begin working with Claude Sonnet 4.5, you can access it through Amazon Bedrock using the correct model ID. 
A good practice is to use the <a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Bedrock Converse API</a> to write code once and seamlessly switch between different models, making it easier to experiment with Sonnet 4.5 or any of the other models available in Amazon Bedrock.</p><p>Let’s see this in action with a simple example. I’m going to use the Amazon Bedrock Converse API to send a prompt to Sonnet 4.5. I start by importing the modules I’m going to use. For this short example, I only need the <a href="https://aws.amazon.com/sdk-for-python/?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">AWS SDK for Python (Boto3)</a> so I can create a <code>BedrockRuntimeClient</code>. I’m also importing the rich package so I can format my output nicely later on.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/import-modules.png"><img class="aligncenter size-full wp-image-99447" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/import-modules.png" alt="" width="322" height="76" /></a></p><p>Following best practices, I create a boto3 session and then create an Amazon Bedrock client from it, instead of creating one directly. This gives you explicit control over configuration, improves thread safety, and makes your code more predictable and testable compared to relying on the default session.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/creating-bedrock-client.png"><img class="aligncenter size-full wp-image-99448" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/creating-bedrock-client.png" alt="" width="668" height="115" /></a></p><p>I want to give the model something with a bit of complexity instead of asking a simple question to demonstrate the power of Sonnet 4.5. 
So I’m going to give the model the current state of an imaginary legacy monolithic application written in Java with a single database and ask for a digital transformation plan that includes a migration strategy, a risk assessment, an estimated timeline with key milestones, and specific AWS service recommendations.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/full-prompt-2.png"><img class="aligncenter size-full wp-image-99451" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/full-prompt-2.png" alt="" width="1115" height="499" /></a></p><p>Because the prompt is quite long, I put it in a local text file and load it in code. I then set up the Amazon Bedrock Converse payload, setting the role to “user” to indicate that this is a message from the user of the application, and add the prompt to the content.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/converse-request-payload.png"><img class="aligncenter size-full wp-image-99452" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/converse-request-payload.png" alt="" width="399" height="207" /></a></p><p>This is where the magic happens! We put it all together and call Claude Sonnet 4.5 using its model ID. Well, kind of. You can only access Sonnet 4.5 through an inference profile. 
An inference profile defines which AWS Regions will process your model requests and helps manage throughput and performance.</p><p>For this demo, I’ll be using one of Amazon Bedrock’s system-defined cross-Region inference profiles, which automatically routes requests across multiple Regions for optimal performance.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/29/sonnet-4.5-inference-profile-marked.png"><img class="aligncenter size-full wp-image-99523" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/29/sonnet-4.5-inference-profile-marked.png" alt="" width="1250" height="252" /></a></p><p>Now I just need to print the results to the screen. This is where I use the rich package I imported earlier to get nicely formatted output, as I’m expecting a long response for this one. I also save the output to a file so I have it handy to share with my teams.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/printing-results.png"><img class="aligncenter size-full wp-image-99455" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/printing-results.png" alt="" width="676" height="158" /></a></p><p>Ok, let’s check the results! As expected, Sonnet 4.5 worked through my requirements and provided extensive, deep guidance for my digital transformation plan that I could start putting into practice. It included an executive summary, a step-by-step migration strategy split into phases with time estimates, and even some code samples to seed the development process and start breaking things down into microservices. It also provided the business case for introducing each technology and recommended the right AWS services for each scenario. 
Here are some highlights from the report.</p><p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/output-highlights.png"><img class="aligncenter size-full wp-image-99456" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/09/23/output-highlights.png" alt="" width="1920" height="1080" /></a></p><p>Claude Sonnet 4.5 is able to maintain consistency while delivering creative solutions, making it an ideal choice for businesses seeking to use AI for complex problem-solving and development tasks. Its enhanced capabilities in following directions and using tools effectively translate into more reliable and innovative solutions across various business contexts.</p><p><strong>Things to know<br /></strong> Claude Sonnet 4.5 represents a significant step forward in agent capabilities, particularly excelling in areas where consistent performance and creative problem-solving are essential. Its enhanced abilities in tool handling, memory management, and context processing make it particularly valuable across key industries such as finance, research, and cybersecurity. Whether handling complex development lifecycles, executing long-running tasks, or tackling business-critical workflows, Claude Sonnet 4.5 combines technical excellence with practical business value.</p><p>Claude Sonnet 4.5 is available today. For <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">detailed information about its availability</a>, please visit the documentation.</p><p>To learn more about Amazon Bedrock, explore our self-paced <a href="https://catalog.us-east-1.prod.workshops.aws/workshops/a4bdb007-5600-4368-81c5-ff5b4154f518/en-US?trk=ac97e39c-d115-4d4a-b3fe-c695e0c9a7ee&amp;sc_channel=el">Amazon Bedrock Workshop</a> and discover how to use available models and their capabilities in your applications.</p>]]></summary>
    <link href="https://aws.amazon.com/blogs/aws/introducing-claude-sonnet-4-5-in-amazon-bedrock-anthropics-most-intelligent-model-best-for-coding-and-complex-agents/"/>
    <updated>2025-09-29T17:56:00+02:00</updated>
  </entry>
</feed>
