AWS has released Strands Agents, an open source SDK for building and deploying AI agents with minimal code. The framework adopts a model-driven approach, allowing developers to use prompts and tools directly, without complex orchestration logic.
“Strands scales from simple to complex agent use cases, and from local development to deployment in production,” the company said in the press release. Teams across AWS, including Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer, are already using Strands in production.
Strands is positioned as a lightweight alternative to existing frameworks that require elaborate workflow definitions. “Compared with frameworks that require developers to define complex workflows for their agents, Strands simplifies agent development by embracing the capabilities of state-of-the-art models to plan, chain thoughts, call tools, and reflect,” the announcement noted.
Strands Agents lets developers define an agent with just three components in code: a model, a set of tools, and a prompt.
It supports a wide range of models, including those from Amazon Bedrock, Anthropic, Meta (via Llama API), Ollama, and others through LiteLLM. Tools can be custom Python functions or pre-built utilities that interact with files, APIs, and AWS services.
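For illustration, a minimal agent built from those three components might look like the sketch below. It assumes the SDK exposes an `Agent` class and a `@tool` decorator for turning plain Python functions into tools, as described in the project's quick-start material; exact import paths and parameter names should be verified against the GitHub repository.

```python
from strands import Agent, tool

# A custom tool is just an annotated Python function; the docstring and
# type hints are surfaced to the model as the tool's specification.
@tool
def word_count(text: str) -> int:
    """Return the number of words in the given text."""
    return len(text.split())

# The agent is defined by a model (here the SDK's default), its tools,
# and a prompt describing its role.
agent = Agent(
    tools=[word_count],
    system_prompt="You are a concise writing assistant.",
)

# Invoking the agent kicks off the model/tool loop until the task is done.
agent("How many words are in this sentence?")
```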
The agent interacts with the model and tools in a loop until it completes the assigned task. “The Strands agentic loop takes full advantage of how powerful LLMs have become and how well they can natively reason, plan, and select tools,” the announcement said.
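Conceptually, that loop amounts to repeated model calls that either produce a final answer or request a tool invocation whose result is fed back into the conversation. The outline below is purely illustrative of that pattern, not Strands internals; the model and reply types are stand-ins.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModelReply:
    text: str
    tool_call: Optional[tuple[str, dict]] = None  # (tool name, arguments)

def agentic_loop(model: Callable[[list], ModelReply],
                 tools: dict[str, Callable],
                 prompt: str,
                 max_steps: int = 10) -> str:
    """Illustrative model-driven loop: the model plans, optionally selects a
    tool, sees the result, and repeats until it returns a final answer."""
    messages = [("user", prompt)]
    for _ in range(max_steps):
        reply = model(messages)
        if reply.tool_call is None:      # no tool requested: the task is complete
            return reply.text
        name, args = reply.tool_call
        result = tools[name](**args)     # execute the tool the model selected
        messages.append(("tool", f"{name} -> {result}"))
    return "stopped: step limit reached"
```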
Strands also includes advanced tools to handle complex use cases. These include a retrieve tool for semantic search, a thinking tool for deep, multi-step analysis, and multi-agent tools for workflows and collaboration.
“By modelling sub-agents and multi-agent collaboration as tools, the model-driven approach enables the model to reason about if and when a task requires a defined workflow, graph, or swarm of sub-agents,” the company said.
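As a rough illustration of that idea, the pre-built utilities are wired into an agent the same way as custom tools, so the model can decide at run time whether a task calls for retrieval, deeper reasoning, or a swarm of sub-agents. The tool names below (`retrieve`, `think`, `swarm`) follow the announcement's description and should be checked against the strands-agents-tools package before use.

```python
from strands import Agent
# Pre-built tools are shipped in a companion package; the names here follow
# the announcement and may differ slightly in the actual library.
from strands_tools import retrieve, think, swarm

# The model chooses at run time whether to search a knowledge base (retrieve),
# reason through multiple passes (think), or delegate to sub-agents (swarm).
agent = Agent(
    tools=[retrieve, think, swarm],
    system_prompt="Answer questions about our internal design documents.",
)

agent("Summarize the open questions in the latest architecture review.")
```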
Strands Agents is available on GitHub, and companies including Accenture, PwC, Meta, and Anthropic are already contributing. “Anthropic has already contributed support in Strands for using models through the Anthropic API, and Meta contributed support for Llama models through Llama API,” the announcement noted.
The initiative stems from the Amazon Q Developer team’s own challenges with early agent frameworks. “Even though LLMs were getting dramatically better, those improvements didn’t mean we could build and iterate on agents any faster,” said Clare Liguori, senior principal software engineer for AWS Agentic AI, adding that what once took months now takes “days and weeks” with Strands.