Laravel just shipped its AI SDK, and it’s one of those releases that feels bigger than a typical feature drop. AI isn’t being treated as an external integration anymore; it’s becoming part of the framework’s core story.
For years, if you wanted to build AI-powered features in Laravel, you had to wire things up yourself: choose a provider, handle responses, manage structure, deal with edge cases. It worked, but it wasn’t native. Now there’s an official, Laravel-style way to interact with LLMs and build agents directly inside your application.
That matters for two reasons. First, it lowers the barrier for developers who want to experiment or ship AI features in real products. Second, it sets a standard for how AI systems should be built in the Laravel ecosystem moving forward.
In this article, we’ll look at what the Laravel AI SDK actually brings to the table, walk through a technical deep dive using a real multi-agent example, and then explore what this shift means for the future of LarAgent.
At its core, the Laravel AI SDK is a native, Laravel-style interface for interacting with LLMs and building AI-powered features inside your application. It supports multiple providers out of the box (OpenAI, Anthropic, Gemini, and others) and abstracts them behind a clean API that feels consistent with the rest of the framework.
But it’s not only about simple text generation.
The SDK includes support for agents, tools, structured output, embeddings, queues, conversation persistence, and even multimodal capabilities like image and audio generation. In other words, it provides the primitives needed to build serious AI features, not just demo chatbots.
And because it’s designed with Laravel conventions in mind, it feels familiar. Configuration is clean. Commands are expressive. Agents are structured. The learning curve is significantly lower than stitching together third-party SDKs manually.
That’s what makes this release important: it aligns AI development with Laravel’s philosophy of productivity and clarity.
To understand what the Laravel AI SDK enables, it’s better to look at a real example instead of just listing features.
In our test, we built a small dual-agent system: one agent extracts structured data from a vacancy PDF, and a second, tool-enabled agent searches for matching candidates based on that data.
This isn’t a toy chatbot example. It’s closer to how production AI systems are built: multi-step, tool-enabled, and structured.
Let’s break down how the SDK handles this.
Laravel provides an Artisan command for generating agents:
```shell
php artisan make:agent DataExtractor --structured
```
This generates an agent class that already implements structured output support and includes placeholders for instructions, tools, and messages.
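The scaffolded class looks roughly like the sketch below. The namespaces and contract names here are our assumptions and may differ slightly from the shipped SDK; the point is the shape of the class you start from:

```php
<?php

namespace App\Ai\Agents;

// Hypothetical contract names; check the SDK's actual namespaces.
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\HasStructuredOutput;

class DataExtractor implements Agent, HasStructuredOutput
{
    // System-level behavior for the agent.
    public function instructions(): string
    {
        return '';
    }

    // Prior messages to seed the conversation with.
    public function messages(): array
    {
        return [];
    }

    // Tools the model is allowed to call.
    public function tools(): array
    {
        return [];
    }
}
```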
Inside the agent class, the most important method is instructions():
```php
public function instructions(): string
{
    return 'Extract structured data from the provided vacancy PDF
        and return it in the specified format.';
}
```
This defines the system-level behavior. Clear and focused instructions reduce ambiguity and improve output reliability.
You can also define the model using attributes on the agent class:
```php
#[Model('gpt-4o-mini')]
```
In our case, we used a smaller model since the task was structured data extraction rather than complex reasoning, which is well within a smaller model’s capabilities.
One of the strongest features of the SDK is first-class structured output.
Instead of receiving free-form text and manually parsing it, you define a schema that the agent must follow:
```php
public function schema(JsonSchema $schema): array
{
    return [
        'company' => $schema->string()->required(),
        'role' => $schema->string()->required(),
        'seniority' => $schema->string()->enum(Seniority::cases())->required(),
        'skills' => $schema->array()->items($schema->string())->required(),
    ];
}
```
This changes how you build AI features.
In production, unpredictable text responses are a liability. Structured output makes the system deterministic and machine-friendly.
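Consuming the result then looks something like the sketch below. The invocation API (`prompt()` on a fresh agent instance) is an assumption on our part; what matters is that the enforced schema lets downstream code rely on the response shape:

```php
// Hypothetical invocation; the SDK's actual entry point may differ.
$result = (new DataExtractor)->prompt(
    'Extract the vacancy details from this PDF.'
);

// Because the schema is enforced, no defensive parsing is needed:
$company = $result['company'];   // always present, always a string
$skills  = $result['skills'];    // always an array of strings
```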
Agents in the Laravel AI SDK can register tools. These tools allow the model to interact with application logic instead of trying to “imagine” answers.
For example:
```php
public function tools(): array
{
    return [
        new WebSearch,
        new SearchCandidates,
    ];
}
```
In our system, the second agent used WebSearch to gather external context and SearchCandidates to query candidates stored in the application’s own database.
This is where the SDK moves beyond simple text generation. Agents can perform actions, query data, and produce enriched results based on real application state.
And this reflects reality: production systems rarely rely on a single prompt. They orchestrate capabilities.
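A tool is just a small class the model can choose to invoke. The shape below is a sketch under our assumptions about the SDK’s tool contract (the method names, and the `Candidate` model, are illustrative, not the SDK’s actual API):

```php
<?php

namespace App\Ai\Tools;

use App\Models\Candidate; // hypothetical Eloquent model

// Hypothetical tool shape; the real contract and method names may differ.
class SearchCandidates
{
    /** Describes the tool so the model knows when to call it. */
    public function description(): string
    {
        return 'Search stored candidates by a skill keyword.';
    }

    /** Executed when the model decides to call the tool. */
    public function handle(string $skill): string
    {
        return Candidate::whereJsonContains('skills', $skill)
            ->limit(10)
            ->get()
            ->toJson();
    }
}
```

The key design point is that the model never touches your data directly; it can only request actions through tools you explicitly register.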
Another important piece is state management.
The SDK allows agents to remember previous messages and persist them in the database. It ships with migrations you can publish, enabling conversation storage with minimal setup.
If your agent needs to continue a workflow or maintain context across interactions, you can return previous messages from the messages() method:
```php
public function messages(): array
{
    return $this->conversationHistory();
}
```
This lets you build multi-turn assistants, resumable workflows, and context-aware agents without implementing your own memory layer.
Under the hood, the SDK supports multiple providers such as OpenAI, Anthropic, and Gemini. Switching providers doesn’t require rewriting your architecture; it’s a configuration-level change.
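In practice that means swapping a config value or an environment variable. The file name and keys below are illustrative assumptions, not the SDK’s exact configuration:

```php
<?php

// config/ai.php (illustrative sketch; actual keys may differ)
return [
    // The provider agents use unless they override it.
    'default' => env('AI_PROVIDER', 'openai'),

    'providers' => [
        'openai' => ['key' => env('OPENAI_API_KEY')],
        'anthropic' => ['key' => env('ANTHROPIC_API_KEY')],
        'gemini' => ['key' => env('GEMINI_API_KEY')],
    ],
];
```

Because credentials and provider selection live in configuration, moving from one vendor to another is a deployment change rather than a code change.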
It also supports more than text generation. You can generate images, audio, transcripts, and embeddings.
That matters because AI systems are increasingly multimodal. Laravel now provides a unified interface for all of it.
There are a few aspects of the SDK that stand out from a technical perspective.
First, multi-provider support means your architecture is not tightly coupled to a single AI vendor. Switching models or providers becomes a configuration decision rather than a rewrite.
Second, structured output and tool integration are first-class citizens. That signals a shift away from “chat-based AI” toward task-oriented, programmatic AI systems.
Third, conversation persistence is built-in. Agents can remember previous interactions and store message history in the database through published migrations. This makes stateful workflows natural rather than bolted on.
Combined, these elements push Laravel AI development toward something more mature, something closer to agentic systems rather than simple API wrappers.
When we began building LarAgent, none of this existed natively in Laravel. We had to build provider abstractions, structured handling, tool execution layers, and conversation persistence ourselves.
The Laravel AI SDK changes that landscape.
With official AI primitives now part of the framework, we can rely on Laravel for the foundational layer and focus LarAgent on higher-level concerns: orchestration, dynamic context management, evaluators, guardrails, trace debugging, monitoring dashboards, and production-oriented workflows.
In other words, instead of maintaining low-level AI plumbing, we can concentrate on building robust agentic systems designed for real-world environments, including regulated industries where reliability and traceability matter.
LarAgent 2.0 will be built fully on top of the Laravel AI SDK. And that’s not a compromise; it’s an upgrade.

The introduction of the Laravel AI SDK signals something bigger than a new feature. It formalizes AI as part of the Laravel ecosystem’s core direction.
As more developers experiment with agents, tools, and structured workflows, new conventions will emerge. New architectural patterns will stabilize. And the ecosystem will likely produce higher-level frameworks, orchestration layers, and production tooling on top of this foundation.
We’re excited to be part of that evolution, both by building on top of the SDK and contributing ideas back to the ecosystem.
The primitives are now official. The next stage is building real systems on top of them.
We also tested the Laravel AI SDK live and built a small dual-agent system to see how it works in practice. If you’d like to see the SDK in action, you can watch the full walkthrough here.

We are a 200+ people agency and provide product design, software development, and creative growth marketing services to companies ranging from fresh startups to established enterprises. Our work has earned us 100+ international awards, partnerships with Laravel, Vue, Meta, and Google, and the title of Georgia’s agency of the year in 2019 and 2021.
