I haven't used Jido for anything yet, but it's one of those projects I check in on once a month or so. BEAM does seem like a perfect fit for an agent framework, but the ecosystem seeming limited has held me back from going too far down that path. Excited to see 2.0!
Just a heads up, some of your code samples seem to be having an issue with entity escaping.
I really like the focus on “data and pure functions” from the beginning of the post.
I’ve read a lot on HN about how the BEAM execution model is perfect for AI. I think a crucial part that’s usually missing in LLM-focused libraries is the robustness story in the face of node failures, rolling deployments, etc. There’s a misconception about Elixir (demonstrated in one of the claw comments below) that it provides location transparency - it ain’t so. You can have the most robust OTP node, but if you commit to an agent inside a long-running process, it will go down when the node does.
Having clear, pure agent state between every API-call step goes a long way towards solving that - put it in Mnesia or Redis and pick up on another node when the original is decommissioned. Checkpointing is the solution.
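A minimal sketch of that checkpointing pattern (the module and table names here are illustrative, not Jido APIs): persist the pure agent state after each step so any node can resume it. ETS is used for brevity; a real deployment would swap in Mnesia or Redis as described above.

```elixir
defmodule Checkpoint do
  # Illustrative checkpointing: persist pure agent state after each step
  # so another node can resume when the original goes down.
  # Backed by ETS here for brevity; use Mnesia or Redis for durability.

  def init_store do
    :ets.new(:agent_checkpoints, [:named_table, :public, :set])
  end

  def save(agent_id, state) do
    :ets.insert(:agent_checkpoints, {agent_id, state})
    :ok
  end

  def resume(agent_id) do
    case :ets.lookup(:agent_checkpoints, agent_id) do
      [{^agent_id, state}] -> {:ok, state}
      [] -> :not_found
    end
  end
end
```

Because the state is plain data between steps, the process holding it is disposable: the agent loop calls `Checkpoint.save/2` after each completed step and `Checkpoint.resume/1` on whichever node picks the agent up next.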
My strongest opinion with Jido is that agents must be architecturally correct WITHOUT LLMs before they can be correct WITH LLMs.
Jido core has zero LLM support for this reason.
There are nearly 40 years of "agent" research in CompSci; LLMs came along and we threw all of it out. I didn't like that, so I spent time researching this history to do my best at considering it with Jido.
That said, I love LLMs - but they belong in the Jido AI package.
Fair enough! My comment is about agentic-focused libraries in general, it’s inaccurate of me to call all such libraries “LLM-focused”
Speaking of inaccuracies, BEAM does provide pretty good location transparency - but resource migration between nodes in particular is not part of the built-in goodies that OTP brings
Nice work shipping this. The BEAM's fault tolerance model makes a lot of sense for agent workloads, been thinking about similar tradeoffs on the orchestration side. Curious what the failure recovery looks like when an agent mid-run hits a bad LLM response vs. a process crash.
Love this! The timing couldn't be more perfect. I had to write my agent framework with a mix of gen servers and Oban. It's honestly a pain to deal with. This looks like it will really remove a lot of pain for development. Thank you so much!
I just LLM-built an A2A package which is a GenServer-like abstraction. However, I missed that there was already another A2A implementation for Elixir. Anyway, I decided to leave it up because the package semantics were different enough. Here it is if anyone is interested: https://github.com/actioncard/a2a-elixir
I've been following this project for several months now and Elixir/BEAM is absolutely perfect for running agents. BEAM processes are so incredibly lightweight; IYKYK. Theoretically you could run 1000s of agents on a single server. I'm looking forward to seeing what people who understand this build.
The core of Jido will run on a Raspberry Pi - we've even had people look at running agents on the BEAM deployed to bare metal (embedded).
It'd be cool to see a screenshot of what 'observer' shows as the process tree with a few agents active.
Edit: for those not familiar with the BEAM ecosystem, observer shows all the running Erlang 'processes' (internal to the VM). Here are some example screenshots from one of the first Google hits I found: https://fly.io/docs/elixir/advanced-guides/connect-observer-...
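For anyone who wants to poke at this themselves: observer ships with Erlang/OTP, and you can inspect the same process information programmatically when a GUI isn't available. A small sketch:

```elixir
# In an iex session, the GUI observer (requires a desktop/wx-enabled
# Erlang build) is started with:
#   :observer.start()
# It shows the supervision tree, per-process memory, message queues, etc.

# A headless way to peek at some of the same information:
processes = Process.list()
IO.puts("live BEAM processes: #{length(processes)}")

# Inspect one process's vitals, the way observer's process tab does:
info = Process.info(self(), [:memory, :message_queue_len])
IO.inspect(info, label: "self")
```

Even an idle node runs dozens of system processes, which is why the observer tree is interesting: a few running agents show up as just a handful of extra rows.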
Very eager to read through your code! I read the first version and incorporated several of its ideas into our own internal elixir agent framework. (We make use of your ReqLLM package, thanks much for that!)
The point mikehostetler makes – 'agents must be architecturally correct WITHOUT LLMs before they can be correct WITH LLMs' – is underappreciated in most production deployments. We see this constantly: the failure mode isn't model quality, it's undefined operational boundaries and missing human-in-the-loop checkpoints for edge cases. Supervision trees as a first-class design primitive rather than an afterthought seems exactly right. What's your experience with teams that skip the architectural correctness step and go straight to LLM integration?
I don't have a good answer - I've seen a lot of agent deployments but the space is evolving quickly and it's difficult to meaningfully discuss patterns.
This will be solved - and I hope that Jido can be a meaningful participant in that wider conversation.
Although... the agent orchestration is really the easy part. It is just a loop. You can solve this in many different ways and yes some languages are more suitable for this than others. But still - very straightforward.
The hard part is making sure these agents can do useful things, which requires connecting them to tools. Just adding bash might seem like checking that box, but the reality is more complex when it comes to authentication (and not only that). It is even more problematic when you need to run this in some sort of distributed way, where you need to inject context midway, or abort or pause, all while respecting constraints like timing issues for minted URLs and tokens, etc. By the way, adding messages to the context while the LLM is doing some other job (which you might want to do for all kinds of reasons) does not always work, because the system is not deterministic. So you need to solve this somehow.
Even harder is coming up with useful ways to apply the technology. The technical side of things can be solved with good engineering, but most of the applications of these agents are around pretty basic use-cases and the adoption has sort of stagnated. 99% of these agents are question/answer bots, task/calendar organisers, or something to do with spam, and the most useful one is coding assistants.
And so frankly I think the framework is irrelevant at this point unless one figures out how to do useful things.
Not sure if this will surprise you - but I 100% agree with this. I went through the journey that many others did - implementing the loop, then trying to make it useful, realizing the limitations, etc.
I came to similar conclusions - what does valuable agentic software look like? It's not OpenClaw (yet)
The game theory then, in my opinion, is to focus on the knowable frontier - implement tools we can trust - and continue working and sharing that work.
I am holding onto the optimistic case - valuable use cases beyond coding agents will emerge.
Elixir has a LangChain implementation by the same name. In my opinion, as a user of both the Python version and the Elixir version, the Elixir one is vastly superior and more reliable too.
This agentic framework can co-exist with LangChain if that's what you're wondering.
I went down this path a bit the other night, curious what OP's answer is. My mental model was that they could be complementary: Jido for agent lifecycle, supervision, state management, etc.; LangChain for the LLM interactions, prompt chains, RAG, etc. Looks like you could do everything in Jido 2.0, but if you like/are familiar with LangChain it seems like they could work well together.
It's an order of magnitude difference in what it can do, mostly because it can verify its own work. It can also use subagents, which helps a lot. Large tasks I usually have done by subagents with the main agent directing. This means the tasks they can take on can be much larger.
I'm in the same boat as waynesonfire and I'm afraid this doesn't answer the question sufficiently. What do you do with an agent? What's a concrete example vs. typing in a chat box.
Huh... excellent timing. I am working on a project that currently handles this with a bunch of npm tasks :) (I know), but it works.
Sidian Sidekicks, Obsidian vault reviewer agents.
I think Jido will be perfect for us: it will help us organize and streamline our agent interactions, and make it clearer what is happening and which agent is doing what.
And on top of that, I get an excuse to include Elixir in this project.
Let me guess: in the next 6 months, Elixir and Erlang become fashionable for building AI agents, and then there's another hype cycle of AI usage and marketing of Elixir.
What's old is now rebranded, reheated and new again.
Elixir has always been fashionable to build high performance systems in. In fact, it is more suited for AI applications than any other language or framework because of the BEAM architecture and the flexibility of the language itself. I wish more people gave it a chance. You get insane performance at your fingertips with so much scalability out of the box and your code by default is less error prone compared to dynamic languages.
BEAM offers concrete benefits for agent frameworks: cheap preemptible processes, OTP behaviors like GenServer and Supervisor, ETS or Mnesia for fast state, and clustering tools that make managing thousands of stateful agents and restarts tractable.
The tradeoff is that heavy ML still lives in Python and on GPUs, so run models as external services over gRPC or HTTP, or use Nx with EXLA for smaller on-node work. If you need native speed, use Rustler NIFs or ports, but never block a BEAM scheduler or the node will grind to a halt.
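To make the cheap-process point above concrete, here is an illustrative sketch (not Jido code) of a minimal GenServer "agent": pure state, message-driven updates, with any slow work (like an HTTP call to an external model server) pushed into a Task so the process never ties up a scheduler.

```elixir
defmodule ToyAgent do
  # Minimal GenServer "agent": holds pure state, updated by messages.
  # Slow work (e.g. calling an external model service over HTTP/gRPC)
  # should run in a Task so this process stays responsive.
  use GenServer

  def start_link(id), do: GenServer.start_link(__MODULE__, id)

  def observe(pid, event), do: GenServer.cast(pid, {:observe, event})
  def history(pid), do: GenServer.call(pid, :history)

  @impl true
  def init(id), do: {:ok, %{id: id, history: []}}

  @impl true
  def handle_cast({:observe, event}, state) do
    # Placeholder for offloaded side work; a real agent would
    # Task.async an external call here and handle the reply later.
    Task.start(fn -> :ok end)
    {:noreply, %{state | history: [event | state.history]}}
  end

  @impl true
  def handle_call(:history, _from, state) do
    {:reply, Enum.reverse(state.history), state}
  end
end
```

Each of these costs a few KB of memory, which is why spawning thousands of them on one node (`for i <- 1..1000, do: ToyAgent.start_link(i)`) is unremarkable on the BEAM, and why a Supervisor restarting any one of them is cheap.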
https://github.com/openai/symphony
I'm not very familiar with the space, I follow Elixir goings on more than some of the AI stuff.
It is curious... and refreshing... to see Elixir & the BEAM popping up for these sorts of orchestration type workloads.
https://web.archive.org/web/20260305161030/https://jido.run/
The future is going to be wild
Teaser screenshot is here: https://x.com/mikehostetler/status/2025970863237972319
Agents, when wrapped with an AgentRuntime, are typically a single GenServer process. There are some exceptions if you need a larger topology.
I was curious about the actual BEAM processes though, that you see via the observer application in Erlang/Elixir.
It's use-case specific though - security is a much bigger topic than just "agents in containers"
The point of Jido isn't to solve this directly - it's to give you the tools to solve it for your needs.
Congrats on the release!
I used Claude to learn & refine the patterns, but it couldn’t write this level of OTP code at that time.
As models got better, I used them to find bugs and simplify - but the bones are roughly the same from that original design.
https://github.com/agoodway/goodwizard
There’s a growing community showcase and I have a list of private/commercial references as well depending on your goals
(Probably complementary but wanted to check)
https://hex.pm/packages/req_llm
ReqLLM is baked into the heart of Jido now - we don't support anything else
https://github.com/brainlid/langchain
As LLM APIs evolved, I needed more and built ReqLLM, which is now embedded deeply into Jido.
I am an amateur, can you point me in the correct direction to understand BEAM and use JIDO 2.0 to start building? Please.
Thanks, Jose
https://jido.run/docs/getting-started/new-to-elixir
Thanks for shipping.
Agree on operational boundaries - it took a long time to land where we did with the 2.0 release
Too much to say about this in a comment, but take a look at the "Concepts: Executor" section - it digs into the model here
Actions can enforce an output schema: https://hexdocs.pm/jido_action/schemas-validation.html#outpu...
Agents can as well - but it can be implemented a few different ways.