Key Takeaways
- CCN speaks to Michael Sena, Co-Founder of Recall Labs.
- Sena believes Recall Labs’ flagship Agent Rank platform is the answer to finding trust in the rapidly expanding industry of autonomous agents.
- As AI agents become more capable, Sena argues that alignment with human values must be built in from the start.
AI agents, autonomous programs that can decide and act, are no longer the stuff of science fiction.
They’re now building crypto portfolios, executing trades, and writing code, all while multiplying at breakneck speed.
But with hundreds, perhaps thousands, of new agents emerging daily, the risks of AI cannot be ignored, and a central question looms: which ones can we trust?
Michael Sena, co-founder of Recall Labs, believes his company has found the answer.
Speaking to CCN, Sena opened up about why alignment and reputation will be critical as AI moves toward an era of autonomous swarms and potential AGI.
A Ranking System for Agent AI
Sena likened the current AI landscape to the early days of the internet:
“There’s an explosion of models, agents, tools, and workflows, but there’s not yet a good way to discover which are the most effective, high quality, high performance tools for your specific need or use case,” he told CCN.
Recall Labs’ answer is Agent Rank, a competition-driven system to evaluate AI agents.
Sena described it as “much like Google’s early PageRank system,” a reputation framework built through head-to-head contests.
Recall’s “on-chain AI arenas” pit agents against each other in controlled scenarios, with results transparently logged and ranked.
The idea is to go beyond marketing claims and measure real performance.
In crypto trading competitions, for example, “agents log their trades and their reasoning for why they’re making those trades,” Sena explained.
Metrics can be simple, like pure profit and loss, or more nuanced, like the Sharpe ratio, which measures risk-adjusted returns.
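To illustrate why a risk-adjusted metric matters, here is a minimal sketch of a per-period Sharpe ratio calculation. The agent names and return series are invented for the example, and annualization and the risk-free rate are simplified; this is not Recall's actual scoring code.

```python
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0):
    """Per-period Sharpe ratio: mean excess return divided by the
    sample standard deviation of returns (annualization omitted)."""
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Two hypothetical agents with the same total profit and loss...
agent_a = [0.01, 0.012, 0.009, 0.011]    # small, steady gains
agent_b = [0.10, -0.08, 0.09, -0.068]    # large, volatile swings

# ...yet agent_a scores far higher, because it earned the same
# return while taking on much less volatility.
print(sharpe_ratio(agent_a) > sharpe_ratio(agent_b))
```

A pure profit-and-loss leaderboard would rank these two agents as equals; the Sharpe ratio separates skill from luck by penalizing volatility.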
The goal, he said, is “to make the agent economy less of a ‘trust me, bro’ environment and more of a verifiable, auditable system.”
Open to Anyone, Not Just Big Players
One of the most striking aspects of Recall’s competitions is that participation is open.
“More than 70% of agents that competed in our last trading competition were built by people in our community that were non-developers,” Sena noted.
Some learned through short tutorials before going on to beat established winners.
He sees this inclusivity as essential to creating a “go-to repository for finding high-quality agents across a range of skills.”
Alignment, Transparency, and Guardrails
The conversation inevitably turned to the risks of AI agents, particularly in high-stakes industries like finance.
Sena framed the challenge in terms of alignment: ensuring that an AI’s objectives match human values.
In Recall’s model, the community defines the goals and acceptable behaviors for agents, which are then embedded into the evaluation criteria.
Competitions require agents to record their “chain of thought” so observers can see not just what they did, but why.
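A record like the following sketches what such a logged decision might contain. The field names and the `TradeRecord` class are hypothetical illustrations; the article does not describe Recall's actual on-chain schema.

```python
from dataclasses import dataclass
import datetime

@dataclass
class TradeRecord:
    """Hypothetical shape of one logged agent decision: the action
    taken plus the recorded reasoning behind it."""
    agent_id: str
    action: str       # e.g. "BUY" or "SELL"
    asset: str
    size: float
    reasoning: str    # the agent's recorded "chain of thought"
    timestamp: str    # ISO 8601, so entries can be audited in order

record = TradeRecord(
    agent_id="agent-42",
    action="BUY",
    asset="ETH",
    size=1.5,
    reasoning="Momentum signal on the 4h chart; risk capped at 2% of portfolio.",
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
)
```

Pairing each trade with its stated rationale is what lets observers audit not just what an agent did, but why.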
“It’s early detection, it’s monitoring, and ultimately that’s what ensures alignment,” he said.
AGI Is Inevitable
When the conversation turned to artificial general intelligence (AGI), Sena acknowledged both its inevitability and its uncertainty.
“With AGI and the inevitability of some kind of a superintelligence… we will be somewhere close to that,” he said.
Read More: Can We Trust AI Agents in Time? | CCN.com