Learning to Learn: A LATS-Based Framework for Self-Aware AI Pipelines

📖 Summary

In this post, we introduce the LATSAgent, an implementation of LATS (Language Agent Tree Search Unifies Reasoning) within the co_ai framework. Unlike prior agents that followed a single reasoning chain, this agent explores multiple reasoning paths in parallel, evaluates them using multidimensional scoring, and learns symbolic refinements over time. This is our most complete integration yet of search, simulation, scoring, and symbolic tuning, bringing together all of our previous work on sharpening, pipeline reflection, and symbolic rules into a unified, intelligent reasoning loop.
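At its core, that loop is a best-first search over a tree of reasoning steps. The sketch below is a minimal illustration of the idea, not the actual co_ai API: the `Node` structure and greedy descent are assumptions standing in for the real agent's expansion and scoring machinery.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One reasoning step in the search tree (illustrative, not the co_ai API)."""
    thought: str
    score: float = 0.0
    children: list = field(default_factory=list)

def best_leaf(root: Node) -> Node:
    """Greedily descend to the highest-scoring leaf: the node to expand next."""
    node = root
    while node.children:
        node = max(node.children, key=lambda c: c.score)
    return node

# A tiny tree: two candidate reasoning paths, one of which has been refined.
root = Node("goal")
root.children = [Node("path A", score=0.6), Node("path B", score=0.8)]
root.children[1].children = [Node("path B -> refine", score=0.9)]

print(best_leaf(root).thought)  # the most promising frontier node
```

In the real agent, the score comes from the multidimensional evaluator and losing branches are kept around so symbolic refinements can learn from the contrast between them.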

Dimensions of Thought: A Smarter Way to Evaluate AI

📖 Summary

This post introduces a multidimensional reward modeling pipeline built on top of the co_ai framework. It covers:

  • ✅ Structured Evaluation Setup: how to define custom evaluation dimensions using YAML or database-backed rubrics.

  • 🧠 Automated Scoring with LLMs: using the ScoreEvaluator to produce structured, rationale-backed scores for each dimension.

  • 🧮 Embedding-Based Hypothesis Indexing: efficiently embedding hypotheses and comparing them by similarity for contrastive learning.

  • 🔄 Contrast Pair Generation: creating training pairs where one hypothesis outperforms another on a given dimension.
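The last step above can be sketched in a few lines. This is a toy illustration, assuming scored hypotheses are plain dicts rather than the framework's real objects, with a hypothetical `margin` threshold:

```python
# Minimal sketch of contrast-pair generation: given per-dimension scores,
# emit (winner, loser) pairs where one hypothesis beats another on a dimension.
hypotheses = [
    {"text": "H1", "scores": {"correctness": 0.9, "clarity": 0.4}},
    {"text": "H2", "scores": {"correctness": 0.6, "clarity": 0.8}},
]

def contrast_pairs(hyps, dimension, margin=0.1):
    """Return (better, worse) pairs that differ by at least `margin` on `dimension`."""
    pairs = []
    for a in hyps:
        for b in hyps:
            if a["scores"][dimension] - b["scores"][dimension] >= margin:
                pairs.append((a["text"], b["text"]))
    return pairs

print(contrast_pairs(hypotheses, "correctness"))  # H1 beats H2 on correctness
print(contrast_pairs(hypotheses, "clarity"))      # H2 beats H1 on clarity
```

Note that the same two hypotheses can form a pair in both directions, one per dimension, which is exactly what makes per-dimension contrast richer than a single overall ranking.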

Programming Intelligence: Using Symbolic Rules to Steer and Evolve AI

🧪 Summary

What if AI systems could learn how to improve themselves, not just at the level of weights or prompts, but at the level of strategy itself? In this post, we show how to build such a system, powered by symbolic rules and reflection.

The paper Symbolic Agents: Symbolic Learning Enables Self-Evolving Agents introduces a framework where symbolic rules guide, evaluate, and evolve agent behavior.
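As a concrete illustration, a symbolic rule can be as small as a condition over pipeline context plus a suggested change. The schema below is a hypothetical sketch, not the paper's or co_ai's actual rule format:

```python
# Hypothetical symbolic rule: when a goal is tagged as mathematical, steer the
# pipeline toward code-based reasoning. The rule operates on context, not weights.
RULES = [
    {
        "name": "prefer_code_for_math",
        "condition": lambda ctx: "math" in ctx.get("goal_tags", []),
        "action": {"strategy": "code"},
    },
]

def apply_rules(ctx, rules=RULES):
    """Update the context with the action of every rule whose condition matches."""
    for rule in rules:
        if rule["condition"](ctx):
            ctx.update(rule["action"])
    return ctx

ctx = apply_rules({"goal_tags": ["math"], "strategy": "cot"})
print(ctx["strategy"])  # strategy switched by the matching rule
```

Because rules live at this symbolic level, reflection can evolve them (add, retire, or reweight) without touching model weights or individual prompts.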

Adaptive Reasoning with ARM: Teaching AI the Right Way to Think

Summary

Chain-of-thought is powerful, but which chain? Short explanations work for easy tasks, long reflections help on hard ones, and code sometimes beats them both. What if your model could adaptively pick the best strategy, per task, and improve as it learns?

The Adaptive Reasoning Model (ARM) is a framework for teaching language models how to choose the right reasoning format (direct answers, chain-of-thought, or code) depending on the task. It works by evaluating responses, scoring them on rarity, conciseness, and difficulty alignment, and then updating model behavior over time.
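A toy version of that scoring step, combining rarity, conciseness, and difficulty alignment into one reward. The weights and formulas here are assumptions for illustration, not the paper's exact definitions:

```python
def arm_reward(format_freq: float, length: int, max_length: int,
               difficulty: float, format_cost: float) -> float:
    """Toy ARM-style reward: rare formats score higher, shorter answers score
    higher, and the format's 'cost' should match the task's difficulty."""
    rarity = 1.0 - format_freq                        # reward under-used formats
    conciseness = 1.0 - min(length / max_length, 1.0)
    alignment = 1.0 - abs(difficulty - format_cost)   # cheap formats for easy tasks
    return (rarity + conciseness + alignment) / 3

# A short direct answer on an easy task beats a long chain on the same task.
easy_direct = arm_reward(format_freq=0.7, length=20, max_length=500,
                         difficulty=0.2, format_cost=0.1)
easy_long = arm_reward(format_freq=0.2, length=450, max_length=500,
                       difficulty=0.2, format_cost=0.9)
print(easy_direct > easy_long)  # True
```

The alignment term is what makes the model adaptive: on a hard task the same formula flips, favoring the longer, costlier reasoning format.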

A Novel Approach to Autonomous Research: Implementing NOVELSEEK with Modular AI Agents

Summary

AI research tools today are often narrow: one generates summaries, another ranks models, a third suggests ideas. But real scientific discovery isn’t a single step—it’s a pipeline. It’s iterative, structured, and full of feedback loops.

In this post, I show how to build a modular AI system that mirrors this full research lifecycle. From initial idea generation to method planning, each phase is handled by a specialized agent working in concert.

The Self-Aware Pipeline: Empowering AI to Choose Its Own Path to the Goal

🔧 Summary

Modern AI systems require more than just raw processing power: they need contextual awareness, strategic foresight, and adaptive learning capabilities. In this post, we walk through how we implemented a self-aware pipeline system inspired by the Devil’s Advocate paper.

Unlike brittle, static workflows, this architecture empowers agents to reflect on their own steps, predict failure modes, and adapt their strategies in real time.


🧠 Grounding in Research

Devil’s Advocate (ReReST)

The ReReST paper, Devil's Advocate: Anticipatory Reflection for LLM Agents, introduces a self-training framework for LLM agents. The core idea is to have a “reflector” agent anticipate failures and revise the original plan before executing it, a powerful method for reducing hallucinations and improving sample quality. Our implementation draws heavily on these ideas to enable dynamic planning and feedback loops within the pipeline.
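In pseudocode, the anticipate-then-revise loop looks roughly like this. The `reflector` and `actor` callables are stand-ins for the paper's agents, and the plan-patching logic is a deliberately simplified assumption:

```python
def rerest_step(plan, reflector, actor, max_revisions=3):
    """Sketch of ReReST-style anticipatory reflection: the reflector predicts
    failure modes and the plan is revised *before* any step is executed."""
    for _ in range(max_revisions):
        failure = reflector(plan)  # anticipated failure mode, or None
        if failure is None:
            break
        plan = plan + [f"guard against: {failure}"]
    return actor(plan)

# Stub reflector/actor: one anticipated failure, then the plan passes review.
reflector = lambda plan: "stale data" if not any("guard" in s for s in plan) else None
actor = lambda plan: len(plan)  # pretend execution; returns final plan length

print(rerest_step(["fetch", "summarize"], reflector, actor))  # plan grew to 3 steps
```

The key property is that the revision loop runs entirely before execution, which is what distinguishes anticipatory reflection from post-hoc retry strategies.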

General Reasoner: The Smarter Local Agent

🔧 Summary

The General Reasoner paper shows how we can train LLMs to reason across domains using diverse data and a generative verifier. In this post, I walk through our open-source implementation showing how we built a modular reasoning agent capable of generating multiple hypotheses, evaluating them with an LLM-based judge, and selecting the best answer.


🧠 What We Built

We built a GeneralReasonerAgent that:

  • Dynamically generates multiple hypotheses using different reasoning strategies (e.g., cot, debate, verify_then_answer)
  • Evaluates each pair of hypotheses using either a local LLM judge or our custom MR.Q evaluator
  • Classifies the winning hypothesis using rubric dimensions
  • Logs structured results to a PostgreSQL-backed system
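The pairwise-evaluation step above can be sketched as a round-robin tournament. The `judge` here is a stub standing in for the local LLM judge or MR.Q evaluator, and the win-counting scheme is an illustrative assumption:

```python
from itertools import combinations

def pick_best(hypotheses, judge):
    """Round-robin pairwise evaluation: the judge returns the winner of each
    pair, and the hypothesis with the most wins is selected."""
    wins = {h: 0 for h in hypotheses}
    for a, b in combinations(hypotheses, 2):
        wins[judge(a, b)] += 1
    return max(wins, key=wins.get)

# Stub judge preferring longer hypotheses, standing in for the LLM judge.
judge = lambda a, b: a if len(a) >= len(b) else b

hyps = ["short answer", "a somewhat longer hypothesis", "mid answer"]
print(pick_best(hyps, judge))  # the hypothesis that won the most pairings
```

Pairwise judging is easier for an LLM than absolute scoring, at the cost of quadratic comparisons, which is why the agent keeps hypothesis counts small.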

All of this was integrated with our existing co_ai framework, which includes:

Building a Self-Improving Chain-of-Thought Agent: Local LLMs Meet the CoT Encyclopedia

Most AI systems generate answers. Ours examines how they think. This isn’t just prompt engineering; this is structured reasoning at scale.

🔧 Summary

Large Language Models are transforming every field, yet their internal reasoning remains a formidable black box. We can get brilliant outputs, but without understanding how those conclusions were reached, we’re left guessing how to improve, debug, or even trust them. This opacity limits our ability to build truly reliable and self-improving AI systems.

Self-Improving Agents: Applying the Sharpening Framework to Local LLMs

This is the second post in a 100-part series, where we take breakthrough AI papers and turn them into working code, building the next generation of AI, one idea at a time.

🔧 Summary

In my previous post, I introduced co_ai, a modular implementation of the AI co-scientist concept, inspired by DeepMind’s recent paper Towards an AI Co-Scientist.

But now, we’re going deeper.

This isn’t just about running prompts through an agent system; it’s about building something radically different: