About FlowPad

AI agents are powerful.
Used in a silo, they’re less reliable.

FlowPad is the open-source observability and collaboration platform for AI engineering teams. It turns every Claude Code and Codex CLI session into a complete, shareable record, with full context tracing. You can then bring a teammate or a trusted expert directly into your session to help you improve your AI agents and workflows.

The Problem: AI Agents Fail Silently

AI agents are getting more capable every month. But capability without control creates a new category of problems. When an agent runs for hours on a complex task, there’s no trace of what it did. When it burns through tokens on a loop, there’s no way to see why. When a skill doesn’t activate, the agent quietly improvises instead of following instructions. And most AI agent users and builders don’t know what good looks like, or how to diagnose complicated, invisible issues.

The skill ecosystem has exploded. Claude alone now has thousands of community-built skills, with tens of thousands more emerging across models. Without a system to organize, test, and enforce these skills, agents fall back on guesswork. Hooks that should trigger don’t. Skills that should activate get ignored. And you only find out after the tokens are spent. Then the engineer whose session it was fixes it alone.

The result: wasted compute, unreliable outputs, and no way to diagnose what went wrong. And every engineer figures it out alone, in a silo.

The Solution: Skill-Powered Workflows. Human Experts in the Loop.

FlowPad’s solution is two things working together. Skill-powered workflows that turn unstructured prompting into traced, deterministic steps. And human experts you can loop in whenever the trace surfaces something worth a second look. Every execution produces a complete record of what happened, what tokens were used, and what the agent decided at each point.

The structure comes from observation and human expert insight. Every step the agent takes is captured as a discrete, named event: the skill it activated, the prompt it sent, the tool it called, the output it received. What was a free-form improvisation becomes a sequence you can read top to bottom. An expert reads it, spots what’s working and what isn’t, and shapes the next run. Run it a hundred times and you can see exactly where it drifts. That’s how an unstructured agent becomes a reliable system, modeling anything from a simple data pipeline to a multi-step analysis that runs for hours.
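To make that concrete, here is a minimal sketch of what one traced step might record. This is not FlowPad’s actual schema; the class and field names are hypothetical, but they follow the description above: skill activated, prompt sent, tool called, output received.

```python
from dataclasses import dataclass

# Hypothetical shape of a single traced step; FlowPad's real schema may differ.
@dataclass
class TraceEvent:
    step: int           # position in the session, read top to bottom
    skill: str | None   # skill that activated, if any
    prompt: str         # prompt the agent sent
    tool: str | None    # tool it called, if any
    output: str         # output it received
    tokens_in: int = 0  # per-step token accounting
    tokens_out: int = 0

# A session is an ordered list of these events, so "reading the run
# top to bottom" is literally iterating the list.
session: list[TraceEvent] = [
    TraceEvent(1, "code-review", "Review the diff in src/auth.py", "read_file",
               "def login(...): ...", tokens_in=812, tokens_out=240),
    TraceEvent(2, None, "Summarize the issues found", None,
               "Two issues: missing input validation, no rate limit.",
               tokens_in=310, tokens_out=95),
]
```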

Skills and hooks are managed together. Skills define what the agent knows how to do. Hooks enforce when skills activate. FlowPad keeps both organized, versioned, and testable, so when you have hundreds or thousands of skills, each one fires exactly when it should. Across every session. Across the whole team.

Workflow Automation

Build a workflow once, run it any number of times. Any workflow you create in FlowPad becomes an automation: repeatable, schedulable, and consistent across every execution.

Debugging & Observability

See every step an agent takes. Trace every action, every token, every decision. When something goes wrong, FlowPad shows you exactly where and why. No guessing.

Expert Collaboration

Loop a teammate or trusted expert into your session. They read the trace, spot what’s working, and shape the workflow. One engineer’s fix becomes the team’s default.

Skill Organization at Scale

The number of skills available for AI agents is growing exponentially. Claude’s ecosystem alone has thousands of community-built skills, with tens of thousands more across the broader AI tooling landscape. Each skill is a specialized capability: a debugging workflow, a code review checklist, a data analysis pipeline, a deployment script.

The challenge isn’t building skills. It’s managing them. Which skills should activate for a given task? How do you test that a skill fires when it should? How do you prevent conflicts between skills? How do you version and update skills across a team?

FlowPad treats skills and hooks as first-class objects. Skills define capabilities. Hooks define activation conditions. Both are organized, versioned, and testable within the same platform. When you update a skill, you can trace its execution across workflows to verify it behaves correctly before deploying to your team.
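The sketch below illustrates that split in plain Python. It is not FlowPad’s API; the function names and the keyword-based activation rule are invented for illustration. The point is the separation: the skill is the capability, the hook is the activation condition, and each can be versioned and tested on its own.

```python
# Hypothetical illustration of the skill/hook split described above.

# A skill defines a capability.
def security_review_skill(diff: str) -> str:
    return f"Checked {len(diff.splitlines())} changed lines for security issues."

# A hook defines when that skill should activate.
def security_review_hook(task: str) -> bool:
    keywords = ("auth", "password", "token", "crypto")
    return any(k in task.lower() for k in keywords)

# Because the hook is a plain, versioned object, activation is testable
# in isolation, before the skill ever runs inside a session:
assert security_review_hook("Refactor the auth middleware") is True
assert security_review_hook("Rename a CSS class") is False
```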

From Experiment to System

Every workflow in FlowPad produces a complete execution trace. Every step records what it received, what it did, and what it returned. Token usage is tracked per-step, so you know exactly where costs come from. When an agent makes the wrong decision, you can replay the trace, identify the failure point, and fix it once.
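As a rough sketch of the kind of accounting that per-step tracking enables (the step names and token counts here are invented, and this is not FlowPad’s trace format):

```python
# Invented per-step trace data for illustration only.
trace = [
    {"step": "activate-skill:data-clean", "tokens": 1_200, "ok": True},
    {"step": "tool:run_query",            "tokens":   450, "ok": True},
    {"step": "retry-loop:parse_output",   "tokens": 9_800, "ok": False},
    {"step": "final-answer",              "tokens":   600, "ok": True},
]

total = sum(s["tokens"] for s in trace)
worst = max(trace, key=lambda s: s["tokens"])
first_failure = next((s for s in trace if not s["ok"]), None)

print(f"total tokens: {total}")                        # where the cost comes from
print(f"most expensive step: {worst['step']}")         # the loop that burned tokens
print(f"first failing step: {first_failure['step']}")  # where to fix it once
```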

Run it a hundred times, get the same result a hundred times. That’s the difference between an AI experiment and an AI system.

And the trace doesn’t stay yours alone. A teammate can pick it up. A trusted expert can step in, see what happened, and keep the work moving.

Free, Open Source, and Local

FlowPad is free, open source, and local for individuals and teams. You get unlimited sessions, full trace and debug tooling, and complete collaboration. No trial periods, no feature gates.

Paid plans add cloud-hosted workspaces, centralized logging, SSO, audit logs, and dedicated support, for organizations that need infrastructure and compliance at scale.

The Team

The team behind the platform.

Eran Shlomo
Cofounder & CEO

Gadi Tunes
Cofounder & VP R&D

Nir Levy
COO

Tzahi Mazuz
DevOps & Full Stack Dev

Ami Levy
GTM & Strategy