Welcome to the documentation for our AI platform — your all-in-one hub for deploying, testing, and managing Model Context Protocol (MCP) servers. This platform is built for developers, teams, and enterprises looking to streamline how they define, interact with, and govern AI-powered tools and workflows.
The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you’re building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.
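Under the hood, MCP messages follow JSON-RPC 2.0. As a minimal illustration, the sketch below builds a `tools/call` request (a method defined in the MCP specification); the tool name `search_docs` and its arguments are hypothetical placeholders, not part of any real server.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 "tools/call" request as used by MCP.

    The tool name and arguments here are illustrative; a real client
    would first list the server's tools and use one of those names.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example: ask a server to run a "search_docs" tool.
request = make_tool_call(1, "search_docs", {"query": "observability"})
print(json.dumps(request, indent=2))
```

In practice an MCP client library handles this framing for you; the point is only that the wire format is plain JSON-RPC, which is what makes the protocol easy to implement across languages.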
Test your MCP servers with different large language models (LLMs) in our interactive playground. Compare outputs, tweak prompts, and evaluate behavior across model providers in a single place.
Monitor and understand your MCP servers using our open-source observability library. All logs and usage data are centralized on the platform, offering real-time insight and historical analysis: think Sentry, but purpose-built for MCP servers.