Building MCP in public: bringing conversational analytics to Backroom
How conversational analytics with the Model Context Protocol (MCP) is transforming Backroom into a privacy-first AI assistant for e-commerce shop owners.
Following our first post, where we introduced the Model Context Protocol and why it matters, this is the second installment in our “building in public” series. Here we walk through what we have shipped since then, what you can already try in our proof of concept, and why this is a genuinely disruptive, privacy-first step for shop owners.
Quick reminder: MCP in a nutshell
MCP is the open standard that lets AI assistants plug into real systems and live data in a structured, secure way. It makes connectors reusable, composable, and discoverable so models can fetch live context or trigger actions without brittle, bespoke integrations. If you read the first post, you already know the core concepts; here, we focus on the progress since then and what it means for merchants.
What we’ve built since the last post
Over the last few weeks, the Backroom agent has evolved from a neat demo into a transparent, interactive conversational assistant that actually reasons about merchant questions, plans actions, and shows its work in real time. The main advances are:
- Smart replanning: the agent no longer blindly follows a single plan. When something does not go as expected, it adapts, re-plans, and tries alternate strategies so you get meaningful results instead of opaque failures.
- Intent-aware responses: the system distinguishes whether you want raw data, an interpretation, or an action suggestion. That makes answers shorter, more relevant, and less noisy for non-technical users.
- Streaming and process visibility: you can now watch the agent create a plan, call the necessary tools, and compose the final answer step by step. No more black box; every call, parameter, and response is visible for auditing and trust.
- Dynamic tools management: each MCP server is a live tool that wraps existing APIs such as our stats or index services. Tools are defined with machine-readable schemas and can be toggled or synced at runtime without restarting anything. That means new capabilities appear instantly and safely.
- Adaptive frontend: the chat interface in Backroom adapts to the structure of the response. If the agent returns daily KPIs, you get a time series; if it returns top products, you get a ranked list with IDs, clicks, add-to-carts, and conversion rates. You can copy data from any step. The UI is not hardcoded; it renders what the tool returns.
These changes turn the assistant from a single-answer toy into an interactive analytics collaborator.
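To make the adaptive frontend concrete, here is a minimal TypeScript sketch of a renderer that switches on the shape of a tool result; the result shapes and component names are illustrative assumptions, not the actual Backroom implementation.

```typescript
// Hypothetical shapes a tool result might take; the real schemas differ.
type ToolResult =
  | { kind: "timeSeries"; metric: string; points: { date: string; value: number }[] }
  | { kind: "ranking"; items: { id: string; clicks: number; addToCarts: number; conversionRate: number }[] }
  | { kind: "text"; summary: string };

// The chat UI inspects the structure of the result and picks a view,
// instead of hardcoding one widget per question.
function renderResult(result: ToolResult): string {
  switch (result.kind) {
    case "timeSeries":
      return `<TimeSeriesChart metric="${result.metric}" points=${result.points.length} />`;
    case "ranking":
      return `<RankedProductList rows=${result.items.length} />`;
    case "text":
      return `<Markdown>${result.summary}</Markdown>`;
  }
}
```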
A concrete demo flow (what you will see)
Here is a real example of what shop owners can do in Backroom:
- Launch the assistant in Backroom.
- Ask a natural question, such as: “Show me the top queries with no results today” or “Why is conversion down in electronics this week?”
- Behind the scenes, the agent:
- Plans the steps it needs, including date calculations and which MCP tools to call.
- Calls the stats API and sessions tools with structured parameters.
- Streams each response so you can inspect parameters and raw outputs.
- Composes a final, plain language summary, relevant KPIs, and follow-up options.
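To give a feel for the structured calls in the flow above, here is a hedged sketch of the kind of MCP tools/call message the agent might send to a stats tool; the tool name, arguments, and values are hypothetical, not our actual stats API.

```typescript
// Illustrative only: tool name, argument names, and values are made up.
// MCP messages are JSON-RPC 2.0; a tool call carries the tool name plus
// arguments that should validate against the tool's published input schema.
const toolCall = {
  jsonrpc: "2.0",
  id: 42,
  method: "tools/call",
  params: {
    name: "search_stats.top_queries_no_results",  // hypothetical tool
    arguments: {
      from: "2024-06-01",  // dates computed by the planner
      to: "2024-06-01",
      limit: 10,
    },
  },
};
```

Because every step is an explicit message like this, the UI can surface the exact parameters and raw outputs for inspection.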
You can then ask follow-ups in the same conversation, request data exports, or ask the assistant to show the raw step outputs. The goal is to replace digging through dashboards with a simple, contextual conversation.
Why this is disruptive for shop owners
Shop owners can get immediate answers in natural language, with clear explanations and actionable insights, without needing to learn dashboards or specific query languages. You can simply ask a question and get useful numbers alongside an interpretation of what those numbers mean.
This leads to faster decisions. Merchants can quickly identify underperforming products, uncover search queries that return no results, and take prompt action on merchandising or content fixes. The assistant can even suggest next steps that are aligned with their business goals.
Trust and explainability are built into the process. Since MCP calls are structured and logged, every statement in the assistant’s response can be traced back to the original tool outputs. This transparency removes guesswork and strengthens confidence in the insights provided.
By lowering the technical barrier, conversational analytics allow shop owners to access and understand their store’s search and behavior data without relying on analysts or engineers. In doing so, it democratizes insights and puts data-driven decision-making directly in the hands of shop owners.
A privacy-first approach: innovation without compromising data
From day one, we designed this as a privacy-first implementation of MCP. That means:
- Data stays under merchant control: MCP servers wrap your existing services and APIs. The assistant queries those services rather than requiring wholesale data export to third parties. You decide what each tool can access.
- Fine-grained isolation and scoped access: each MCP server only exposes what it needs to, reducing the blast radius and helping with compliance, auditing, and security. Logs and calls are explicit and inspectable.
- No need to rely on private training data to be useful: because the agent operates on live signals from your store, it can generate insights and recommendations without sending raw private logs to external systems. The combination of local connectors, structured tool responses, and on-demand prompts lets us build powerful generative experiences while keeping private data private.
In short, you get the benefits of genAI-driven analytics without surrendering control of sensitive data.
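As a sketch of the scoped-access idea above, here is a hypothetical connector-side transformation that only lets aggregates cross the tool boundary; the types and field names are assumptions for illustration, not our actual data model.

```typescript
// Hypothetical scoping: the MCP server wraps an internal stats API but only
// surfaces aggregate fields, so raw session logs never leave the merchant's side.
interface InternalStatsRow { sessionId: string; userId: string; query: string; converted: boolean }

interface ExposedKpi { query: string; searches: number; conversionRate: number }

function toExposedKpis(rows: InternalStatsRow[]): ExposedKpi[] {
  const byQuery = new Map<string, { searches: number; conversions: number }>();
  for (const row of rows) {
    const agg = byQuery.get(row.query) ?? { searches: 0, conversions: 0 };
    agg.searches += 1;
    if (row.converted) agg.conversions += 1;
    byQuery.set(row.query, agg);
  }
  // Only aggregates cross the tool boundary; session and user identifiers stay private.
  return Array.from(byQuery.entries()).map(([query, a]) => ({
    query,
    searches: a.searches,
    conversionRate: a.searches ? a.conversions / a.searches : 0,
  }));
}
```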
How we built this: the high-level plumbing
You do not need to be an engineer to appreciate the architecture, but here is a straightforward explanation:
Each feature takes the form of an MCP server, describing its functions through a JSON schema. This makes every tool both discoverable and self-describing, allowing the agent to understand exactly what it can do. Within this setup, the host contains the assistant and the orchestration logic, while the client translates the host’s intent into MCP messages. The server then executes the requested queries or actions, returning structured results. Because the messages follow a standard format, different assistants or tools can reuse the same servers seamlessly.
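To ground the host / client / server split, here is a hedged sketch of a self-describing tool definition and the server-side handler behind it; the tool name, schema fields, and helper function are illustrative, not our production code.

```typescript
// A tool is self-describing: its name, description, and a JSON Schema for its
// inputs are what the host's planner sees when it lists available tools.
const topQueriesTool = {
  name: "search_stats.top_queries",  // hypothetical tool name
  description: "Top search queries for a date range, with clicks and conversions",
  inputSchema: {
    type: "object",
    properties: {
      from:  { type: "string", format: "date" },
      to:    { type: "string", format: "date" },
      limit: { type: "integer", minimum: 1, maximum: 100 },
    },
    required: ["from", "to"],
  },
};

// The server's only job is to execute a validated call against the existing
// stats API and return a structured result the client hands back to the host.
async function handleToolCall(name: string, args: { from: string; to: string; limit?: number }) {
  if (name !== topQueriesTool.name) throw new Error(`unknown tool: ${name}`);
  // In practice the arguments are validated against inputSchema before this point.
  return fetchTopQueries(args);
}

// Stand-in for the existing internal stats endpoint the server wraps.
declare function fetchTopQueries(args: { from: string; to: string; limit?: number }): Promise<unknown>;
```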
We have also introduced streaming and plan introspection, so the host can display step-by-step execution and adjust the plan as needed. This provides real-time visibility into the process and builds trust in the system’s decisions. While these features expose the inner workings of the MCP architecture, what shop owners actually see is the agent itself, surfaced through the Backroom interface, where they can interact with it directly and monitor outcomes.
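A minimal sketch of what step-by-step streaming could look like on the host side, assuming a simple typed event stream; the event names, tool name, and plan steps are hypothetical.

```typescript
// Hypothetical event stream the host could emit while executing a plan,
// so the UI can render each step (and its raw output) as it happens.
type AgentEvent =
  | { type: "plan"; steps: string[] }
  | { type: "tool_call"; tool: string; args: unknown }
  | { type: "tool_result"; tool: string; result: unknown }
  | { type: "answer"; text: string };

async function* runPlan(question: string): AsyncGenerator<AgentEvent> {
  yield { type: "plan", steps: ["compute date range", "call stats tool", "summarise"] };

  const args = { from: "2024-06-01", to: "2024-06-07" };  // illustrative dates
  yield { type: "tool_call", tool: "search_stats.top_queries", args };

  const result = { rows: [] };  // placeholder for the structured tool output
  yield { type: "tool_result", tool: "search_stats.top_queries", result };

  yield { type: "answer", text: `Summary for: ${question}` };
}
```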
In addition, tool toggles and live sync are managed through a dedicated configuration section where MCP server behaviour can be adjusted. Capabilities can be activated or deactivated without redeployments, enabling quick iteration and the safe rollout of new analytics endpoints while keeping the system flexible and responsive.
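As an illustration, a runtime toggle configuration might look something like this sketch; the shape, URLs, and server names are assumptions rather than our actual configuration format.

```typescript
// Hypothetical runtime configuration: each MCP server can be enabled or
// disabled without a redeploy, and the tool list is re-synced on change.
interface McpServerConfig {
  url: string;       // where the server is reachable
  enabled: boolean;  // flipping this hides or exposes its tools immediately
}

const servers: Record<string, McpServerConfig> = {
  "search-stats":  { url: "https://example.internal/mcp/stats", enabled: true },
  "index-service": { url: "https://example.internal/mcp/index", enabled: false },
};

// The agent only ever sees tools from enabled servers.
function activeServers(config: Record<string, McpServerConfig>): string[] {
  return Object.entries(config)
    .filter(([, c]) => c.enabled)
    .map(([name]) => name);
}
```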
What is next
We are moving quickly, but carefully. Short-term, we are focusing on:
- Making the conversation more natural and multi-turn so merchants can drill into results fluidly.
- Adding generative visualisations so insights are not only textual but also visual, with graphs created on demand.
- Rolling out the Backroom integration to more environments and customers so you can try the experience with real stores.
Medium term, we will extend from insight to action: recommendations that can be applied automatically or semi-automatically via secure, MCP-enabled actions, always with merchant consent. Long term, we envision multi-agent workflows that analyze, propose, execute, and validate changes across systems.
Questions, ideas, feedback
This project is public by design. We want to build this with real shop owners and real use cases, not in isolation. If you have ideas, suggestions, or questions about a specific report you need to ask naturally, we want to hear them. Your feedback helps shape the next steps and the controls we prioritise.
Thanks for following the series, and stay tuned for future developments.