By Tigran Margaryan, OneTick Software Engineer
In this article, we’ll demonstrate how MCP servers equip AI agents with a comprehensive set of tools and resources, enabling the AI to automatically select the most suitable one for each user request. This intelligent system lets users interact using plain text or voice queries, whether to retrieve market or order data, run trading alerts and algorithms, or schedule recurring reports. Results are not only displayed directly but are also delivered as dynamically generated onetick-py code inside Jupyter notebooks, giving users full visibility and flexibility for further analysis.
Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs, allowing AI models to connect to different data sources, tools, and external systems.
MCP clients are intelligent applications or agents that operate within this ecosystem to execute tasks with contextual understanding and tool integration. These clients serve as intermediaries between user input, LLM reasoning, and tool execution. Well-known MCP clients include Cursor, Windsurf, and Claude.
The workflow begins when a user submits a request via the WebUI. The AI Agent receives this input and uses PostgreSQL for contextual memory. It matches the user’s request against the tool descriptions provided by the MCP servers, selects the most suitable tool, executes it, and returns the result through the WebUI.
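The routing step can be sketched as a toy tool selector. The tool names and descriptions below are hypothetical; a production agent would let the LLM choose a tool via function calling rather than keyword overlap, but the matching idea is the same:

```python
# Hypothetical tool registry: maps tool names to the descriptions the
# MCP servers would expose. These entries are illustrative assumptions.
TOOLS = {
    "get_market_data": "Retrieve market data and trades for given symbols and dates",
    "schedule_job": "Add, list, or remove scheduled jobs for recurring reports",
    "get_alert_details": "Return details for a surveillance alert by its identifier",
}

def select_tool(request: str) -> str:
    """Pick the tool whose description shares the most words with the request."""
    request_words = set(request.lower().split())

    def overlap(item: tuple[str, str]) -> int:
        _, description = item
        return len(request_words & set(description.lower().split()))

    name, _ = max(TOOLS.items(), key=overlap)
    return name

print(select_tool("get market data for AAPL for the last week"))
```

A real implementation would pass the full tool descriptions to the LLM and let it emit a structured tool call, but this keyword-overlap toy shows where the tool descriptions enter the loop.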
Below, we can see the architecture of this workflow:
Workflow Architecture Diagram
Each MCP server is designed to handle a distinct set of tasks through a well-defined toolset. Below, we outline the core servers and their respective capabilities:
This service interacts with JupyterLab via its API, allowing the agent to programmatically create, update, and run notebooks. The Jupyter Manager tools are particularly useful in combination with the Scheduler and Surveillance Data Provider servers, giving the user dynamic code for manual inspection and execution.
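As an illustration of what such a tool might produce, here is a hedged sketch that assembles a minimal notebook document as a plain dict. The helper name and cell contents are assumptions, not OneTick’s actual implementation; a real server would push the result to JupyterLab via a `PUT` to its Contents API (`/api/contents/<path>`):

```python
import json

def build_report_notebook(title: str, code: str) -> dict:
    """Assemble a minimal nbformat-v4 notebook document as a plain dict."""
    return {
        "nbformat": 4,
        "nbformat_minor": 4,
        "metadata": {},
        "cells": [
            # Introductory Markdown cell with the report title.
            {"cell_type": "markdown", "metadata": {}, "source": f"# {title}"},
            # Code cell carrying the generated report logic.
            {"cell_type": "code", "metadata": {}, "source": code,
             "execution_count": None, "outputs": []},
        ],
    }

nb = build_report_notebook("Price_Change_Outliers_Analysis", "print('report logic here')")
payload = json.dumps({"type": "notebook", "format": "json", "content": nb})
```

The `payload` shape mirrors what the Jupyter Contents API expects when saving a notebook, so the agent can create or update a notebook file without a kernel running.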
The Scheduler server enables users to view, add, or remove scheduled jobs, which is particularly useful for automating recurring reports. When a scheduling request is received, such as “schedule this report daily at 9 AM”, the agent first validates that all required details (e.g., execution frequency) are provided. It then invokes the appropriate Scheduler tool to carry out the requested operation and returns a clear confirmation message reflecting the result.
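The validate-then-execute pattern described above can be sketched with a toy in-memory job registry. The field names (`name`, `cron`, `report`) and return messages are illustrative assumptions, not the actual Scheduler API:

```python
# Required details the agent must confirm before scheduling anything.
REQUIRED_FIELDS = ("name", "cron", "report")
_jobs: dict[str, dict] = {}

def add_job(job: dict) -> str:
    """Validate the request, then register the job (hypothetical tool)."""
    missing = [f for f in REQUIRED_FIELDS if f not in job]
    if missing:
        # Mirrors the agent asking the user for the missing details.
        return f"Missing details: {', '.join(missing)} - please clarify."
    _jobs[job["name"]] = job
    return f"Scheduled '{job['report']}' with cron '{job['cron']}'."

def list_jobs() -> list[str]:
    return sorted(_jobs)

def remove_job(name: str) -> str:
    return "Removed." if _jobs.pop(name, None) else "No such job."
```

A request like “schedule this report daily at 9 AM” would map to the cron expression `0 9 * * *`; if the frequency were absent, `add_job` would return a clarification prompt instead of scheduling anything.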
The Market and Client Data Provider server offers simplified access to market data and order records using the onetick.py library. This workflow is triggered when a user requests raw data, such as “get market data for AAPL for the last week and also calculate VWAP.” The agent verifies that all necessary parameters (e.g., symbols, dates, exchange) are present in the request and asks for clarification if any are missing. After selecting and executing the appropriate tool, it returns the data to the user as a clean Markdown table.
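For the VWAP part of such a request, the underlying arithmetic is straightforward. The sketch below uses synthetic `(price, size)` trade tuples rather than real ticks pulled through onetick.py:

```python
def vwap(trades: list[tuple[float, int]]) -> float:
    """VWAP = sum(price * size) / sum(size) over the trade tape."""
    notional = sum(price * size for price, size in trades)
    volume = sum(size for _, size in trades)
    return notional / volume

# Synthetic AAPL-like trades: (price, size). Values are made up.
trades = [(189.50, 100), (189.60, 300), (189.40, 200)]
print(round(vwap(trades), 4))  # → 189.5167
```

In the real workflow, onetick.py would aggregate this over the requested date range on the tick server side; the formula the user receives in the generated code is the same.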
The Surveillance Data Provider server provides tools for requests that match one of the available report templates (e.g., “Analyze price change outliers for the previous day”). The agent first checks for missing parameters, such as periods or threshold values. It then uses the Jupyter Manager tools to create a new notebook containing an introductory Markdown cell and a code cell with the appropriate report logic, along with execution results and visualizations. Once the notebook is complete, the agent returns a link to the user through the WebUI.
For integration we use OpenWebUI, a modern, user-friendly platform for working with LLM backends such as Ollama and OpenAI-compatible APIs. It lets us switch models dynamically and manage interactions with the MCP agent through a flexible, intuitive interface. OpenWebUI supports seamless setup via Docker or Kubernetes and provides advanced features such as role-based access control, document and web search via RAG, image generation, multi-model support, and custom agent creation.
As an example, let’s ask it to find price change outliers for the London Stock Exchange, specifying the desired parameters along with a custom z-score.
We can see that the agent first confirms all settings before initiating generation, then proceeds to create the notebook and display the results. The generated notebook, titled Price_Change_Outliers_Analysis, contains the complete onetick-py code, allowing users to review and rerun it dynamically.
Upon opening the notebook and reviewing the output, we observe that the stock BATS (British American Tobacco Plc) is identified as an outlier, as its z-score exceeds the specified threshold of 0.5.
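The z-score check behind this report can be illustrated with a small standalone sketch. The symbols and daily price changes below are made up for the example; the real report computes them from tick data via onetick-py:

```python
import statistics

def zscore_outliers(changes: dict[str, float], threshold: float) -> list[str]:
    """Flag symbols whose price change deviates from the cross-sectional
    mean by more than `threshold` population standard deviations."""
    mean = statistics.mean(changes.values())
    stdev = statistics.pstdev(changes.values())
    return [sym for sym, chg in changes.items()
            if abs(chg - mean) / stdev > threshold]

# Synthetic daily % price changes for some LSE symbols (illustrative only).
changes = {"BATS": 5.0, "HSBA": 0.1, "SHEL": 0.2, "VOD": 0.3,
           "BP": 0.2, "GSK": 0.1, "AZN": 0.3, "ULVR": 0.2}
print(zscore_outliers(changes, threshold=0.5))  # → ['BATS']
```

Note that the threshold is applied to the z-score, so a low value like 0.5 flags any symbol that moves even modestly away from the pack, which matches the walkthrough above.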
Following the same logic, we can query various types of reports, such as different alert information (e.g., “Give me a list of all wash trade alerts for yesterday”) or order-related data (e.g., “List orders for the participant X on 2025–06–23”), and more.
This approach empowers users to generate a wide range of analytical reports effortlessly, making complex data exploration as simple as asking a question. It enables faster, more accessible insights across diverse use cases, turning natural language into fully automated, data-driven workflows.
One of the most powerful features of our architecture is the universal compatibility of the AI Agent. As shown in the workflow, the AI Agent operates independently of any specific MCP host. This decoupling allows it to seamlessly integrate with any MCP host, without requiring changes to its core logic.
This flexibility is especially valuable for users, as it eliminates the need to commit to a particular infrastructure provider. Instead, they’re free to choose or switch to the most suitable infrastructure for their needs, whether in development or production environments.
Let’s see how this universal compatibility works in practice.
In the example below, within the Cursor environment, the user asks for details about a particular alert. The AI Agent receives the request and returns the expected information using the get_alert_details tool.
In the next example, the AI Agent uses tools from both the Market and Client Data Provider MCP server and the Jupyter Manager MCP server to generate code and display the results in a Jupyter notebook. It provides a direct link to the generated notebook, so users can open and review the output instantly.
We’re actively developing ways to make the MCP toolset more modular and extensible. One direction is enhancing the AI Agent’s capabilities by exposing more internal MCP tools. This allows the agent not only to retrieve data but also to perform operations and validate logic.
Another major tool for the OneTick team is the Coding Assistant, which is being designed to enable the AI Agent to handle a wide range of user requests — generating complex code, executing it, and returning results. As described in this article, this is not just a simple workflow, but the implementation of real-time code generation and intelligent automation. Its integration marks a significant step toward a more intelligent MCP stack.
Request a demo with our data experts today.
Best wishes,
Tigran Margaryan, OneTick Software Engineer