OneTick Blog


The Quant’s AI Advantage: Moving Beyond Search to Agentic Coding

Jan 22, 2026 11:48:04 AM

By Alexander Serechenko, Senior Python Developer and LLM Team Lead at OneTick

In the past year, the OneTick team has been working to enhance documentation discovery by replacing brittle keyword searches with vector-based semantic search. In our recent December webinar, "The Quant’s AI Advantage," my colleague Peter Simpson and I demonstrated that we have moved beyond simple information retrieval. We are now entering the era of agentic workflows and integrated coding assistance.

While our foundation remains a robust vector-based search that handles complex natural language queries better than traditional methods, our latest advancements focus on bringing AI directly into the developer's workflow—whether that’s in the browser, a hosted environment, or your local IDE.

Read on to learn my key takeaways from the session, with a special focus on the new capabilities we’ve developed in the last quarter.


From "Search" to "Chat with Docs"

In March, we focused on how vector search helps users find the right documentation page. Now, we have implemented a full Retrieval-Augmented Generation (RAG) system that acts as a "Chat with Docs" interface. Users can ask complex questions like "How do I calculate a period VWAP?" and receive an AI-generated answer complete with code snippets, rather than just a list of links.
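The retrieval step behind "Chat with Docs" can be illustrated with a deliberately tiny sketch. The toy index below uses plain term-frequency vectors and cosine similarity in place of the learned embeddings and vector database a production RAG system would use, and the documentation snippets are invented placeholders, not actual OneTick documentation:

```python
import re
from collections import Counter
from math import sqrt

# Toy documentation index. Snippets are illustrative placeholders,
# not real OneTick docs.
DOCS = {
    "vwap": "The VWAP aggregation computes a volume weighted average price over a period.",
    "average": "The average aggregation computes the mean of a field over a period.",
    "bucketing": "Set a bucket interval to split aggregation results into time buckets.",
}

def embed(text: str) -> Counter:
    """Crude bag-of-words 'embedding': a term-frequency vector.
    A real system would use a learned sentence-embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the ids of the k documentation chunks closest to the query."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(DOCS[d])), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context before the LLM call."""
    context = "\n".join(DOCS[d] for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer with a code snippet."
```

Asking `retrieve("How do I calculate a period VWAP?")` surfaces the VWAP snippet, which `build_prompt` then stuffs into the generation prompt; the essential idea, retrieve first, then generate with that context, is the same at production scale.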

To support this, we have significantly expanded our knowledge base. Beyond standard API docs, we are now indexing:

  • OneTick Cloud Knowledge: This includes database schemas, table definitions, and symbology, allowing the AI to understand the specific metadata required to write accurate queries.
  • Slack History & Jira (Internal): For our internal support teams, we index historical Slack conversations to capture the "why" behind implementation decisions, helping solve complex debugging issues faster.

Game Changer: Model Context Protocol (MCP) Integration

The most significant leap forward since our March update is our work with the Model Context Protocol (MCP). We recognize that developers don't just want answers in a web browser; they want assistance where they write code.

We are actively working on an MCP server that integrates the OneTick Support Assistant directly into popular IDEs like Cursor, VS Code, and PyCharm.

  • How it works: When a user in Cursor asks for code to query OneTick data, the IDE agent communicates with our Support Assistant via MCP. The assistant provides the necessary context (syntax, schema, correct functions) to the IDE, which then generates the code directly in the user's editor.
  • Remote Execution: We are also testing remote execution, so generated code can run and return results without leaving the editor, streamlining the workflow from generation to result.
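MCP frames these IDE-to-assistant exchanges as JSON-RPC 2.0 messages. The sketch below shows only that message shape with a hand-rolled dispatcher; a real server would use the official MCP SDK over stdio or HTTP, and the `onetick_schema` tool (and its canned output) is a hypothetical example, not the actual Support Assistant API:

```python
import json

# Hypothetical tool registry. Tool name, description, and the canned
# response are illustrative only.
TOOLS = {
    "onetick_schema": {
        "description": "Return field names for a OneTick database/tick type.",
        "handler": lambda args: {"fields": ["PRICE", "SIZE", "TIMESTAMP"]},
    },
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC 2.0 request the way MCP frames tool discovery
    (tools/list) and tool invocation (tools/call)."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"].get("arguments", {}))
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})
```

In this framing, the IDE agent first calls `tools/list` to discover what the assistant offers, then `tools/call` to fetch schema context before generating code in the editor.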

Live Demo: Iterative Coding and Self-Correction

During the webinar, I demonstrated these capabilities live in our SC(A)IL environment (a hosted JupyterLab solution).

**Important note:** This section applies to clients subscribed to OneTick Trade Surveillance, who can access these capabilities through SC(A)IL.

I asked the assistant to "write code to query MSFT average price for November 2025".

The system didn't just retrieve a static example; it:

  1. Automatically chose the correct database and tick type for the US equity symbol.
  2. Generated functioning Python code using onetick-py to calculate the aggregation.
  3. Performed iterative fixes: when I asked it to update the code for 15-minute bucketing, it modified the script on the fly. Crucially, when the LLM made a logical error (incorrectly filtering out empty buckets), we used the chat context to fix the code iteratively and even plot the results.
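The bucketing logic at the heart of that demo can be sketched in pure Python. The demo itself generated onetick-py code that ran server-side against live market data; here, a few synthetic ticks stand in for an MSFT feed, and the function simply aligns each trade to the start of its 15-minute bucket and averages the prices:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def bucketed_average(ticks, bucket_minutes=15):
    """ticks: iterable of (timestamp, price) pairs.
    Returns {bucket_start: average_price}, one entry per non-empty bucket
    (empty buckets simply never appear, so nothing needs filtering out)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ts, price in ticks:
        # Align the timestamp down to the start of its bucket.
        bucket = ts - timedelta(minutes=ts.minute % bucket_minutes,
                                seconds=ts.second,
                                microseconds=ts.microsecond)
        sums[bucket] += price
        counts[bucket] += 1
    return {b: sums[b] / counts[b] for b in sorted(sums)}

# Synthetic ticks standing in for a live MSFT feed; prices are made up.
ticks = [
    (datetime(2025, 11, 3, 9, 31), 420.0),
    (datetime(2025, 11, 3, 9, 44), 422.0),
    (datetime(2025, 11, 3, 9, 46), 421.0),  # falls into the next bucket
]
bars = bucketed_average(ticks)
```

With these inputs, the first two ticks land in the 9:30 bucket (average 421.0) and the third starts a 9:45 bucket, mirroring the 15-minute bars the assistant produced in the live session.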

The recording can be found here.


Security and Trust

A major focus of the Q&A was how we prevent hallucinations and protect data. We adhere to a strict "trust, but verify" approach: every AI-generated answer provides reference links to the documentation used to generate it.

Furthermore, we maintain strict security boundaries. While our internal tools access Jira and Slack, the external-facing and client-specific assistants are restricted to public documentation and safe, client-specific schema contexts—never accessing the raw client data itself.


Conclusion

We are rapidly evolving from AI that finds answers to AI that does work. By integrating agentic capabilities into hosted environments and local IDEs via MCP, we are reducing the "plumbing" time for quants, allowing them to focus on strategy and analysis.

If you are ready to see how AI-driven analytics can accelerate your workflow, we invite you to explore OneTick.

Visit onetick.com, email info@onetick.com, or request a private demo here.

Contact us today to set up a personalized walkthrough of these new AI capabilities.

Best wishes,

Alexander Serechenko, Senior Python Developer and LLM Team Lead at OneTick

Written by Alexander Serechenko

Alexander Serechenko is a Senior Python Developer specializing in AI-driven solutions for time-series data analytics. With an expertise in machine learning, MLOps, and natural language processing, he leads the development of AI-powered search and automation tools. He holds a Master’s degree in cryptography and information security from the National Research Nuclear University MEPhI.
