By Alexander Serechenko, OneTick LLM Team Lead, and Peter Simpson, OneTick Product Owner
Developers and analysts working with financial firms face a common hurdle: navigating vast amounts of trade and market data scattered across internal systems. Traditional search methods, unfortunately, are often inadequate for this specialized domain, failing on everything from a simple typo (like misspelling "average") to complex, natural language questions.
To overcome these barriers and accelerate time-to-insight, OneTick has implemented specialized AI capabilities focused on Vector-Based Search and Retrieval-Augmented Generation (RAG).
Traditional keyword-based search is brittle. Faced with a complex query or a long natural-language sentence, it simply takes the individual words and matches each one across pages, frequently returning unrelated results and rendering the search nearly useless.
The solution lies in leveraging semantic similarity search, which uses Large Language Models (LLMs) to understand the meaning and context of a query.
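To make the idea concrete, here is a minimal, self-contained sketch of similarity search. The `embed` function below is a toy stand-in (character-trigram counts) for a real LLM embedding model, but it is enough to show why similarity in vector space tolerates a typo that would defeat exact keyword matching:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: character-trigram counts.
    # A production system would call an LLM embedding API instead.
    t = f"  {text.lower()}  "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["moving average function", "order book depth", "trade volume report"]
query = "averge"  # misspelled on purpose
best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
print(best)  # the misspelled query still lands on "moving average function"
```

A real deployment replaces `embed` with a high-dimensional model embedding and an indexed vector store, but the ranking principle (nearest vectors win) is the same.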
The team chose the OpenAI GPT 4.1 large embedding model after detailed evaluations that assessed not only accuracy and precision but also performance.
To ensure the reliability of this new approach, extensive experiments were conducted using real user queries, categorized as either simple (e.g., searching for a function name) or complex (full natural-language questions).
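An evaluation of this shape can be sketched as a small harness that scores a search function separately per query category. Everything below is illustrative: the labeled queries, document titles, and the naive substring "search engine" are hypothetical stand-ins, not OneTick's actual test set:

```python
from collections import defaultdict

def evaluate(search, labeled_queries):
    # Top-1 accuracy per category: did the expected document come back first?
    hits, totals = defaultdict(int), defaultdict(int)
    for query, expected_doc, category in labeled_queries:
        totals[category] += 1
        results = search(query)
        if results and results[0] == expected_doc:
            hits[category] += 1
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical corpus and a naive keyword matcher standing in for the real engine.
docs = ["HIGH_TIME function reference", "VWAP computation guide"]
def naive_search(q):
    return [d for d in docs if q.lower() in d.lower()]

queries = [
    ("vwap", "VWAP computation guide", "simple"),
    ("how do I compute a volume weighted price", "VWAP computation guide", "complex"),
]
print(evaluate(naive_search, queries))
```

Under keyword matching, the simple query scores 1.0 while the complex natural-language query scores 0.0, which is exactly the gap the experiments were designed to measure.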
The vector-based search is the foundation for the next level of natural language assistance: Retrieval-Augmented Generation (RAG). This mechanism ensures that the LLM generates answers grounded strictly in verified organizational knowledge.
The RAG process works as follows:

1. The user's question is embedded into a vector.
2. The vector-based search retrieves the most relevant documents from the knowledge base.
3. The retrieved documents are passed to the LLM as context.
4. The LLM generates an answer grounded strictly in that retrieved material.
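The pipeline above can be sketched in a few lines. This is a minimal illustration, not OneTick's implementation: `embed` and the retrieval scoring are reduced to word overlap, and the final LLM call is omitted so the sketch stays self-contained:

```python
# Minimal RAG sketch: retrieve relevant documents, then assemble a prompt
# that constrains the LLM to answer only from the retrieved context.
def embed(text: str) -> set:
    # Stand-in for a real embedding: a set of lowercase words.
    return set(text.lower().split())

def retrieve(query: str, docs: list, k: int = 2) -> list:
    # Rank documents by overlap with the query; a real system uses
    # cosine similarity over dense vectors in an indexed store.
    q = embed(query)
    return sorted(docs, key=lambda d: len(q & embed(d)), reverse=True)[:k]

def build_prompt(query: str, context: list) -> str:
    ctx = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{ctx}\n"
        f"Question: {query}\n"
    )

docs = [
    "The HIGH_TIME event processor returns the timestamp of the highest value.",
    "VWAP is computed as total traded value divided by total volume.",
    "OneTick supports nanosecond-resolution tick storage.",
]
question = "How is VWAP computed?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)  # this prompt would then be sent to the LLM
```

Because the prompt instructs the model to use only the supplied context, the answer stays grounded in verified organizational knowledge rather than the model's general training data.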
This technology is now deployed in the official public documentation search, replacing the original keyword search functionality.
Furthermore, the technology is leveraged internally, where it helps engineers conduct complex searches, such as finding old Jira tickets simply by describing the ticket's content.
Looking ahead, the team is working on a "natural language query designer," an ongoing internal project. This future tool is intended to take a natural language query, apply built-in knowledge of financial analytics, and then write and execute code to query the OneTick database, thereby accelerating financial data analysis.
This implementation of specialized AI tools serves as an Engineering Knowledge Companion: a bridge between stored organizational knowledge and natural conversation that gives engineers immediate, accurate, and context-aware answers without manually searching multiple platforms.
To learn more about OneTick, please visit onetick.com, email info@onetick.com, or request a private demo here.
Best wishes,
Alexander Serechenko, OneTick LLM Team Lead
Peter Simpson, OneTick Product Owner