OneTick Blog

6 min read

Support Assistant — Engineering Knowledge Companion

Oct 16, 2025 10:30:00 AM

By Tigran Margaryan, OneTick Software Engineer

In a busy engineering environment, important knowledge is often scattered across many sources — internal documentation, Jira tickets, Confluence pages, historical Slack messages, GitLab repositories, and even emails. Finding the right answer often means searching through dozens of messages or waiting for someone to respond. This slows down progress, especially for new team members who are still learning how different parts of the system work.

The Support Assistant was created to make this process faster and easier. It acts as a smart conversational helper that understands questions about onetick-py, its data structures, OneTick database specifics, and related topics; finds relevant internal information; and generates clear, human-like answers.

Instead of waiting for a teammate to reply on Slack or spending time looking through old tickets, engineers can ask the Support Assistant and get an immediate, well-structured response.

Large language models (LLMs) have no real understanding of onetick-py: they lack knowledge of its correct syntax, APIs, and data structures, so they fall back on generic reasoning or unreliable web search results, which leads to inaccurate or hallucinated answers. The Support Assistant directly addresses this limitation by drawing on an internal knowledge base that combines multiple data sources and best practices to generate accurate, context-aware answers.


HOW THE SUPPORT ASSISTANT WORKS

The Support Assistant relies on the concept of Retrieval-Augmented Generation (RAG) — a modern approach that combines the precision of information retrieval with the reasoning ability of large language models.


SUPPORT ASSISTANT RAG ARCHITECTURE

When a user asks a question, the assistant first transforms that query into an embedding — a numerical representation that captures the meaning and context of the request. This embedding is then compared against a large database of embeddings representing internal knowledge drawn from sources such as internal documentation, Jira tickets, Confluence pages, historical Slack messages, and GitLab repositories.
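The embedding comparison described above can be sketched as a cosine-similarity search. This is an illustrative toy, not OneTick's implementation: the three-dimensional vectors and document texts below are invented, and a real system would use model-generated embeddings stored in a vector database.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, top_k=2):
    """Return the texts of the top_k documents most similar to the query embedding."""
    scored = sorted(index, key=lambda d: cosine_similarity(query_vec, d["embedding"]),
                    reverse=True)
    return [d["text"] for d in scored[:top_k]]

# Toy index: in production these vectors come from an embedding model.
index = [
    {"text": "Jira ticket on tick aggregation", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Docs page on onetick-py joins",   "embedding": [0.1, 0.9, 0.1]},
    {"text": "Slack thread about schemas",      "embedding": [0.0, 0.2, 0.9]},
]

print(retrieve([0.8, 0.2, 0.0], index, top_k=1))
# → ['Jira ticket on tick aggregation']
```

The same top-k pattern scales to millions of chunks once the linear scan is replaced by an approximate-nearest-neighbor index.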

Through this similarity search, the assistant identifies the pieces of information that are most relevant to the user’s question. The retrieved information is then added to a prompt, which provides additional information and structured input to the language model. The model, using its own reasoning capabilities, generates a clear, human-like response that directly addresses the user’s question and is entirely based on relevant knowledge from the index database.
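The prompt-assembly step can be sketched roughly as follows. The actual prompt template used by the Support Assistant is not public, so the wording here is a hypothetical format that simply shows retrieved chunks being injected ahead of the question.

```python
def build_prompt(question, retrieved_chunks):
    """Assemble a grounded prompt from retrieved context (illustrative format)."""
    context = "\n\n".join(f"[{i}] {chunk}" for i, chunk in enumerate(retrieved_chunks, 1))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "How do I join two tick sources?",
    ["Docs excerpt about joining sources.", "Jira ticket excerpt about a join fix."],
)
print(prompt)
```

Numbering the chunks also makes it easy for the model to cite which piece of context each part of its answer came from.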

If the user explicitly mentions a Jira ticket, the assistant recognizes that in real time, retrieves the corresponding ticket content, and automatically uses that information in the response. This allows the assistant to deliver context-aware answers related to specific issues as a human teammate would in a live conversation.
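Detecting an explicitly mentioned ticket is typically a pattern match on Jira issue keys. A minimal sketch follows; the project keys in the example are hypothetical, and the real assistant's detection and retrieval logic may differ.

```python
import re

# Jira issue keys look like PROJECT-123: an uppercase project key, a hyphen, digits.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def extract_ticket_keys(message):
    """Return all Jira-style issue keys mentioned in a message."""
    return JIRA_KEY.findall(message)

print(extract_ticket_keys("Is PY-1234 related to the fix in SUP-77?"))
# → ['PY-1234', 'SUP-77']
```

Once a key is found, the assistant would fetch that ticket's content and merge it into the prompt alongside the similarity-search results.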

Finally, at the end of each answer, the Support Assistant also provides reference links to the related topics it used, allowing users to trace the context, verify details, or explore further with just one click.

In essence, the Support Assistant acts as a bridge between stored organizational knowledge and natural conversation. It understands the question, searches and loads relevant information, and then explains it in an accessible way — helping engineers get accurate answers instantly without having to search through multiple platforms or wait for a colleague’s reply.


Index Building and Source Coverage

At the core of the Support Assistant's RAG architecture lies a multi-source document index, built to ground the model's responses in verified internal knowledge. This index is rebuilt periodically to ensure embeddings and metadata reflect the latest knowledge from all sources.

Included Data Sources

During the indexing stage, the Support Assistant ingests knowledge from several distinct sources.

Docs: Official onetick-py documentation.

Cloud Docs: Documentation of all database schemas and tables that onetick-py interacts with, detailing their structure, relationships, and metadata for schema-aware retrieval.

Examples: Code snippets, tutorials, and use-case examples showing onetick-py usage and different queries in practice.

Jira Tickets: Internal and support tickets related to the onetick-py project, including different fixes, discussions, and implementation details.

Slack Data: Internal historical messages from selected Slack channels containing real-world Q&A and team discussions related to onetick-py.
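Putting these sources together, the indexing stage can be sketched as follows. The `toy_embed` function and the fixed chunk size are stand-ins for illustration only; the real pipeline uses an embedding model and more careful, semantics-aware chunking.

```python
def toy_embed(text):
    """Stand-in for an embedding model: three crude character-count features."""
    return [
        sum(c.isalpha() for c in text),
        sum(c.isdigit() for c in text),
        text.count(" "),
    ]

def chunk(text, size=60):
    """Naive fixed-size chunking; production chunkers split on semantic boundaries."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(sources, embed=toy_embed):
    """sources maps a source name to its documents; every chunk keeps its origin."""
    index = []
    for source, docs in sources.items():
        for doc in docs:
            for piece in chunk(doc):
                index.append({"source": source, "text": piece, "embedding": embed(piece)})
    return index

sources = {
    "docs": ["How to create a tick source and run a query."],
    "jira": ["Ticket discussing an aggregation fix."],
}
index = build_index(sources)
print(len(index))  # one chunk per short document here
```

Keeping the `source` field on every chunk is what later lets the assistant attach reference links to each answer.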


Available Interfaces of the Support Assistant

The Support Assistant is available through three main interfaces — Slack, Streamlit, and the official onetick-py public documentation — making it easily accessible in everyday team communication.

Using the Support Assistant in Slack

In Slack, the assistant can be used in two ways: within channels or through private messages.

In channels, the bot listens to questions posted by team members. When someone asks a question related to onetick-py or any related topic, the assistant automatically responds under the thread with a clear and concise answer.

Even when the RAG bot cannot fully answer the initial question — for example, when the issue is new or the knowledge base lacks relevant context — it still plays a valuable role: its generated answer and list of references give teammates a starting point for collective problem solving in the thread.

If it doesn’t have enough context or relevant information, it reacts with the 🤷‍♂️ emoji to indicate uncertainty.

Users can continue the conversation directly in the same thread by mentioning the bot (e.g., @Onetick Support Assistant) and asking a follow-up question. The assistant maintains the full conversation context, taking into account all previous messages in that thread when generating follow-up answers.
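Thread-aware follow-ups require keeping per-thread history. Below is a minimal sketch of such memory, keyed by Slack-style thread timestamps; the message texts are invented and the real bot's state handling may differ.

```python
class ThreadMemory:
    """Keeps per-thread history so follow-up questions see earlier messages."""

    def __init__(self):
        self._threads = {}  # thread_ts -> list of (author, text) pairs

    def record(self, thread_ts, author, text):
        self._threads.setdefault(thread_ts, []).append((author, text))

    def context_for(self, thread_ts):
        """Flatten a thread into the context string fed to the model."""
        return "\n".join(f"{author}: {text}"
                         for author, text in self._threads.get(thread_ts, []))

memory = ThreadMemory()
memory.record("1715000000.000100", "alice", "How do I filter ticks by symbol?")
memory.record("1715000000.000100", "assistant", "You can add a condition on the symbol column.")
print(memory.context_for("1715000000.000100"))
```

In a real deployment this state would live in persistent storage rather than in-process memory, so that context survives restarts.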

In private messages, the idea is the same: the assistant works as a one-on-one helper, receiving your question directly and providing an immediate answer.

Dedicated Web Interface

The Support Assistant is also hosted as a lightweight web application built with the Streamlit framework, which provides a simple chat-based environment where users can ask questions and receive answers in real time. One of its main advantages is streaming response generation: the answer is displayed from the very first token, without waiting for the full response to complete. This cuts the perceived waiting time from over 9 seconds to around 2, since users can start reading the response while it is still being streamed.
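Streaming display can be approximated by handing a token generator to Streamlit. The Streamlit wiring below is shown in comments as a plausible setup, not the app's actual code; the runnable part is just the generator, which yields tokens as a streaming LLM client would.

```python
import time

def stream_tokens(answer, delay=0.0):
    """Yield the answer word by word, simulating a streaming LLM response."""
    for token in answer.split():
        yield token + " "
        time.sleep(delay)  # a real client waits on the model, not a timer

# Hypothetical Streamlit wiring (the production app may differ):
#   import streamlit as st
#   if question := st.chat_input("Ask about onetick-py"):
#       st.write_stream(stream_tokens(generate_answer(question)))

print("".join(stream_tokens("First tokens appear immediately")))
```

Because the page renders each yielded token as it arrives, the user sees output as soon as the model emits its first words.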

Using the Support Assistant in public documentation search

A dedicated public variant of the Support Assistant safely enhances the Search page of our public documentation.

Click the 🔍︎ icon on the right side of the header, then type your question in the search bar. The answer is generated below the Search... input field.

Note: this implementation uses only public documentation pages (not the internal data from Jira or Slack), and you cannot provide feedback there.


Feedback and Continuous Improvement

After each internal interaction, our colleagues are encouraged to evaluate the assistant’s response by giving a thumbs-up 👍 or thumbs-down 👎. These evaluations play an important role in improving the system’s accuracy and reliability, helping to better align responses with our colleagues’ expectations.
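A feedback loop like this usually starts with simple vote logging per answer. Below is a sketch under that assumption; the answer IDs are invented and the real system's aggregation is likely richer.

```python
class FeedbackLog:
    """Records thumbs-up/thumbs-down votes per answer and aggregates them."""

    def __init__(self):
        self._votes = {}  # answer_id -> {"up": int, "down": int}

    def record(self, answer_id, vote):
        if vote not in ("up", "down"):
            raise ValueError("vote must be 'up' or 'down'")
        counts = self._votes.setdefault(answer_id, {"up": 0, "down": 0})
        counts[vote] += 1

    def approval_rate(self, answer_id):
        """Fraction of positive votes, or None if the answer has no votes yet."""
        counts = self._votes.get(answer_id, {"up": 0, "down": 0})
        total = counts["up"] + counts["down"]
        return counts["up"] / total if total else None

log = FeedbackLog()
log.record("ans-1", "up")
log.record("ans-1", "up")
log.record("ans-1", "down")
print(round(log.approval_rate("ans-1"), 2))
# → 0.67
```

Aggregates like this make it easy to spot answer categories that consistently underperform and feed them back into index or prompt improvements.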

Want to see OneTick’s Support Assistant in action?

Request a demo with our data experts today.

Best wishes,

Tigran Margaryan, OneTick Software Engineer

Topics: AI

Written by Tigran Margaryan

Tigran Margaryan is a Software Engineer at OneTick, contributing to the development of multi-agent systems for custom library code solutions and AI chatbots that leverage various data sources to streamline workflows. Previously, Tigran served as a Data Quality Engineer, ensuring optimal data management for quantitative research, surveillance, and back-testing.
