AI Chat Service with RAG & Memory
Multi-tenant AI chat platform with retrieval-augmented generation, conversation memory, and multi-provider AI. Build intelligent chatbots in minutes.
Everything you need for intelligent chat
From RAG retrieval to conversation memory, all in one platform.
RAG Pipeline
Retrieval-augmented generation with hybrid search (vector + keyword), reranking, and pipeline search for accurate answers from your knowledge base.
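Hybrid search needs a way to merge the vector ranking and the keyword ranking into one list. TUTUR's exact scoring isn't documented here; reciprocal rank fusion (RRF) is one common approach, sketched below with hypothetical document IDs:

```python
# Sketch of hybrid-search fusion via Reciprocal Rank Fusion (RRF).
# This is an illustrative technique, not TUTUR's documented internals.

def rrf_fuse(vector_ranked, keyword_ranked, k=60):
    """Merge two ranked lists of document IDs into one hybrid ranking."""
    scores = {}
    for ranking in (vector_ranked, keyword_ranked):
        for rank, doc_id in enumerate(ranking):
            # Documents near the top of either list get a larger share.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hits from semantic similarity vs. exact keyword match:
vector_hits = ["doc_refunds", "doc_shipping", "doc_faq"]
keyword_hits = ["doc_faq", "doc_refunds", "doc_terms"]
print(rrf_fuse(vector_hits, keyword_hits))
# -> ['doc_refunds', 'doc_faq', 'doc_shipping', 'doc_terms']
```

A document ranked well by both signals ("doc_refunds") beats one ranked well by only one, which is the point of hybrid retrieval.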
4-Layer Memory
Session memory, semantic memory, temporal memory, and entity extraction. A chatbot that truly remembers conversation context.
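One way to picture the four layers working together is a lookup that pulls from each store at once. The class below is purely illustrative (the names and shapes are hypothetical, not TUTUR's actual interfaces):

```python
# Hypothetical sketch of a 4-layer memory lookup; field names are
# illustrative, not TUTUR's internal schema.

from dataclasses import dataclass, field

@dataclass
class Memory:
    session: list = field(default_factory=list)    # recent turns, verbatim
    semantic: dict = field(default_factory=dict)   # long-lived facts by topic
    temporal: list = field(default_factory=list)   # (timestamp, event) pairs
    entities: dict = field(default_factory=dict)   # extracted entity -> attributes

    def remember_turn(self, role, text):
        self.session.append((role, text))

    def recall(self, topic):
        """Assemble context for a topic from every layer at once."""
        return {
            "recent_turns": self.session[-5:],
            "facts": self.semantic.get(topic, []),
            "entities": {k: v for k, v in self.entities.items()
                         if topic in str(v)},
        }

m = Memory()
m.remember_turn("user", "My order number is 4417.")
m.entities["order_4417"] = {"topic": "refund", "status": "pending"}
m.semantic["refund"] = ["Refunds take 5-7 business days."]
print(m.recall("refund"))
```

The payoff is that a question like "what about my order?" can be answered with the order entity, the refund fact, and the recent turns together, not just the last message.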
Multi-Provider AI
OpenAI, DeepSeek, Groq — choose the best provider per use case. Switch providers without code changes. Streaming responses built in.
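"Switch providers without code changes" usually means the provider lives in tenant config, not in application code. A minimal sketch of that idea (the config fields and default model names here are illustrative, not TUTUR's actual schema):

```python
# Provider selection driven by config rather than code. The registry below
# is a hypothetical example; TUTUR's real configuration shape may differ.

PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1",
                 "default_model": "gpt-4o-mini"},
    "deepseek": {"base_url": "https://api.deepseek.com",
                 "default_model": "deepseek-chat"},
    "groq":     {"base_url": "https://api.groq.com/openai/v1",
                 "default_model": "llama-3.1-8b-instant"},
}

def resolve_provider(tenant_config):
    """Pick endpoint and model from config; calling code never changes."""
    name = tenant_config.get("provider", "openai")
    provider = PROVIDERS[name]
    return provider["base_url"], tenant_config.get("model",
                                                   provider["default_model"])

print(resolve_provider({"provider": "groq"}))
```

Swapping a tenant from Groq to OpenAI is then a one-field config change; every call site goes through `resolve_provider` unchanged.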
Multi-Tenant
One deployment, many tenants. Each tenant gets its own LLM config, knowledge base, and users. Data stays fully isolated per tenant.
Knowledge Base
Upload documents, auto-chunk, embed to vector database. Hybrid search combining semantic similarity and keyword matching.
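The auto-chunk step typically splits documents into fixed-size pieces with overlap, so retrieval doesn't lose context at chunk boundaries. A minimal sketch (sizes are illustrative; TUTUR's actual chunking parameters aren't documented here):

```python
# Minimal sketch of document chunking with overlap, the step that runs
# before embedding. Sizes are illustrative defaults, not TUTUR's.

def chunk_text(text, size=200, overlap=40):
    """Split text into overlapping fixed-size chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap   # step forward, keeping `overlap` chars shared
    return chunks

doc = "Refunds are issued within 5-7 business days. " * 20  # 900 chars
pieces = chunk_text(doc)
print(len(pieces), len(pieces[0]))
# -> 6 200
```

Each chunk is then embedded and indexed separately, so a query can match the one passage it needs instead of a whole document.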
Session Management
Conversation history, session summarization, and context window management. Long conversations stay coherent.
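Context window management generally means keeping a running summary plus as many recent turns as still fit a token budget. A sketch of that policy, with token counts approximated by word counts (TUTUR's real accounting is not specified here):

```python
# Sketch of context-window management: a rolling summary plus the newest
# turns that fit a budget. Word count stands in for a real token count.

def fit_context(summary, turns, budget=100):
    """Return [summary] + newest turns whose combined size fits the budget."""
    used = len(summary.split())
    kept = []
    for turn in reversed(turns):       # walk newest-first
        cost = len(turn.split())
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return [summary] + list(reversed(kept))   # restore chronological order

history = ["I'd like a refund.", "Sure, what's your order number?", "4417."]
print(fit_context("User is asking about a refund for order 4417.", history))
```

Old turns fall out of the window, but the summary keeps their substance, which is how long conversations stay coherent.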
How it works
Three simple steps: configure, upload, chat.
Configure
Create a tenant, choose AI provider and model, set parameters like temperature and max tokens.
Upload Knowledge
Upload documents to the knowledge base. TUTUR auto-chunks, embeds, and indexes for retrieval.
Start Chatting
Use the API or dashboard. TUTUR retrieves context from your knowledge base and generates accurate answers.
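The three steps above end in a plain HTTP call. A minimal Python sketch using only the standard library, with the endpoint shape taken from the API example on this page (`tutur.example.com` and the tenant ID are placeholders):

```python
# Build the chat request from the quickstart steps. The host below is a
# placeholder; substitute your deployment's base URL.

import json
import urllib.request

def chat_request(tenant_id, message, session_id):
    body = json.dumps({
        "message": message,
        "session_id": session_id,
        "stream": False,   # set True for token-by-token streaming
    }).encode()
    return urllib.request.Request(
        f"https://tutur.example.com/api/v1/tenants/{tenant_id}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("acme", "How does the refund policy work?", "sess_abc123")
print(req.full_url)
# Send with: urllib.request.urlopen(req)
```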
Simple API, powerful results
Send a message, get intelligent answers with context from your knowledge base.
// Send a chat message with RAG context
POST /api/v1/tenants/{tenant_id}/chat
{
"message": "How does the refund policy work?",
"session_id": "sess_abc123",
"stream": true
}
// Response includes RAG sources
{
"response": "Based on your refund policy document...",
"sources": ["refund-policy.pdf", "faq.md"],
"retrieval_time_ms": 45
}
Start building intelligent chat today
Free to start. Build your first chatbot in minutes.