
Don't get left behind! Effortlessly deliver state-of-the-art intelligence in your own Delphi & C++Builder applications. Enhance your apps with AI to analyse and generate data, let AI take automated control of your apps, let your customers access application functionality using natural language, or set up your own agentic flows with MCP.
Vendor: TMS Software
TMS AI Studio lets developers add AI superpowers to their Delphi and C++Builder applications: enhance apps with AI to analyse and generate data, let AI take automated control of applications, give customers access to application functionality through natural language, or set up agentic flows with MCP. It offers LLM-agnostic access to AI services such as OpenAI, Ollama, Claude, Mistral, Gemini, Grok, Perplexity, and DeepSeek. Using the standard MCP protocol, developers can create MCP-compliant servers or build powerful agentic flows with their own MCP client. TMS AI Studio also supports function calling: developers define the functions accessible to the LLM, and the LLM can smartly call application functions or MCP-based functions. RAG (Retrieval Augmented Generation) is included to optimize AI quality using a custom knowledge base, and multimodal AI support goes beyond text prompts to images, PDFs, audio files, and more, covering OCR, information extraction from images and PDFs, and audio-to-text or text-to-audio conversion.
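The "LLM-agnostic" idea above can be sketched generically, outside the TMS AI Studio API itself. The sketch below (Python, with hypothetical provider settings and model names) shows why it works: OpenAI-compatible REST endpoints all accept the same chat-completions payload, so switching providers only means swapping the base URL and model name.

```python
import json

# Hypothetical provider table: each entry is an OpenAI-compatible
# chat-completions endpoint plus a default model name (both illustrative).
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
}

def build_chat_request(provider: str, prompt: str) -> tuple[str, str]:
    """Return (url, json_body) for an OpenAI-style chat completion call.

    Only the base URL and model differ per provider; the payload shape
    stays the same, which is the essence of LLM-agnostic access.
    """
    cfg = PROVIDERS[provider]
    body = {
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{cfg['base_url']}/chat/completions", json.dumps(body)

# Same call, different backend: only the provider key changes.
url, body = build_chat_request("ollama", "Summarise this invoice.")
```

In a Delphi or C++Builder application the same abstraction would sit behind the library's components rather than a hand-built table like this one.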
Features
- Integrate AI in your own apps: Effortlessly add AI superpowers to your Delphi & C++Builder apps to analyse and generate data, control your apps with natural language, process documents, build agents, and more using LLM REST APIs and the MCP protocol.
- Easily use LLMs via REST API: Access LLMs through REST APIs to integrate intelligence, natural‑language processing, content generation, and automation directly into your applications.
- Agents via full MCP Client/Server protocol: Build agentic flows with the standard MCP protocol, create MCP‑compliant servers in record time, and integrate with any MCP client or third‑party MCP servers.
- LLM-agnostic access to AI services: Leverage a wide range of LLM services, including OpenAI (ChatGPT), Claude, Gemini, Grok, Mistral, Perplexity, DeepSeek, and Ollama, through a single abstraction, so you can switch or combine models effortlessly and stay compatible with leading AI toolchains.
- Build AI agents with MCP: Use the MCP protocol from Delphi to create MCP-compliant servers and to build powerful agentic flows with your own MCP client, combining internal and external MCP servers.
- Set up tools for function calling: Define functions with selectable parameters so the LLM can smartly call your application functions, MCP server functions, or third-party functions through tool-based interaction.
- RAG within reach: Take advantage of Retrieval Augmented Generation to optimize AI quality and performance using your application’s custom knowledge base.
- Multimodal AI: Go beyond text prompts by feeding the AI images, PDFs, audio files, and more; perform OCR, extract information from images or PDF documents, and convert audio to text and vice versa.
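The function-calling feature above follows a common pattern that can be sketched generically (Python, with a made-up `get_invoice_total` application function; this is not the TMS AI Studio API). The developer publishes a tool description, the LLM answers with a tool call, and the application routes that call to local code:

```python
import json

# Hypothetical application function the LLM is allowed to call.
def get_invoice_total(customer: str) -> float:
    totals = {"ACME": 1250.0}  # stand-in data
    return totals.get(customer, 0.0)

# Tool description in the widely used OpenAI-style function-calling shape:
# a name, a free-text description, and a JSON Schema for the parameters.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_invoice_total",
        "description": "Return the open invoice total for a customer.",
        "parameters": {
            "type": "object",
            "properties": {"customer": {"type": "string"}},
            "required": ["customer"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to the matching local function."""
    if tool_call["name"] == "get_invoice_total":
        args = json.loads(tool_call["arguments"])
        return str(get_invoice_total(**args))
    raise ValueError(f"unknown tool: {tool_call['name']}")

# Simulated tool call, as it would arrive back from the LLM:
result = dispatch({"name": "get_invoice_total",
                   "arguments": '{"customer": "ACME"}'})
```

The dispatch result is sent back to the model as the tool's output, letting it compose a natural-language answer around live application data.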
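The RAG feature above boils down to a retrieve-then-augment step that can be sketched in miniature (Python; the knowledge-base snippets and embedding vectors are made up, and a real pipeline would obtain vectors from an embedding model):

```python
import math

# Toy knowledge base: (snippet, embedding) pairs. In a real RAG pipeline
# the vectors come from an embedding model; these 3-D stand-ins are fake.
KB = [
    ("Refund policy: refunds within 30 days.", [0.9, 0.1, 0.0]),
    ("Shipping: orders ship within 2 days.",   [0.1, 0.9, 0.0]),
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the k knowledge-base snippets closest to the query vector."""
    ranked = sorted(KB, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def augment_prompt(question, query_vec):
    """Prepend retrieved context to the question (the 'A' in RAG)."""
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = augment_prompt("Can I get my money back?", [0.8, 0.2, 0.0])
```

The augmented prompt is what actually goes to the LLM, grounding its answer in the application's own knowledge base instead of the model's training data alone.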