Article #8 February 21, 2026

Introducing Thinking Prompt

Agentic Data Engineering in varCHAR

Mallesh Madapathi
Founder & CEO, ThinkingDBx

Today we're introducing Thinking Prompt — a unified agentic workspace inside varCHAR that combines a Terminal, AI Chat, and Codegen into a single experience where AI agents don't just suggest — they take action.

Three Tabs, One Workspace

Thinking Prompt is built around three integrated tabs that cover the full lifecycle of data engineering work:

  • Terminal — Execute SSH commands on remote servers, inspect infrastructure, manage deployments
  • AI Chat — Converse with AI agents that have real tool access to your databases, servers, and MCP tools
  • Codegen — Manage AI-generated PySpark and SQL scripts, view execution logs, re-run jobs with one click

AI Agent with Real Tool Calling

Unlike simple AI assistants that only generate text, Thinking Prompt agents operate through a tool-calling loop — the LLM decides which tools to invoke, receives real results, reasons about them, and calls more tools until the task is complete.
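The loop described above can be sketched in a few lines of Python. This is a minimal illustration under assumptions, not varCHAR's actual implementation: the tool registry, message format, and the scripted `fake_llm` stand-in are all hypothetical.

```python
# Minimal sketch of an agent tool-calling loop. Tool names and the
# fake_llm stub are illustrative assumptions, not varCHAR's API.

TOOLS = {
    "list_connections": lambda args: ["postgres_prod", "mysql_staging"],
    "discover_schema": lambda args: {"orders": ["id", "total", "created_at"]},
}

def run_agent(task, llm, max_steps=5):
    """Alternate between the LLM choosing a tool and executing it,
    until the LLM decides the task is complete."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = llm(history)               # LLM picks a tool, or finishes
        if decision["action"] == "final":
            return decision["answer"]
        result = TOOLS[decision["action"]](decision.get("args", {}))
        history.append({"role": "tool", "name": decision["action"],
                        "content": result})   # feed real results back in
    return "max steps reached"

# Scripted stand-in for a real LLM, for demonstration only:
def fake_llm(history):
    if not any(m["role"] == "tool" for m in history):
        return {"action": "list_connections", "args": {}}
    return {"action": "final", "answer": "Found 2 connections."}
```

The key property is that tool results go back into the conversation history, so each LLM call reasons over real data rather than guesses.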

Database Tools

List connections, discover schemas, query tables, batch multi-table fetches

SSH Tools

Execute commands on remote servers, read files from remote hosts

MCP Tools

Connect any Model Context Protocol server for extensible tool access

Code Generation

Generate, save, and execute SQL and PySpark scripts directly

Multi-Model BYOK

Thinking Prompt is model-agnostic. Connect your preferred LLM provider — or let different users choose different models. Each user configures their own API key, model, and context window size.

Supported providers: OpenAI, Anthropic, Google Gemini, Ollama, Groq, HuggingFace, OpenRouter, and custom endpoints.

Model Context Protocol (MCP)

Thinking Prompt includes a native MCP client, so the agent can connect to any MCP-compatible server and use its tools as if they were built-in. Configure connections via stdio or HTTP transport, mark them as auto-connect, and the agent's system prompt is automatically enriched with available tools.

  • Connect a Postgres MCP server for advanced database operations
  • Add a GitHub MCP server for repository management
  • Plug in custom MCP servers for proprietary data sources
  • Chain multiple MCP servers for cross-system workflows
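An MCP server connection over stdio or HTTP transport might be declared along these lines. The keys and the example servers are assumptions for illustration, not varCHAR's actual configuration format:

```python
# Illustrative MCP connection entries; keys are assumptions, not
# varCHAR's actual configuration format.
mcp_servers = [
    {
        "name": "postgres",
        "transport": "stdio",       # spawn a local server process
        "command": ["npx", "@modelcontextprotocol/server-postgres",
                    "postgresql://localhost/mydb"],
        "auto_connect": True,
    },
    {
        "name": "github",
        "transport": "http",        # connect to a remote MCP server
        "url": "https://example.com/mcp",
        "auto_connect": False,
    },
]

# On startup, auto-connect servers would be dialed and their tools
# merged into the agent's system prompt:
auto = [s["name"] for s in mcp_servers if s["auto_connect"]]
```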

PySpark Execution Engine

A production-grade PySpark runtime that runs AI-generated scripts in isolated subprocesses. The runtime auto-generates a wrapper that initializes SparkSession, injects credentials (auto-deleted after read), and provides helper functions like read_sql(), write_table(), and execute_sql().
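The auto-generated wrapper can be pictured as a string template the runtime wraps around the AI-generated script before launching the subprocess. This is a simplified sketch under assumptions; the template contents, helper signatures, and file paths are illustrative, not varCHAR's runtime:

```python
# Sketch of wrapper generation around an AI-generated script
# (template details are assumptions, not varCHAR's runtime).
WRAPPER_TEMPLATE = '''
import json, os
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName({app_name!r}).getOrCreate()

# Credentials are read once from a temp file, then deleted.
with open({cred_path!r}) as f:
    _creds = json.load(f)
os.remove({cred_path!r})

def read_sql(query, conn):
    return (spark.read.format("jdbc")
            .option("url", _creds[conn]["url"])
            .option("query", query).load())

def write_table(df, table, conn):
    (df.write.format("jdbc")
       .option("url", _creds[conn]["url"])
       .option("dbtable", table).mode("overwrite").save())

# --- AI-generated user script follows ---
{user_script}
'''

def build_wrapper(user_script, app_name="etl_job",
                  cred_path="/tmp/creds.json"):
    return WRAPPER_TEMPLATE.format(user_script=user_script,
                                   app_name=app_name, cred_path=cred_path)
```

The generated string would then be written to disk and run in its own subprocess, so a failing script cannot take down the host application.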

Auto-Dependency Install

Missing packages detected and installed before execution

Real-time Log Streaming

stdout/stderr streamed to browser via WebSocket

Credential Isolation

Credentials written to temp file, deleted immediately after read
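The write-then-delete handoff above follows a standard pattern: the parent process writes credentials to a private temp file, and the subprocess deletes the file the moment it has read them. A minimal illustration of the pattern (not varCHAR's implementation):

```python
# Minimal version of the write-then-delete credential handoff;
# an illustration of the pattern, not varCHAR's implementation.
import json, os, tempfile

def write_credentials(creds: dict) -> str:
    """Parent process: write credentials to a private temp file."""
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(creds, f)
    os.chmod(path, 0o600)   # readable by the owner only
    return path

def read_credentials(path: str) -> dict:
    """Subprocess: read once, then delete immediately so the
    credentials never outlive the read."""
    with open(path) as f:
        creds = json.load(f)
    os.remove(path)
    return creds
```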

Cluster Support

Local, Spark standalone, YARN, Kubernetes

Cross-Database ETL

The agent builds execution plans that span multiple databases in a single operation. Read from PostgreSQL, transform with PySpark, write to MySQL, and create reporting views on Snowflake — all orchestrated autonomously.

Supported databases: PostgreSQL, MySQL, Oracle, SQL Server, and Snowflake.
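A cross-database operation like the one above can be pictured as an execution plan the agent assembles and then walks step by step. The plan shape, step names, and connection names here are illustrative assumptions, not varCHAR internals:

```python
# Illustrative shape of an agent-built cross-database execution plan
# (structure and names are assumptions, not varCHAR internals).
plan = [
    {"step": "extract", "engine": "postgresql", "conn": "pg_prod",
     "sql": "SELECT * FROM orders WHERE created_at >= CURRENT_DATE - 7"},
    {"step": "transform", "engine": "pyspark",
     "script": "daily = orders.groupBy('created_at').sum('total')"},
    {"step": "load", "engine": "mysql", "conn": "mysql_dw",
     "table": "daily_order_totals"},
    {"step": "view", "engine": "snowflake", "conn": "sf_reporting",
     "sql": "CREATE OR REPLACE VIEW v_daily_orders AS "
            "SELECT * FROM daily_order_totals"},
]

engines = [s["engine"] for s in plan]
```

Each step targets a different engine, which is what lets a single request fan out across PostgreSQL, PySpark, MySQL, and Snowflake without the user stitching the systems together by hand.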

Smart Intent Routing

Every user message is classified by the LLM into one of three execution paths, ensuring the right strategy for every request:

AGENT

Complex tasks requiring tool use — multi-step exploration, cross-database operations

CODEGEN

Tasks that need executable code — ETL scripts, data transformations, ML pipelines

SMART

Conversational responses using context from connected systems
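The three-way routing can be sketched as a classifier that maps a message to one of the paths. In the product the classification is done by the LLM; the keyword heuristic below is only a stand-in to make the routing concrete:

```python
# Keyword-based stand-in for the LLM intent classifier. In the real
# product an LLM does the classification; this heuristic only
# illustrates the three-way routing.
def route(message: str) -> str:
    text = message.lower()
    if any(w in text for w in ("script", "pipeline", "generate code", "etl")):
        return "CODEGEN"    # needs executable code
    if any(w in text for w in ("migrate", "compare", "across", "inspect")):
        return "AGENT"      # needs multi-step tool use
    return "SMART"          # conversational answer with context
```

For example, "Generate an ETL script from orders to the warehouse" routes to CODEGEN, "Compare row counts across postgres and mysql" to AGENT, and "What does this table store?" to SMART.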

Experience Thinking Prompt

Thinking Prompt is available now in varCHAR. Connect your AI provider, point it at your databases, and let agents do the engineering.

Learn More About varCHAR

Questions or feedback? Contact us at contact@thinkingdbx.com