Today we're introducing Thinking Prompt — a unified agentic workspace inside varCHAR that combines a Terminal, AI Chat, and Codegen into a single experience where AI agents don't just suggest — they take action.
Three Tabs, One Workspace
Thinking Prompt is built around three integrated tabs that cover the full lifecycle of data engineering work:
- Terminal — Execute SSH commands on remote servers, inspect infrastructure, manage deployments
- AI Chat — Converse with AI agents that have real tool access to your databases, servers, and MCP tools
- Codegen — Manage AI-generated PySpark and SQL scripts, view execution logs, re-run jobs with one click
AI Agent with Real Tool Calling
Unlike simple AI assistants that only generate text, Thinking Prompt agents operate through a tool-calling loop — the LLM decides which tools to invoke, receives real results, reasons about them, and calls more tools until the task is complete.
- Database — List connections, discover schemas, query tables, batch multi-table fetches
- SSH — Execute commands on remote servers, read files from remote hosts
- MCP — Connect any Model Context Protocol server for extensible tool access
- Codegen — Generate, save, and execute SQL and PySpark scripts directly
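The tool-calling loop described above can be sketched in a few lines. The tool registry and the scripted `call_llm()` stub below are hypothetical stand-ins for the real agent and model, not varCHAR's actual implementation:

```python
# Minimal sketch of an agentic tool-calling loop. The tools and the
# scripted call_llm() stub stand in for a real LLM and real connectors.

TOOLS = {
    "list_connections": lambda: ["pg_prod", "mysql_analytics"],
    "discover_schema": lambda conn: {"orders": ["id", "total", "created_at"]},
}

def call_llm(history):
    # Stand-in for a real LLM call: pick the next tool from what is known so far.
    if not any(step["tool"] == "list_connections" for step in history):
        return {"tool": "list_connections", "args": {}}
    if not any(step["tool"] == "discover_schema" for step in history):
        return {"tool": "discover_schema", "args": {"conn": "pg_prod"}}
    return {"done": True, "answer": "pg_prod exposes an 'orders' table."}

def run_agent(max_steps=10):
    history = []
    for _ in range(max_steps):
        decision = call_llm(history)
        if decision.get("done"):  # the LLM decides when the task is complete
            return decision["answer"], history
        result = TOOLS[decision["tool"]](**decision["args"])  # real tool result
        history.append({"tool": decision["tool"], "result": result})
    raise RuntimeError("step budget exhausted")

answer, trace = run_agent()
```

The key property is that tool results feed back into the next model call, so the agent reasons over real data rather than guessing.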
Multi-Model BYOK
Thinking Prompt is model-agnostic. Connect your preferred LLM provider — or let different users choose different models. Each user configures their own API key, model, and context window size.
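Per-user configuration might look like the sketch below; the field names and schema are illustrative assumptions, not varCHAR's actual settings format:

```python
# Hypothetical per-user BYOK settings; the schema is illustrative only.
user_settings = {
    "alice": {"provider": "openai", "model": "gpt-4o",
              "api_key": "sk-...", "context_window": 128_000},
    "bob": {"provider": "anthropic", "model": "claude-sonnet",
            "api_key": "sk-...", "context_window": 200_000},
}

def resolve_model(user):
    # Each user's requests are routed with their own key and model choice.
    cfg = user_settings[user]
    return cfg["provider"], cfg["model"]
```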
Model Context Protocol (MCP)
Thinking Prompt includes a native MCP client, so the agent can connect to any MCP-compatible server and use its tools as if they were built-in. Configure connections via stdio or HTTP transport, mark them as auto-connect, and the agent's system prompt is automatically enriched with available tools.
- Connect a Postgres MCP server for advanced database operations
- Add a GitHub MCP server for repository management
- Plug in custom MCP servers for proprietary data sources
- Chain multiple MCP servers for cross-system workflows
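A connection registry covering both transports might look like this sketch; the entry shape, server names, and command are illustrative assumptions, not varCHAR's actual configuration schema:

```python
# Illustrative MCP connection entries for stdio and HTTP transports.
# Field names and the server command are hypothetical examples.
mcp_servers = [
    {"name": "postgres", "transport": "stdio",
     "command": "mcp-server-postgres", "args": ["--dsn", "postgresql://..."],
     "auto_connect": True},
    {"name": "github", "transport": "http",
     "url": "https://example.com/mcp", "auto_connect": False},
]

def startup_connections(servers):
    # Servers marked auto_connect are dialed at startup; their tools are
    # then merged into the agent's system prompt.
    return [s["name"] for s in servers if s["auto_connect"]]
```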
PySpark Execution Engine
Thinking Prompt includes a production-grade PySpark runtime that runs AI-generated scripts in isolated subprocesses. The runtime auto-generates a wrapper that initializes a SparkSession, injects credentials (auto-deleted after read), and provides helper functions such as read_sql(), write_table(), and execute_sql().
- Dependency handling — Missing packages detected and installed before execution
- Live output — stdout/stderr streamed to the browser via WebSocket
- Credential security — Credentials written to a temp file, deleted immediately after read
- Cluster modes — Local, Spark standalone, YARN, Kubernetes
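The credential-injection pattern described above can be sketched with the standard library alone; the function names are illustrative, not the runtime's real API:

```python
import json
import os
import tempfile

def write_credentials(creds: dict) -> str:
    # The launcher writes credentials to a private temp file and passes
    # only the file path to the isolated subprocess.
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(creds, f)
    return path

def read_credentials(path: str) -> dict:
    # The generated wrapper reads the file once, then deletes it so the
    # credentials never outlive script startup.
    with open(path) as f:
        creds = json.load(f)
    os.remove(path)
    return creds

path = write_credentials({"user": "etl", "password": "secret"})
creds = read_credentials(path)
```

Deleting the file immediately after the read keeps secrets out of the subprocess's environment and off disk for the rest of the run.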
Cross-Database ETL
The agent builds execution plans that span multiple databases in a single operation. Read from PostgreSQL, transform with PySpark, write to MySQL, and create reporting views on Snowflake — all orchestrated autonomously.
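Such a plan can be pictured as an ordered list of steps; the step shape and connection names below are illustrative assumptions, not the agent's real plan format:

```python
# Hypothetical cross-database execution plan. Step fields, connection
# names, and the elided SQL are illustrative only.
plan = [
    {"op": "read", "conn": "pg_prod", "table": "orders", "out": "df"},
    {"op": "transform", "expr": "daily totals", "in": "df", "out": "agg"},
    {"op": "write", "conn": "mysql_dw", "table": "daily_orders", "in": "agg"},
    {"op": "sql", "conn": "snowflake", "stmt": "CREATE VIEW ..."},
]

def targets(plan):
    # Systems the plan touches, in order of first use.
    seen = []
    for step in plan:
        conn = step.get("conn")
        if conn and conn not in seen:
            seen.append(conn)
    return seen
```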
Smart Intent Routing
The LLM classifies every user message into one of three execution paths, so each request takes the right strategy:
- Complex tasks requiring tool use — multi-step exploration, cross-database operations
- Tasks that need executable code — ETL scripts, data transformations, ML pipelines
- Conversational responses — answers using context from connected systems
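A keyword heuristic can stand in for the LLM classifier to illustrate the routing; the path names and rules below are assumptions, since the real router uses the model itself, not fixed keywords:

```python
# Stand-in router: fixed keywords approximate what the LLM classifier
# decides. Path names ("agent", "codegen", "chat") are illustrative.
def route(message: str) -> str:
    text = message.lower()
    if any(k in text for k in ("migrate", "across", "explore", "investigate")):
        return "agent"    # multi-step tool use
    if any(k in text for k in ("script", "pipeline", "etl", "transform")):
        return "codegen"  # generate executable code
    return "chat"         # conversational answer
```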
Experience Thinking Prompt
Thinking Prompt is available now in varCHAR. Connect your AI provider, point it at your databases, and let agents do the engineering.
Learn More About varCHAR

Questions or feedback? Contact us at contact@thinkingdbx.com