Agents need to both think and act
The core objective of Agentic AI is to deliver systems capable of executing functions that are generally associated with human intelligence, such as reasoning and learning. The long-term goal is to make them exhibit human-level cognitive abilities, so agents can adapt to situations without requiring task-specific programming.
As Agentic AI evolves, we need agents that can both think and act. That requires a database that supports both real-time transactional and ad-hoc analytical workloads. Intelligent transactions can’t wait for a complex analytical query to run in one system and have delayed results shipped to another just to gain insight. Agents need to be able to ask any question, not just predetermined queries and views. They can’t rely on relatively slow (and incredibly complex and brittle) data movement. They’ll need a single relational database for both transactions and live analytics, and they’ll need all of this at scale.
Over the years, many have tried to combine OLTP and OLAP into a single database solution. However, all previous attempts encountered the “triangle problem,” where trade-offs among three primary system constraints are required:
- Speed (Performance): The rate at which operations (transactions and analytics) can be processed.
- Scale (Size): The capacity to handle ever-increasing volumes of data and user concurrency.
- Efficiency (Cost/Budget): The operational cost relative to the resources consumed.
Optimizing for all three constraints without compromise has proven to be a massive challenge, especially with the added requirement of strong distributed ACID compliance for a relational database.
Is a database an extremely complex storage system?
Some doubt that a combined, distributed OLTP/OLAP relational database is even plausible. It is an intriguing problem, and several teams have attempted to solve it. The Regatta team, however, has taken a fundamentally different approach, rethinking the solution from the ground up. Unlike other recent database innovations that treat the database as solely a compute problem, Regatta relies heavily on innovations at the storage layer of the database, combined with a wholly new and extremely efficient concurrency control protocol.
This shift in approach is the natural progression of the team’s experience over the last two-plus decades. We are pioneers of the “Software-Defined” revolution. At ScaleIO, we proved that intelligent software could replace specialized hardware and turn commodity servers into enterprise-grade storage. At XtremIO, we helped normalize the idea of the “All-Flash Data Center.” We have deep experience designing and building efficient, scale-out distributed storage solutions.
This collective experience in designing efficient, scalable systems provides the foundation for a novel storage-centric design in RegattaDB, which allows it to address the constraints of the traditional “triangle problem”.
RegattaDB: Distributed SQL for Agentic AI
RegattaDB allows you to execute complex, ad-hoc analytics on live transactional data. It delivers a distributed SQL database where guaranteed, cross-node, consistent transactions and complex analytical queries can co-exist, all without compromising performance and with extreme efficiency. Further, it incorporates vector database capabilities, scales easily, and guarantees resilience.
This combination of capabilities delivers a database that allows our modern AI agents to not just transact, but to reason using current and complete context. It enables these core Agentic AI functions that were once unthinkable:
- Transactions: Extremely efficient, scale-out coordination of actions
RegattaDB implements a patented transaction/concurrency control model that efficiently guarantees serializable isolation for distributed transactions. This helps agents coordinate with each other and, more importantly, ensures there are no conflicts between them. Further, it dramatically improves the performance and cost efficiency of the database, while still delivering easy scale and guaranteed resilience.
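To make the coordination guarantee concrete, here is a minimal sketch of the pattern serializable isolation makes safe: two agents race to claim the same task, and the database, not application code, arbitrates the conflict. This is purely illustrative; RegattaDB’s protocol is proprietary, so the sketch uses Python’s built-in SQLite (serializable by default) and a hypothetical `tasks` table.

```python
import sqlite3

# Illustrative only: SQLite stands in for a serializable SQL database.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, owner TEXT)")
conn.execute("INSERT INTO tasks VALUES (1, NULL)")

def claim(db, agent):
    # Atomic conditional update: succeeds only if the task is still unowned.
    # Under serializable isolation, two agents can never both observe the
    # task as unowned and both claim it.
    cur = db.execute(
        "UPDATE tasks SET owner = ? WHERE id = 1 AND owner IS NULL", (agent,)
    )
    return cur.rowcount == 1

first = claim(conn, "agent-a")   # True: agent-a wins the task
second = claim(conn, "agent-b")  # False: conflict detected, not silently lost
```

The key point is that conflict detection happens inside the transaction model, so agents need no external lock service to avoid stepping on each other.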
TPC-C Benchmark: As proof of these gains, we recently ran the TPC-C benchmark against 200,000 warehouses at 99% of the theoretical maximum of 12.86 tpmC per warehouse, using only 10 nodes with 14 cores each. This typically represents roughly 20x (2,000%) better cost efficiency than the published results from other distributed databases.
- Contextual Reasoning: Complex analytics on live transactional data
The unique transactional model implements a locking mechanism for isolation that does not block analytical queries. It maintains intelligent point-in-time images and coordinates with the storage layer of the database to manage them efficiently. This ability to analyze live transactional data in real time provides instant and complete context to agents so they can make immediate, informed decisions.
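The idea behind point-in-time images can be sketched with a toy multiversion store: writers append new versions, and an analytical reader pins a snapshot timestamp so later commits are invisible to it and never block it. This is a generic multiversioning illustration, not RegattaDB’s internal mechanism; the `MVStore` class and its methods are invented for the example.

```python
import itertools

class MVStore:
    """Toy multiversion key-value store (illustrative, not RegattaDB)."""

    def __init__(self):
        self._versions = {}             # key -> list of (commit_ts, value)
        self._clock = itertools.count(1)

    def write(self, key, value):
        ts = next(self._clock)          # commit timestamp for this write
        self._versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot(self):
        # Pin the current timestamp; later writes are invisible to this
        # reader, so writers never need to wait for it.
        ts = next(self._clock)
        return lambda key: next(
            (v for t, v in reversed(self._versions.get(key, [])) if t <= ts),
            None,
        )

store = MVStore()
store.write("balance", 100)
read_at = store.snapshot()    # long analytical query starts here
store.write("balance", 250)   # a transaction commits mid-query
old = read_at("balance")      # 100: stable point-in-time view
new = store.snapshot()("balance")  # 250: fresh snapshot sees the commit
```

The trade-off this design leans on is that readers pay a small version-lookup cost instead of taking locks, which is what lets analytics and transactions co-exist.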
20B-row join without indexes, with 50,000 concurrent transactions per second: To demonstrate combined OLTP/OLAP capabilities, we executed a complex JOIN of two 10-billion-row tables where the left- and right-side rows live on different nodes and evaluating the JOIN requires predicate processing between the left and right sides. In parallel with the JOIN, we also pushed 50,000 updates per second randomly across the rows. We ran this experiment on a cluster of 50 small instances, each with 64 GB of RAM and 4 cores. While most databases would be challenged to execute the JOIN alone, RegattaDB returned results in 174 seconds and successfully completed all the updates in parallel.
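The shape of that experiment can be shown in miniature: both tables are repartitioned by join key so matching rows land on the same “node,” and a hash join then applies a predicate that spans both sides, with no index involved. This is a tiny illustrative sketch, not RegattaDB’s executor; `partition`, `hash_join`, and the sample tables are all invented for the example.

```python
from collections import defaultdict

def partition(rows, n_nodes, key):
    # Shuffle rows to "nodes" by hashing the join key, so matching keys
    # from both tables end up co-located.
    shards = [[] for _ in range(n_nodes)]
    for row in rows:
        shards[hash(row[key]) % n_nodes].append(row)
    return shards

def hash_join(left_shard, right_shard, key, predicate):
    # Build a hash index on the right side, then probe with the left side,
    # applying a cross-side predicate -- no precomputed index required.
    index = defaultdict(list)
    for r in right_shard:
        index[r[key]].append(r)
    return [
        (l, r)
        for l in left_shard
        for r in index[l[key]]
        if predicate(l, r)
    ]

left = [{"id": i, "qty": i * 2} for i in range(100)]
right = [{"id": i, "limit": 50} for i in range(100)]
results = [
    pair
    for ls, rs in zip(partition(left, 4, "id"), partition(right, 4, "id"))
    for pair in hash_join(ls, rs, "id", lambda l, r: l["qty"] < r["limit"])
]
```

At real scale the hard parts are the network shuffle and doing this while absorbing concurrent updates, which is exactly what the benchmark above exercises.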
- Free Thinking: Ad-hoc, Natural Language Queries
RegattaDB provides incredible performance for analytical queries and can deliver acceptable performance even for queries that take advantage of no explicit indexes or predicates. This is critical for free-thinking agents and allows individuals (or agents) to “dream up” ad-hoc queries without deep knowledge of database constructs. And with a native MCP server, RegattaDB allows any person or agent to use natural language to create and execute these ad-hoc queries on live, “fresh” transactional data.
Agentic AI demands cost efficiencies
The triangle challenge is real, and RegattaDB has addressed it. It is also extremely resource-efficient and highly performant. It delivers a distributed SQL database without chatty consensus protocols, so it is much cheaper to run than traditional distributed SQL solutions, yet you still gain the benefits of consistent transactions at high performance, easy node-based scale, natural resilience, and reduced data-architecture complexity.
Single-node databases generally require you to overprovision server resources to absorb traffic spikes and growth. RegattaDB lets you draw on free resources across the entire cluster instead, so there is no need to overprovision any individual node, delivering on average 3-4x better footprint efficiency.
This single OLTP/OLAP database also helps eliminate costly and brittle ETL pipelines and add-on services that increase complexity and latency in your data architecture. It eliminates the tangled mess of data movement and reduces the time it takes for your team to gain insight from business data. It saves time troubleshooting these layers and reduces operational costs. Finally, it allows net-new agentic applications to be created that more closely mirror human intelligence, so you can dream bigger, grow your business, and outpace the competition.
Agentic AI demands complete context
Humans can reason because we act on live data; every action becomes input to the next. Agents will need to act the same way. At its simplest, RegattaDB allows your agents to perform ad-hoc queries on live transactional data so they can reason and act with complete context… more human-like.