

The AI Analytics Tool Built for Auditability: Info-Tech Research Group's Review
When your finance team runs a quarterly close and an AI analytics tool surfaces a number that doesn't look right, the first thing your auditors ask isn't "is this system intelligent?" It's "show us how you got there."
For most general-purpose AI tools built on large language models, that question lands in uncomfortable silence. This isn't a corner case. It's the central tension in enterprise AI adoption right now. Organizations in financial services, government, and other regulated industries have been promised productivity gains from AI analytics — and many are seeing them in pilots. But when it comes time to put those outputs in front of a regulator, a board, or an external auditor, the architecture underneath starts to matter in ways that weren't visible during procurement.
That distinction — between a system that produces a plausible output and one that produces a traceable one — is exactly what Info-Tech Research Group set out to examine in a May 2026 independent research note on Chata.ai's deterministic AI platform. Analysts Shashi Bellamkonda and Igor Ikonnikov evaluated the architecture, data handling model, cost structure, and deployment approach, and concluded that deterministic AI represents the correct architectural bet for regulated environments.
Their central argument isn't about features. It's about what the architecture produces by default. Audit traceability, repeatable outputs, and compliance-grade logging aren't things you configure on top of a deterministic system. They're what a deterministic system is.
This article walks through what Info-Tech found, why it matters for compliance-sensitive buyers, and the four questions every data team should ask before committing to any AI analytics tool.
Key Takeaways
Info-Tech Research Group analysts Shashi Bellamkonda and Igor Ikonnikov evaluated Chata.ai's no-hallucination architecture and its fit for regulated environments.
Accuracy degrades structurally in LLM workflows: a 10-step chain drops to roughly 60% accuracy even at 95% per-step reliability.
CPU-based inference eliminates the GPU cost curves that have become a boardroom concern at scale.
A full audit trail is a structural outcome of deterministic architecture — not a compliance add-on.
How Info-Tech Research Group Evaluated Chata.ai
Analysts Shashi Bellamkonda and Igor Ikonnikov conducted the evaluation independently, positioning their findings within Info-Tech's broader coverage of enterprise AI analytics tools.
Their scope covered five dimensions: architecture and data handling, cost model, deployment flexibility, privacy posture, and fit with target verticals — specifically financial services, banking, government, healthcare, supply chain, and transportation. These aren't adjacent markets. They're sectors where an incorrect number carries regulatory weight, and where "the model was confident" offers no legal protection.
Four evaluation questions Info-Tech says every stakeholder should ask before selecting an AI analytics tool:
When I look at a computed number, can your team prove exactly how it was computed?
Does the system produce the same output for the same query, every time?
Does customer data enter the model's training process at any point?
What is the full cost of running this at scale — and how does that change with query volume?
For most general-purpose LLM-based AI analytics tools, at least two of those four questions have no satisfying answer. That's not a criticism of how those tools are built. It's a description of what they were built for.
What the Research Found — Architecture Over Governance
The core architectural insight in the Info-Tech note is worth stating plainly, because it reframes how buyers should think about AI analytics tools in regulated contexts.
Most enterprise AI governance frameworks approach the problem the same way: take a general-purpose model, add guardrails, logging, and output filters, and call the result compliant. Info-Tech's analysis pushes back on that model — not by criticizing governance as a concept, but by identifying where it can't reach. Compliance-grade logging bolted onto a probabilistic system still can't tell you why the model chose one output over another. The non-determinism isn't in the governance layer. It's in the weights.
Chata.ai's approach, as the research describes it, is structurally different. The model trains on schema — the structure and relationships of a customer's data — not on the data itself. Customer data never enters the training process. Queries execute as deterministic computations against live structured data, which means the same query returns the same result every time, with a complete trace of how it was computed.
Zero-Data-Movement Architecture
Info-Tech Research Group's May 2026 analysis notes that Chata.ai trains its model on data schema rather than data content. Customer data does not move into or through the training pipeline. Every query executes as a deterministic computation against structured data, producing a full audit trail as a structural output — not as a logging feature added after the fact.
Source: Info-Tech Research Group / SoftwareReviews, May 2026
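To make "audit trail as a structural output" concrete, here is a minimal sketch of the idea: a query executor that returns every result alongside a trace recording exactly what was run, against what, and a fingerprint of what came back. The function name, trace fields, and in-memory ledger table are illustrative assumptions, not Chata.ai's actual implementation.

```python
import hashlib
import json
import sqlite3
from datetime import datetime, timezone

def run_audited_query(conn, sql, params=()):
    """Execute a deterministic SQL query and return the rows together with
    a trace that records what was computed and a fingerprint of the result."""
    rows = conn.execute(sql, params).fetchall()
    trace = {
        "sql": sql,
        "params": list(params),
        "executed_at": datetime.now(timezone.utc).isoformat(),
        # Hashing the result makes repeatability independently checkable:
        "result_hash": hashlib.sha256(json.dumps(rows).encode()).hexdigest(),
    }
    return rows, trace

# Demo against a tiny in-memory ledger table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (account TEXT, amount REAL)")
conn.executemany("INSERT INTO ledger VALUES (?, ?)",
                 [("A", 100.0), ("A", 50.0), ("B", 25.0)])
rows, trace = run_audited_query(
    conn, "SELECT SUM(amount) FROM ledger WHERE account = ?", ("A",))
print(rows)  # [(150.0,)]
```

Because the computation is a structured query rather than probabilistic inference, rerunning it yields the same rows and the same `result_hash` — which is precisely the property an auditor can verify.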
The accuracy compounding problem is where this becomes concrete for data teams. A single LLM query might hit 95% accuracy — acceptable in many contexts. But AI analytics tools are rarely single-step. A 10-step workflow running at 95% per step compounds to roughly 60% end-to-end accuracy. For a financial reconciliation, a regulatory report, or a supply chain alert, that degradation isn't a product limitation to work around. It's a disqualifying characteristic.
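The arithmetic behind that degradation is simple to verify: if each step succeeds independently with probability p, a chain of n steps succeeds end-to-end with probability p^n.

```python
def end_to_end_accuracy(per_step_accuracy: float, steps: int) -> float:
    """Compound per-step reliability across a chain of dependent steps,
    assuming each step must be correct for the final output to be correct."""
    return per_step_accuracy ** steps

# The 10-step, 95%-per-step chain from the Info-Tech figure:
print(round(end_to_end_accuracy(0.95, 10), 3))  # 0.599
```

The same function also shows how quickly the problem worsens: at 20 steps, 95% per-step reliability compounds to roughly 36% end-to-end.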
On inference cost, the research cites a vendor-supplied figure — explicitly flagged as unverified — showing a meaningful cost differential between CPU-based deterministic inference and GPU-dependent LLM inference at scale. The disclosure matters. It tells you the analysts found the claim plausible enough to include, but honest enough not to rubber-stamp.
Why the CPU Cost Advantage Matters Right Now
The cost argument for deterministic AI analytics tools has sharpened considerably in 2025 and 2026 — not because the technology changed, but because the procurement context did.
Organizations that ran early AI pilots on GPU infrastructure often did so with innovation budgets that absorbed the cost without much scrutiny. Production deployments at enterprise scale don't have that buffer. When query volume grows from a handful of analysts to hundreds of business users asking questions daily, the GPU inference cost curve becomes a line item that finance teams notice. In some organizations, it's already a reason to pull back deployments that are technically working.
CPU-based deterministic inference doesn't carry that curve. The system executes structured computations rather than running probabilistic inference through large model weights, so the cost profile scales differently — and more predictably. The Info-Tech note flags this as a commercially relevant advantage, while noting that the specific differential is vendor-supplied and not independently verified. That's the right framing: a plausible structural claim buyers should test against their own volume projections, not a marketing figure to accept at face value.
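Testing the claim against your own volume projections can be as simple as a two-line cost model. The per-query figures below are made-up placeholders for illustration only — they are not from the Info-Tech note or any vendor, and the point is the shape of the curve, not the numbers.

```python
def monthly_cost(queries_per_month: int, per_query_cost: float,
                 fixed_cost: float = 0.0) -> float:
    """Linear cost model: fixed platform cost plus per-query inference cost."""
    return fixed_cost + queries_per_month * per_query_cost

# Hypothetical placeholder rates (replace with your vendor quotes):
CPU_PER_QUERY = 0.0004  # assumed: structured computation on CPU
GPU_PER_QUERY = 0.012   # assumed: LLM inference on GPU

for volume in (10_000, 100_000, 1_000_000):
    cpu = monthly_cost(volume, CPU_PER_QUERY)
    gpu = monthly_cost(volume, GPU_PER_QUERY)
    print(f"{volume:>9,} queries/mo  CPU ${cpu:>9,.0f}  GPU ${gpu:>9,.0f}")
```

Whatever the real rates turn out to be, running this at pilot volume and at 10x or 100x projected volume is what makes a "cost at scale" conversation with the vendor concrete.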
The timing observation in the research is pointed. This cost advantage has arrived at exactly the moment when enterprise AI budgets are being scrutinized rather than expanded. For buyers in regulated verticals who were already looking for architectural reasons to prefer deterministic AI analytics tools, the cost argument is now a second, independent reason to make the same choice.
Which Industries Are Deploying Deterministic AI Analytics Tools — and Why?
The verticals named in the Info-Tech research aren't a random sample. Financial services, banking, government, healthcare, supply chain, and transportation share a structural characteristic: in each of them, an incorrect number produced by an AI analytics tool can trigger a regulatory consequence, not just an operational inconvenience.
That's the common thread the research identifies. It isn't that these industries are more sophisticated in their AI adoption. It's that the failure mode of a wrong answer is qualitatively different. A hallucinated insight in a consumer recommendation engine is an annoyance. A hallucinated figure in a Basel III capital adequacy report, a government procurement audit, or a pharmaceutical supply chain compliance filing is a material event.
The Info-Tech note also highlights two capabilities that explain the "why now" for these verticals. First, proactive analytics and composite alerting — the ability to monitor multiple data streams simultaneously and surface anomalies before they become incidents. This shifts the use case from reactive querying to continuous oversight, which is exactly what risk and compliance functions are paid to do.
Second, the legacy system bridge approach. Most organizations in financial services, government, and healthcare are not running on modern cloud-native infrastructure. They have Oracle databases, SAP systems, and decades of accumulated technical debt. An AI analytics tool that requires a clean modern data stack is, practically speaking, a tool that doesn't function in most regulated enterprises. The deterministic approach in the research connects to existing infrastructure by reading schema, not by requiring migration.
The Four Questions Every Buyer Should Ask Before Choosing an AI Analytics Tool
The four evaluation questions in the Info-Tech research work because they target architecture, not features. Most AI analytics tool evaluations focus on the UI, the integrations list, the supported data sources, and the price. Those things matter. But they don't expose the structural properties that determine whether a tool is appropriate for a compliance-sensitive environment.
Here's how to use them as a due diligence framework — not a checklist, but a way of probing what's underneath the demo.
1. Can you prove how any computed number was produced? Ask this directly, then ask for a live demonstration — not a screenshot or a slide. If the vendor can't show you a complete computation trace in the product itself, the answer is no regardless of what the documentation says.
2. Does the system produce identical outputs for identical queries? Non-determinism isn't always obvious. Run the same query three times at different times of day if possible, and compare results. For LLM-based tools, variation is expected and by design. For a tool marketed as deterministic, any variation is worth investigating.
3. Does customer data enter the model's training process? If the model is fine-tuned or continuously updated on customer data, the outputs reflect that data — and the training process may itself be subject to data governance obligations. Schema-only training avoids this entirely.
4. What does the cost curve look like at your actual query volume? Ask for pricing at 10x your current volume, not just current usage. The difference between GPU-based and CPU-based inference becomes visible at scale, not at pilot stage.
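The repeatability probe in question 2 can be scripted. Here is a sketch: submit the same query several times and count the distinct result fingerprints. The `ask` callable is a placeholder for whatever client call the vendor's tool actually exposes; a deterministic system should produce exactly one fingerprint.

```python
import hashlib
import json

def repeatability_check(ask, query: str, runs: int = 3) -> set:
    """Run the same query several times and return the set of distinct
    result fingerprints. A deterministic system yields exactly one."""
    fingerprints = set()
    for _ in range(runs):
        result = ask(query)
        fingerprints.add(hashlib.sha256(
            json.dumps(result, sort_keys=True).encode()).hexdigest())
    return fingerprints

# With a deterministic stub standing in for the vendor client, all runs match:
deterministic_stub = lambda q: {"total": 150.0}
print(len(repeatability_check(deterministic_stub, "sum of account A")))  # 1
```

For an LLM-based tool the set will often have more than one element, which is the variation the question is designed to surface.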

The Audit Trail Question
Info-Tech Research Group's May 2026 analysis frames a single question as the most important test for any AI analytics tool in a regulated environment: "When I look at a computed number, can your team prove exactly how it was computed?" For probabilistic systems, the architecture does not permit a complete answer. For deterministic systems, the answer exists as a structural property of every query.
Source: Info-Tech Research Group / SoftwareReviews, May 2026
Frequently Asked Questions About AI Analytics Tools and Compliance
Can AI analytics tools provide a full audit trail for compliance?
Only if the architecture produces one by design. LLM-based AI analytics tools generate outputs through probabilistic inference, so there is no complete computation trace to retrieve. Deterministic AI analytics tools execute structured queries against data in place, producing a full audit trail as a structural output of every query. Info-Tech Research Group's May 2026 analysis identifies this architectural distinction as the key differentiator for regulated environments.
What industries benefit most from deterministic AI analytics tools?
Financial services, banking, government, healthcare, supply chain, and transportation are the primary verticals. The common factor is that an incorrect number produced by an AI analytics tool can trigger a regulatory consequence — making auditability an operational requirement, not a feature request.
How does Chata.ai prevent hallucinations in data analysis?
Chata.ai trains on data schema — the structure and relationships of customer data — not on the data itself. Queries execute as deterministic computations against live structured data, eliminating the probabilistic inference step where hallucinations originate.
What should regulated industries look for in an AI analytics platform?
Four criteria from Info-Tech Research Group's evaluation framework: a provable computation trace for every output; repeatable results for identical queries; a training process that doesn't involve customer data; and a cost model that remains predictable at scale. A deterministic architecture satisfies all four by design. Most general-purpose LLM-based AI analytics tools satisfy one or two.
The Architectural Question Is the Only Evaluation Question That Matters
The Info-Tech Research Group analysis doesn't conclude that all AI analytics tools built on LLMs are wrong for enterprise use. It concludes something more specific and more useful: for regulated environments where every computed number must be traceable to its source, the architecture underneath the tool is what determines whether compliance is possible at all.
Governance layers, compliance add-ons, and audit logging features are not architectural equivalents to a system that produces traceability by design. They're compensating controls — downstream of the problem they're trying to solve.
If your compliance team needs to answer "how was this computed?" for any output your AI analytics tool produces, the architecture question is the only evaluation question that matters. The rest is product selection within a valid category — or, if the architecture doesn't support it, product selection from the wrong category entirely.
Read the full Info-Tech Research Group analysis to review the analysts' evaluation methodology and findings directly. Independent research: Info-Tech Research Group / SoftwareReviews, May 2026.
Analysts: Shashi Bellamkonda, Igor Ikonnikov.