Essay · May 5, 2026 · 14 min read

The Inference Economy: Tabular AI, Convergent Systems, and the Future of Work

By Fredrik Bratten


Key Takeaways

  • AI is moving from content generation toward operational inference over structured data: the inference economy.
  • Tabular foundation models like TabPFN are reshaping which structured-data problems need bespoke machine-learning pipelines.
  • The realistic operational AI stack combines LLMs, tabular models, graphs, time-series, retrieval, agents, workflows, and human governance.
  • The labor-market change is task compression first; routine judgment work is most exposed, and entry-level apprenticeship paths need redesign.
  • AI systems that touch real workflows need evaluation, traceability, and escalation design from the start. Governance becomes part of the architecture.

Who this is for

Engineering and product leaders, AI practitioners, and operators evaluating how AI moves from content generation toward operational inference inside real workflows, and what that shift requires from systems, governance, and people.

Most public discussion about artificial intelligence still revolves around chatbots, image generators, copilots, and coding assistants. These systems matter, but they do not describe the whole direction of travel. A quieter shift is happening around AI systems that operate on tables, logs, transactions, alerts, tickets, customer records, workflows, and operational events. This piece is an attempt to understand how those AI systems can move from isolated tools toward operational systems that remain inspectable, governable, and human-centered.

That shift matters because most organizations do not run only on text. They run on structured state.

Customer data lives in CRM systems. Financial activity lives in transaction tables. Security posture lives in telemetry, alerts, assets, users, hashes, IPs, and timestamps. Operations teams work through tickets, dashboards, queues, incidents, and status fields. Business decisions are often made from rows and columns long before they become reports or slide decks.

This is where tabular AI becomes important.

From content generation to operational inference

The first broad wave of generative AI helped people produce and interpret content. It made it easier to summarize documents, write drafts, generate code, create images, and ask natural-language questions.

The next phase is different. AI is moving closer to the operating core of organizations.

In the older pattern, a human reads data, interprets the situation, makes a decision, and updates a system. In the emerging pattern, an AI system evaluates a business event, ranks or predicts its significance, prepares a next step, and routes the case to a person, workflow, or agentic process.

That is the beginning of what we can call the inference economy.

Inference is the moment a model applies learned patterns to new input. It may produce a classification, prediction, score, recommendation, summary, or proposed action. In a chatbot, inference happens when a user asks a question. In an operational AI system, inference can happen whenever the organization changes state.

Examples include:

  • A new security alert appears.
  • A payment looks unusual.
  • A customer record changes.
  • A support ticket is opened.
  • A machine sensor crosses a threshold.
  • A lead enters the pipeline.
  • A user account behaves differently from its baseline.
  • A project status field changes.

Each event becomes something the system can evaluate.
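The emerging pattern described above, in which a system evaluates an event, ranks its significance, and routes it onward, can be sketched in a few lines. This is a minimal illustration with invented names and a placeholder heuristic standing in for a trained model; it is not any particular product's interface.

```python
from dataclasses import dataclass

@dataclass
class BusinessEvent:
    kind: str          # e.g. "security_alert", "payment", "support_ticket"
    payload: dict      # structured fields the model would consume

def score_event(event: BusinessEvent) -> float:
    """Stand-in for the inference step: map a structured event
    to a significance score in [0, 1]."""
    # Placeholder heuristic; a real system would call a trained model here.
    weights = {"security_alert": 0.9, "payment": 0.6, "support_ticket": 0.3}
    return weights.get(event.kind, 0.1)

def route(event: BusinessEvent, escalate_above: float = 0.8) -> str:
    """Prepare the next step: escalate to a person or hand off
    to an automated workflow."""
    score = score_event(event)
    return "human_review" if score >= escalate_above else "auto_workflow"

alert = BusinessEvent("security_alert", {"severity": "high"})
ticket = BusinessEvent("support_ticket", {"priority": "low"})
print(route(alert))   # high-significance events go to a person
print(route(ticket))  # routine events go to an automated workflow
```

The interesting design decision is not the scoring function but the threshold and routing policy, which is exactly where governance enters the architecture.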

This changes the economics of AI. The important question is no longer only "How many people used the chatbot today?" It becomes "How many business events did AI evaluate today?"

Stanford's 2025 AI Index reported that the inference cost for a system performing at GPT-3.5 level on MMLU dropped from $20 per million tokens in November 2022 to $0.07 per million tokens by October 2024, a more than 280-fold reduction. The same report notes annual hardware cost declines of about 30% and energy-efficiency improvements of about 40%. [Stanford HAI, AI Index 2025]

Cheaper inference does not simply reduce cost. It changes which workflows become economically viable.
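Using the two price points cited above, a rough back-of-the-envelope calculation shows why continuous per-event evaluation becomes viable. The event volume and tokens-per-event figures here are illustrative assumptions, not data from the report.

```python
# Per-day inference cost at the two cited price points for
# GPT-3.5-level quality on MMLU: $20 vs $0.07 per million tokens.
def daily_cost(events_per_day: int, tokens_per_event: int,
               usd_per_million_tokens: float) -> float:
    return events_per_day * tokens_per_event * usd_per_million_tokens / 1_000_000

# Illustrative workload: evaluating 100,000 events/day at 2,000 tokens each.
old = daily_cost(100_000, 2_000, 20.00)   # Nov 2022 pricing
new = daily_cost(100_000, 2_000, 0.07)    # Oct 2024 pricing
print(f"${old:,.2f}/day -> ${new:,.2f}/day")  # $4,000.00/day -> $14.00/day
```

At the older price, evaluating every event in this workload costs thousands of dollars a day; at the newer price it costs less than a lunch, which is the difference between a pilot and an always-on system.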

Why tabular AI matters

Tabular data is structured data arranged in rows and columns. It includes numerical values, categories, timestamps, identifiers, missing values, labels, and relationships between fields. This is the format behind spreadsheets, databases, CRM exports, security alerts, invoices, forecasts, and operational records.

For many years, tabular machine learning has been dominated by methods such as XGBoost, LightGBM, and CatBoost. These methods remain strong because real business data is messy, mixed, incomplete, and often smaller than the datasets used to train large language or vision models.

But tabular foundation models are now challenging the assumption that every structured-data problem needs a bespoke machine-learning pipeline.

TabPFN is one of the most visible examples. The Nature paper on TabPFN describes a foundation model for tabular data that supports classification, regression, categorical variables, missing values, and outliers, and that generalizes to new datasets in a single forward pass without per-task retraining. [Hollmann et al., Nature 2025]

The field is also moving past the original small-data framing. Prior Labs' TabPFN-2.5 report scaled in-context learning to 50,000 data points and 2,000 features, roughly 20 times the data-cell capacity of TabPFNv2. Public documentation now positions TabPFN-2.6 for workloads up to 100,000 samples and 2,000 features, while some model cards remain more conservative at 50,000 samples and 2,000 features depending on the deployment path. SAP also describes TabPFN-2.6 as the top-performing model on TabArena, a benchmark for tabular foundation models. [Prior Labs TabPFN-2.5 report; Prior Labs model documentation; SAP Prior Labs announcement]

A separate research line, TabICL, takes a related route using a column-then-row attention design, with TabICLv2 generalizing to million-scale tabular inference under 50GB GPU memory through CPU and disk offloading. [TabICL; TabICLv2]

The point is not that tabular foundation models replace traditional machine-learning pipelines today. They do not. The point is that the boundary is moving. Structured-data prediction is becoming more reusable, more model-driven, and more directly connectable to enterprise workflows.

This matters because business AI will not be solved by language models alone.

Recent market signal: SAP and Prior Labs

The SAP/Prior Labs acquisition is an important signal because it connects tabular foundation models directly to enterprise software.

SAP announced in May 2026 that it had reached an agreement to acquire Prior Labs, describing the company as a pioneer in tabular foundation models. SAP also said it plans to invest more than €1 billion over four years to scale Prior Labs into a frontier AI lab focused on structured business data. [SAP Prior Labs announcement]

This is more than a conventional AI acquisition. SAP's core domain is enterprise state: finance, supply chain, HR, procurement, planning, assets, and business processes. A move into tabular foundation models suggests that enterprise AI is shifting from chat interfaces toward structured prediction, business-process context, and operational decision support.

The European framing also matters. Prior Labs is headquartered in Freiburg, with offices in Berlin and New York, and SAP is German. SAP frames the deal as building "a globally leading frontier AI lab in Europe." That makes it one of the clearest European frontier-AI bets in the tabular foundation model category, especially in a market narrative often dominated by US LLM labs and Chinese model releases.

The same pattern shows up across other early-May 2026 signals: AI infrastructure spending continues to pull chips and memory upward, inference economics are becoming a distinct market category, AI governance is fragmenting into sector-specific guidance, and enterprise AI is being framed as an operating-model question rather than a tooling upgrade. SAP/Prior Labs is one expression of that broader convergence, not an isolated event.

Convergent technology, not one-model thinking

The future of operational AI is not one giant model doing everything.

A more realistic stack combines several kinds of systems:

  • Large language models for interpretation, explanation, synthesis, planning, and communication.
  • Tabular models for prediction over structured records.
  • Graph systems for relationships between entities, assets, users, processes, organizations, and events.
  • Time-series models for change over time, forecasting, and anomaly detection.
  • Retrieval systems for policies, documents, prior cases, and domain knowledge.
  • Agents for coordinating steps across tools and systems.
  • Workflow engines for enforcing process, approvals, and escalation.
  • Human governance for responsibility, judgment, and exception handling.

This is the practical meaning of convergent technology. The World Economic Forum's 2026 report on technology convergence frames value creation around the way technologies reshape processes, shift bottlenecks, and change where value and risk sit across ecosystems. [World Economic Forum, Technology Convergence 2026]

That framing is useful because it avoids a common mistake: treating LLMs as the whole AI stack.

LLMs are powerful, but enterprise operating systems are not made of language alone. They are made of records, relationships, policies, workflows, permissions, exceptions, and accountability.

This is where the research direction becomes more concrete. Adaptivearts.ai is investigating the same problem through a small set of research artifacts, protocols, and supporting systems.

Adaptivearts Gate. An experimental policy gate that sits between AI agents and the systems they act on. The intended pattern is that every action passes through declared policy, approvals leave an audit trail, and refusals are traceable.

Protocols.

  • 5PP (Five-Point Prompt Verification Protocol). A pre-flight check applied before an instruction reaches a model, confirming intent, scope, constraints, success criteria, and stop conditions. The purpose is to reduce failure modes caused by under-specified prompts.
  • DIAL-4. A runtime pattern that keeps at least one opposing branch alongside a model output and forces a reconciliation step before action is taken. It is designed to reduce single-pass overconfidence in agentic workflows.
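As a hypothetical illustration of the 5PP idea (the actual protocol's fields and semantics are not specified here, so the names below are assumptions), a pre-flight check might refuse to forward an instruction until all five points are present:

```python
# Hypothetical 5PP-style pre-flight check; the real implementation
# may define these fields and their validation differently.
REQUIRED_FIELDS = ("intent", "scope", "constraints",
                   "success_criteria", "stop_conditions")

def preflight(prompt_spec: dict) -> list[str]:
    """Return the missing or empty fields; an empty list means
    the instruction may proceed to the model."""
    return [f for f in REQUIRED_FIELDS
            if not str(prompt_spec.get(f, "")).strip()]

spec = {
    "intent": "Summarize open P1 incidents",
    "scope": "incidents table, last 24h",
    "constraints": "read-only; no customer PII in output",
    "success_criteria": "one paragraph per incident",
    # "stop_conditions" deliberately omitted
}
missing = preflight(spec)
print(missing)  # ['stop_conditions'] -> block until specified
```

The value of a check like this is not sophistication but placement: it runs before inference, where an under-specified instruction is cheapest to catch.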

Systems.

  • evaluation-mcp. A quality-gate service for scoring intermediate pipeline outputs against declared criteria, so failures surface inside the pipeline rather than only at the user-facing end.
  • PAAF (Project Audit and Analysis Framework). An audit harness for checking whether what was built matches what was specified, surfacing drift between intent and implementation.
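As a hypothetical sketch of the quality-gate pattern (evaluation-mcp's real interface may differ; the criteria and threshold below are invented for illustration), scoring an intermediate output against declared criteria might look like:

```python
from typing import Callable

# Hypothetical quality gate: each criterion is a named predicate
# over the intermediate output being checked.
Criterion = Callable[[str], bool]

def gate(output: str, criteria: dict[str, Criterion],
         pass_threshold: float = 1.0) -> tuple[bool, dict[str, bool]]:
    """Score output against each named criterion; fail the pipeline
    step early if the pass rate falls below the threshold."""
    results = {name: check(output) for name, check in criteria.items()}
    passed = sum(results.values()) / len(results) >= pass_threshold
    return passed, results

criteria = {
    "non_empty": lambda s: bool(s.strip()),
    "cites_ticket_id": lambda s: "TICKET-" in s,
    "under_500_chars": lambda s: len(s) < 500,
}
ok, detail = gate("Root cause for TICKET-1042: expired certificate.", criteria)
print(ok, detail)
```

Because the criteria are declared rather than implicit, a failing output surfaces with a named reason inside the pipeline, instead of as a vague defect at the user-facing end.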

None of these are presented as finished answers. They are research artifacts on the same question: how AI-supported workflows can be made inspectable, evaluable, and governable rather than merely fast.

What changes in the job market

The labor-market impact is not simply "AI replaces jobs." That framing is too crude. The more accurate pattern is task compression.

AI affects tasks before it affects job titles.

The IMF estimated in 2024 that almost 40% of global employment is exposed to AI, with exposure around 60% in advanced economies because those economies contain more cognitive-task-oriented jobs. The same IMF analysis emphasizes that some exposed jobs are complemented by AI while others face reduced labor demand. [IMF, Gen-AI and the Future of Work]

The World Economic Forum's Future of Jobs Report 2025 estimated that 22% of jobs in its dataset may face structural change by 2030, with 170 million roles created and 92 million displaced, for a net increase of 78 million roles. The scope matters: this is based on surveyed employers representing more than 14 million workers across 22 industry clusters and 55 economies, not a literal count of all global jobs. [World Economic Forum, Future of Jobs Report 2025]

The practical effect is uneven. Some workers become more productive. Some roles change. Some hiring slows. Some work disappears. Some new work emerges elsewhere.

The most exposed layer is routine judgment work.

That includes work where people repeatedly inspect structured information, compare records, identify exceptions, assign priority, write summaries, and update systems. This pattern appears in finance, compliance, customer support, HR, sales operations, cybersecurity, logistics, administration, and project management.

In cybersecurity, for example, AI can help rank alerts, enrich events, summarize likely causes, compare against prior incidents, and suggest escalation paths. In finance, it can flag abnormal invoices, suspicious transactions, or risk patterns. In customer operations, it can score churn risk, route tickets, predict SLA breaches, or identify high-value leads.

The human does not disappear from these workflows. But the first pass becomes increasingly automated.

That leads to smaller teams handling larger volumes, more exception-based work, and more emphasis on people who can verify, govern, and improve the system.

The apprenticeship problem

The most serious labor-market issue may be entry-level work.

Junior roles have often been built around first-pass tasks: preparing reports, cleaning data, summarizing meetings, triaging tickets, writing simple code, checking records, and escalating obvious exceptions.

These are precisely the tasks AI systems can increasingly support or perform.

That creates an apprenticeship problem: if AI absorbs much of the junior work, organizations need new ways for people to become senior.

This is not only a labor-market issue. It is an organizational learning issue.

Companies that automate aggressively without redesigning learning paths risk weakening their own future expertise. They may gain short-term efficiency while reducing the number of people who understand the domain deeply enough to govern complex systems later.

A healthier model gives junior workers new forms of supervised exposure:

  • validating AI outputs,
  • inspecting edge cases,
  • documenting failure modes,
  • writing test cases,
  • improving workflow instructions,
  • comparing predictions with outcomes,
  • learning when not to trust the model.

That is less glamorous than "AI transformation," but it is probably more important.

What changes in companies and the economy

AI changes company structure because it changes the cost of coordination, analysis, and execution.

A small team with strong AI-supported workflows can do work that previously required a larger team. This does not mean every organization shrinks. Some will use the productivity gain to expand, improve quality, increase service coverage, or enter new markets. But the baseline expectation for productivity rises.

The OECD notes that AI can improve productivity, job quality, and occupational safety, while also creating risks around automation, loss of agency, bias, discrimination, privacy, and transparency. [OECD, AI and work]

The economic divide therefore becomes less about who has access to AI tools and more about who can connect AI to real work.

Organizations that benefit most tend to have:

  • clean enough data,
  • accessible systems,
  • clear process ownership,
  • domain expertise,
  • feedback loops,
  • governance structures,
  • people who can translate between technical and operational realities.

Organizations without those foundations often remain stuck at the demo layer. They may have access to powerful models, but they struggle to convert capability into durable operational improvement.

This is why operational AI is not just a model problem. It is a systems problem.

Governance becomes part of the architecture

As AI moves closer to operational decision-making, governance becomes more than a compliance wrapper.

Governance determines which actions can be automated, which require approval, which outputs must be explained, which logs must be retained, which failures trigger review, and which humans remain accountable.

Recent signals point in this direction. In early May 2026 alone, a US court sanctioned a supervising attorney over an AI-assisted filing that contained a fabricated citation; India's securities regulator announced it would issue an advisory on emerging AI risks for market intermediaries; Colorado moved to revise its first-generation AI statute and Connecticut passed legislation covering employment, anti-discrimination, and chatbot child-safety provisions; and reporting from Fortune described a Chinese court ruling that companies cannot terminate workers solely to replace them with AI systems. [Reuters, legal AI filing; Reuters, SEBI AI advisory; Axios Denver, Colorado AI rules; CT Insider, Connecticut AI legislation; Fortune, China AI labor case]

These are not separate stories. They are signs that AI governance is becoming operational, jurisdictional, and domain-specific.

This supports a simple conclusion: AI systems that touch real workflows need evaluation, traceability, and escalation design from the start.

For Adaptivearts.ai as a research initiative, this is the more interesting problem. Not "how do we automate everything?" but "how do we decide what should be automated, under which constraints, with what evidence, and with which human responsibilities preserved?"

The future human role

The durable human role is not the person who performs every task manually. It is the person who can understand the domain, configure the system, verify the output, handle exceptions, communicate trade-offs, and take responsibility.

That role combines several forms of competence:

  • domain knowledge,
  • AI operation,
  • workflow design,
  • verification,
  • communication,
  • governance,
  • judgment under uncertainty.

This is also where the "general contractor" model of AI work becomes useful. The human is not merely a prompt writer and not merely a passive reviewer. The human coordinates specialized systems, validates their outputs, understands dependencies, and remains accountable for the result.

The work becomes less about doing every step by hand and more about designing, supervising, and improving the system that performs the steps.

Conclusion

The next phase of AI is not only about more capable chatbots. It is about the convergence of models, structured data, workflows, agents, and governance.

Tabular AI matters because much of organizational reality is stored in structured data.

Inference matters because business events can be evaluated continuously.

Agents matter because predictions only become useful when connected to action.

Governance matters because operational AI without accountability becomes fragile.

The economic impact will not be uniform. Some tasks will be automated, some roles will be augmented, and some labor-market pathways will come under pressure. The biggest near-term change is likely to be the compression of routine judgment work and the redesign of entry-level learning paths.

The question is how organizations can convert data, workflows, and expertise into AI-supported operating systems that remain inspectable, governable, and human-centered.

That is the more durable research problem behind the inference economy.

Tags

Tabular AI · Inference Economy · Operational AI · Convergent Technology · AI Governance · TabPFN · Future of Work