Stop AI from Guessing: Appier Enables Agents to Assess Confidence Before Acting


New Framework Boosts Reliability, Cost Efficiency, and Scalability for Enterprise AI

SINGAPORE, March 24, 2026 /PRNewswire/ -- As an AI-native Agentic AI-as-a-Service (AaaS) company, Appier today announced its latest research paper, On Calibration of Large Language Models: From Response to Capability, as part of its ongoing investment in advanced AI innovation. The study introduces Capability Calibration[1]—a new framework designed to address the overconfidence and hallucination challenges of large language models (LLMs) by enabling AI systems to better assess their own ability to solve a given task.

This research equips AI agents with a critical capability: estimating the likelihood of solving a problem before generating an answer. By introducing a quantifiable self-assessment mechanism, AI systems can make more reliable decisions and allocate computational resources more efficiently—improving the reliability, cost efficiency, and scalability of enterprise AI deployments.

From Response Accuracy to Problem-Solving Capability
Traditional LLM calibration focuses on response-level confidence, estimating whether a single generated answer is correct. However, because LLM outputs are inherently stochastic, the same query may produce different responses across multiple attempts. Therefore, a single response often fails to reflect the model's true capability.

In practice, organizations are less concerned with whether one answer is correct and more interested in whether a model can consistently solve the task. Appier's capability calibration framework addresses this by shifting evaluation from single-response confidence to the model's expected success rate for a given query. This moves the evaluation target from a single answer to the model's broader problem-solving capability, providing a more practical measure of real-world performance.
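The shift from single-response confidence to expected success rate can be illustrated with a minimal sketch. The `solve` function below is a hypothetical stand-in for one stochastic LLM attempt (the per-query success rates are invented for illustration); the point is that any one response is a coin flip, while the averaged success rate is a stable capability signal.

```python
import random

random.seed(0)

def solve(query: str) -> bool:
    """Stand-in for one stochastic LLM attempt at a query.

    Hypothetical: a real system would sample a response and grade it;
    here each attempt succeeds with a fixed per-query probability.
    """
    success_rate = {"easy": 0.9, "hard": 0.3}[query]
    return random.random() < success_rate

def estimated_capability(query: str, n_samples: int = 1000) -> float:
    """Empirical expected success rate over repeated attempts."""
    return sum(solve(query) for _ in range(n_samples)) / n_samples

# One call to solve() says little; the averaged estimate is stable.
print(round(estimated_capability("easy"), 2))  # near 0.9
print(round(estimated_capability("hard"), 2))  # near 0.3
```

In practice the framework aims to predict this success rate directly from the model, without the repeated sampling shown here.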

Teaching AI Agents to "Know Their Limits"
"AI agents should not only generate answers but also understand the limits of their own capabilities," said Chih-Han Yu, CEO and Co-Founder of Appier. "With capability calibration, an agent can estimate its probability of success before responding and allocate resources intelligently. Simple queries can be handled quickly, while complex tasks can automatically leverage stronger models or additional compute. This transforms AI from a passive tool into a system that actively manages resources, optimizes costs, and improves decision quality—an essential foundation for scaling enterprise-grade AI agents."

Experimental Results: High-Quality Calibration at Low Cost
The research clarifies the theoretical relationship between capability calibration and traditional response calibration[2], and evaluates multiple confidence estimation approaches across three large language models and seven datasets covering knowledge-intensive and reasoning-intensive tasks. Methods tested include:

  • Verbalized confidence[3]: The model explicitly states its confidence, in text or as a percentage.
  • P(True)[4]: Estimates the probability that the answer is correct based on generation signals.
  • Linear probes[5]: Use the model's internal representations to assess whether it can actually solve the task.

Results show that the linear probe method provides the best balance between cost and performance, with computational cost even lower than generating a single token while maintaining reliable confidence estimation.
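Why a linear probe is so cheap can be seen in a self-contained sketch. Everything here is synthetic (the 16-dimensional "hidden states", the labels, and the training setup are illustrative assumptions, not the paper's configuration): a probe is just a learned weight vector, so confidence at inference time is a single dot product per query, cheaper than generating even one token.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden states read from a model's final layer,
# paired with whether the model later solved the query.
dim, n = 16, 500
direction = rng.normal(size=dim)              # latent "competence" direction
hidden = rng.normal(size=(n, dim))
labels = (hidden @ direction + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Train the probe: logistic regression via plain gradient descent.
w = np.zeros(dim)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(hidden @ w)))   # predicted success probability
    w -= 0.1 * hidden.T @ (p - labels) / n    # gradient step

# Inference: one dot product per query yields a confidence score.
probe_conf = 1.0 / (1.0 + np.exp(-(hidden @ w)))
accuracy = ((probe_conf > 0.5) == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```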

Two Key Applications: Improving Inference Efficiency and Resource Allocation
The framework enables two practical use cases. First, pass@k[6] prediction, a widely used metric for evaluating LLMs in complex tasks. Capability-calibrated confidence estimates the probability that a model will produce at least one correct answer after k attempts, without actually generating multiple responses. Second, inference resource allocation, where computational resources are dynamically distributed based on predicted task difficulty. Harder problems receive more attempts, allowing more tasks to be solved within the same compute budget.
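Both use cases follow from one identity: if a single attempt succeeds with calibrated probability p and attempts are treated as independent, then pass@k = 1 − (1 − p)^k, with no extra generation needed. The allocation routine below is an illustrative greedy heuristic (the probabilities and budget are invented), not the paper's method: it hands each additional attempt to the task whose expected solve count rises the most.

```python
def predicted_pass_at_k(p: float, k: int) -> float:
    """Probability of at least one success in k independent attempts."""
    return 1.0 - (1.0 - p) ** k

def allocate_attempts(probs: list[float], budget: int) -> list[int]:
    """Greedily assign attempts where the marginal gain is largest.

    Gain of one more attempt on a task with k attempts so far:
    pass@(k+1) - pass@k = p * (1 - p) ** k
    """
    attempts = [0] * len(probs)
    for _ in range(budget):
        gains = [p * (1 - p) ** k for p, k in zip(probs, attempts)]
        attempts[gains.index(max(gains))] += 1
    return attempts

probs = [0.9, 0.5, 0.2]          # calibrated per-attempt success rates
plan = allocate_attempts(probs, budget=6)
expected = sum(predicted_pass_at_k(p, k) for p, k in zip(probs, plan))
print(plan, round(expected, 3))  # harder tasks receive more attempts
```

Note how the easy task (p = 0.9) gets a single attempt while the hardest task absorbs most of the budget, which is the intuition behind difficulty-aware resource allocation.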

Building a Decision Foundation for Trustworthy AI Agents
Capability calibration enables AI agents to establish a stable and quantifiable confidence signal before taking action. This allows agents to determine whether they can solve a task independently, when to call external tools, and when to seek human assistance—helping AI systems operate more reliably in uncertain environments.
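A confidence signal like this could drive a simple routing policy. The thresholds and action names below are purely illustrative assumptions, not values from the paper; the sketch only shows the shape of the decision.

```python
def route(confidence: float) -> str:
    """Map a calibrated success probability to an action.

    Hypothetical thresholds for illustration only.
    """
    if confidence >= 0.8:
        return "answer directly"
    if confidence >= 0.4:
        return "call external tools"
    return "escalate to a human"

for c in (0.95, 0.6, 0.1):
    print(c, "->", route(c))
```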

Advancing Capability Calibration to Power Agentic AI Applications
Looking ahead, Appier's AI research team will continue advancing capability calibration by improving model evaluation methods and expanding the framework to applications such as model routing, human–AI collaboration, and trustworthy AI systems. Leveraging Appier's deep expertise in AI and marketing technology, these research advances will be translated into product capabilities, accelerating the deployment of Agentic AI in advertising and marketing decision-making and helping enterprises operate more efficiently in an increasingly complex digital landscape.

About Appier
Appier (TSE: 4180) is an AI-native Agentic AI-as-a-Service (AaaS) company that empowers business decision-making with cutting-edge AdTech and MarTech solutions. Founded in 2012 with the vision of "Making AI Easy by making software intelligent," Appier endeavors to help businesses turn AI into ROI with its Ad Cloud, Personalization Cloud, and Data Cloud solutions. Appier now has 17 offices across APAC, the US, and EMEA, and is listed on the Tokyo Stock Exchange. Visit www.appier.com for more company information, and visit ir.appier.com/en/ for more IR information.

[1] Capability Calibration – A method for evaluating an AI model's overall problem-solving ability by estimating the probability that it will successfully answer a given query, rather than judging a single response.

[2] Response Calibration – A traditional AI evaluation approach that measures a model's confidence in the correctness of a single generated response.

[3] Verbalized Confidence – A method where the model explicitly states its confidence in the correctness of an answer in natural language, such as a percentage or confidence level.

[4] P(True) – A technique that estimates the probability that an answer is correct by analyzing the token probability distribution generated by the model.

[5] Linear Probe – A lightweight linear classifier trained on a model's internal representations to analyze whether the model has learned specific knowledge or capabilities, and to estimate confidence.

[6] pass@k – A common AI evaluation metric estimating the probability that a model produces at least one correct answer within k attempts, reflecting the need to explore multiple reasoning paths in complex tasks.

For media queries, please email pr@appier.com


Source: Appier
Related stocks: Tokyo:4180
