Google’s new framework helps AI agents spend their compute and tool budget more wisely

By Ben Dickson
December 12, 2025

In a new paper that studies tool-use in large language model (LLM) agents, researchers at Google and UC Santa Barbara have developed a framework that enables agents to make more efficient use of tool and compute budgets. The researchers introduce two new techniques: a simple “Budget Tracker” and a more comprehensive framework called “Budget Aware Test-time Scaling.” These techniques make agents explicitly aware of their remaining reasoning and tool-use allowance.

As AI agents rely on tool calls to work in the real world, test-time scaling has become less about smarter models and more about controlling cost and latency.

For enterprise leaders and developers, budget-aware scaling techniques offer a practical path to deploying effective AI agents without facing unpredictable costs or diminishing returns on compute spend.

The challenge of scaling tool use

Traditional test-time scaling focuses on letting models “think” longer. However, for agentic tasks like web browsing, the number of tool calls directly determines the depth and breadth of exploration.

This introduces significant operational overhead for businesses. “Tool calls such as webpage browsing result in more token consumption, increase the context length and introduce additional time latency,” Zifeng Wang and Tengxiao Liu, co-authors of the paper, told VentureBeat. “Tool calls themselves introduce additional API costs.”

The researchers found that simply granting agents more test-time resources does not guarantee better performance. “In a deep research task, if the agent has no sense of budget, it often goes down blindly,” Wang and Liu explained. “It finds one somewhat related lead, then spends 10 or 20 tool calls digging into it, only to realize that the entire path was a dead end.”

Optimizing resources with Budget Tracker

To evaluate how they can optimize tool-use budgets, the researchers first tried a lightweight approach called “Budget Tracker.” This module acts as a plug-in that provides the agent with a continuous signal of resource availability, enabling budget-aware tool use.

The team hypothesized that “providing explicit budget signals enables the model to internalize resource constraints and adapt its strategy without requiring additional training.”

Budget Tracker operates purely at the prompt level, which makes it easy to implement; the paper provides full details of the prompts used.

In Google’s implementation, the tracker provides a brief policy guideline describing the budget regimes and corresponding recommendations for using tools. At each step of the response process, Budget Tracker makes the agent explicitly aware of its resource consumption and remaining budget, enabling it to condition subsequent reasoning steps on the updated resource state.
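
Because the tracker lives entirely in the prompt, a rough approximation is easy to sketch. The minimal Python sketch below keeps per-tool counters and prepends a budget status line with a policy hint to each step; the class name, thresholds, and policy wording are illustrative assumptions, not the paper's actual prompts.

```python
# Minimal prompt-level budget tracker. Thresholds and policy wording are
# illustrative assumptions, not Google's implementation.
from dataclasses import dataclass, field

@dataclass
class BudgetTracker:
    max_tool_calls: int
    used: dict = field(default_factory=dict)  # per-tool call counts

    def record(self, tool: str) -> None:
        self.used[tool] = self.used.get(tool, 0) + 1

    @property
    def remaining(self) -> int:
        return self.max_tool_calls - sum(self.used.values())

    def status_prompt(self) -> str:
        # Injected into the agent's context at every step so the model can
        # condition its next action on the current resource state.
        ratio = self.remaining / self.max_tool_calls
        if ratio > 0.5:
            policy = "Budget is ample: explore several leads."
        elif ratio > 0.2:
            policy = "Budget is limited: focus on the most promising lead."
        else:
            policy = "Budget is nearly exhausted: verify and answer now."
        return f"[Budget] {self.remaining}/{self.max_tool_calls} tool calls left. {policy}"
```

An agent loop would call `record()` after each tool invocation and include `status_prompt()` in the next model call.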

To test this, the researchers experimented with two paradigms: sequential scaling, where the model iteratively refines its output, and parallel scaling, where multiple independent runs are conducted and aggregated. They ran experiments on search agents equipped with search and browse tools following a ReAct-style loop. ReAct (Reasoning + Acting) is a popular method where the model alternates between internal thinking and external actions. To trace a true cost-performance scaling trend, they developed a unified cost metric that jointly accounts for the costs of both internal token consumption and external tool interactions.
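
The paper's exact accounting isn't reproduced here, but the idea of a unified cost metric can be illustrated simply: price internal tokens and each external tool call in a common unit so that sequential and parallel runs land on one cost axis. In the sketch below, the per-token and per-call prices are placeholder assumptions.

```python
# Illustrative unified cost metric; the per-token and per-call prices are
# placeholders, not the constants used in the paper.
TOKEN_PRICE = 1.25e-6                           # dollars per token (assumed)
TOOL_PRICE = {"search": 0.005, "browse": 0.01}  # dollars per call (assumed)

def unified_cost(input_tokens: int, output_tokens: int,
                 tool_calls: dict[str, int]) -> float:
    """Joint cost of internal token consumption and external tool use."""
    token_cost = (input_tokens + output_tokens) * TOKEN_PRICE
    tool_cost = sum(TOOL_PRICE[t] * n for t, n in tool_calls.items())
    return token_cost + tool_cost

# Example: one trajectory with 60k total tokens, 12 searches, 5 browses.
print(unified_cost(50_000, 10_000, {"search": 12, "browse": 5}))  # ~0.185
```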

They tested Budget Tracker on three information-seeking QA datasets requiring external search, including BrowseComp and HLE-Search, using models such as Gemini 2.5 Pro, Gemini 2.5 Flash, and Claude Sonnet 4. The experiments show that this simple plug-in improves performance across various budget constraints.

“Adding Budget Tracker achieves comparable accuracy using 40.4% fewer search calls, 19.9% fewer browse calls, and reducing overall cost … by 31.3%,” the authors told VentureBeat. Notably, Budget Tracker continued to scale as the budget increased, whereas plain ReAct plateaued after a certain threshold.

BATS: A comprehensive framework for budget-aware scaling

To further improve tool-use resource optimization, the researchers introduced Budget Aware Test-time Scaling (BATS), a framework designed to maximize agent performance under any given budget. BATS maintains a continuous signal of remaining resources and uses this information to dynamically adapt the agent’s behavior as it formulates its response.

BATS uses multiple modules to orchestrate the agent’s actions. A planning module adjusts stepwise effort to match the current budget, while a verification module decides whether to “dig deeper” into a promising lead or “pivot” to alternative paths based on resource availability.

Given an information-seeking question and a tool-call budget, BATS begins by using the planning module to formulate a structured action plan and decide which tools to invoke. When tools are invoked, their responses are appended to the reasoning sequence to provide the context with new evidence. When the agent proposes a candidate answer, the verification module verifies it and decides whether to continue the current sequence or initiate a new attempt with the remaining budget.

The iterative process ends when budgeted resources are exhausted, at which point an LLM-as-a-judge selects the best answer across all verified answers. Throughout the execution, the Budget Tracker continuously updates both resource usage and remaining budget at every iteration.
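
Put together, the control flow might look roughly like the sketch below. Every callable (`plan`, `step`, `run_tool`, `verify`, `judge`) is a hypothetical stand-in for the corresponding module described above, not an API from the paper, and only tool calls consume budget in this simplification.

```python
# Rough sketch of the BATS control flow described above. All callables are
# hypothetical stand-ins for the paper's modules, not a published API.
from typing import Callable

def bats(question: str,
         tracker: "BudgetTracker",                  # from the earlier sketch
         plan: Callable[[str, str], str],           # planning module
         step: Callable[[list, str], dict],         # one ReAct-style step
         run_tool: Callable[[dict], str],           # search/browse execution
         verify: Callable[[str, str, list], dict],  # verification module
         judge: Callable[[str, list], str]) -> str: # LLM-as-a-judge
    verified: list[str] = []
    while tracker.remaining > 0:
        # Plan a structured attempt conditioned on the current budget.
        context = [question, plan(question, tracker.status_prompt())]
        while tracker.remaining > 0:
            action = step(context, tracker.status_prompt())
            if action["type"] == "tool":
                tracker.record(action["tool"])
                context.append(run_tool(action))  # append new evidence
            else:  # the agent proposed a candidate answer
                verdict = verify(question, action["answer"], context)
                if verdict["accept"]:
                    verified.append(action["answer"])
                if verdict["pivot"]:
                    break  # fresh attempt with whatever budget remains
    # Budget exhausted: an LLM-as-a-judge picks the best verified answer.
    return judge(question, verified)
```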

The researchers tested BATS on the BrowseComp, BrowseComp-ZH, and HLE-Search benchmarks against baselines including standard ReAct and various training-based agents. Their experiments show that BATS achieves higher performance while using fewer tool calls and incurring lower overall cost than competing methods. Using Gemini 2.5 Pro as the backbone, BATS achieved 24.6% accuracy on BrowseComp compared to 12.6% for standard ReAct, and 27.0% on HLE-Search compared to 20.5% for ReAct.

BATS not only improves effectiveness under budget constraints but also yields better cost–performance trade-offs. For example, on the BrowseComp dataset, BATS achieved higher accuracy at a cost of approximately 23 cents compared to a parallel scaling baseline that required over 50 cents to achieve a similar result.

According to the authors, this efficiency makes previously expensive workflows viable. “This unlocks a range of long-horizon, data-intensive enterprise applications… such as complex codebase maintenance, due-diligence investigations, competitive landscape research, compliance audits, and multi-step document analysis,” they said.

As enterprises look to deploy agents that manage their own resources, the ability to balance accuracy with cost will become a critical design requirement.

“We believe the relationship between reasoning and economics will become inseparable,” Wang and Liu said. “In the future, [models] must reason about value.”
