ScaleOps' new AI Infra Product slashes GPU costs for self-hosted enterprise LLMs by 50% for early adopters

by carl.franzen@venturebeat.com (Carl Franzen)
November 20, 2025

ScaleOps has expanded its cloud resource management platform with a new product aimed at enterprises operating self-hosted large language models (LLMs) and GPU-based AI applications.

The AI Infra Product, announced today, extends the company's existing automation capabilities to address a growing need for efficient GPU utilization, predictable performance, and reduced operational burden in large-scale AI deployments.

The company said the system is already running in enterprise production environments and delivering major efficiency gains for early adopters, reducing GPU costs by 50% to 70%. ScaleOps does not publicly list enterprise pricing for the product; instead, it invites interested customers to request a custom quote based on the size and needs of their operation.

In explaining how the system behaves under heavy load, Yodar Shafrir, CEO and Co-Founder of ScaleOps, said in an email to VentureBeat that the platform uses “proactive and reactive mechanisms to handle sudden spikes without performance impact,” noting that its workload rightsizing policies “automatically manage capacity to keep resources available.”

He added that minimizing GPU cold-start delays was a priority, emphasizing that the system “ensures instant response when traffic surges,” particularly for AI workloads where model load times are substantial.
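
ScaleOps has not published implementation details, but the interplay of proactive and reactive scaling can be illustrated in miniature. The following Python sketch is purely hypothetical (none of these names are ScaleOps APIs): it sizes replicas reactively for observed traffic while pre-warming additional replicas for forecast demand, so large models are already loaded when a surge arrives.

```python
# Hypothetical sketch of combined reactive and proactive autoscaling.
# All names and numbers are illustrative assumptions, not ScaleOps APIs.

from dataclasses import dataclass


@dataclass
class ReplicaPlan:
    serving: int    # replicas actively routed traffic
    prewarmed: int  # replicas loaded ahead of forecast demand


def plan_replicas(observed_rps: float,
                  predicted_rps: float,
                  rps_per_replica: float,
                  headroom: float = 1.2) -> ReplicaPlan:
    # Reactive: size for the traffic we can measure right now, plus headroom.
    serving = max(1, round(observed_rps * headroom / rps_per_replica))

    # Proactive: make sure the forecast peak is covered as well.
    target = max(serving, round(predicted_rps / rps_per_replica))

    # Replicas beyond the reactive floor are pre-warmed: their model weights
    # are already resident in GPU memory, so there is no cold start when the
    # surge arrives and traffic is routed to them.
    return ReplicaPlan(serving=serving, prewarmed=target - serving)


if __name__ == "__main__":
    print(plan_replicas(observed_rps=120, predicted_rps=400, rps_per_replica=50))
    # ReplicaPlan(serving=3, prewarmed=5)
```

The pre-warmed pool is what addresses the cold-start problem Shafrir describes: for large models, loading weights can take minutes, so capacity that is loaded but idle is what makes an "instant response" to a traffic surge possible.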

Expanding Resource Automation to AI Infrastructure

Enterprises deploying self-hosted AI models face performance variability, long load times, and persistent underutilization of GPU resources. ScaleOps positioned the new AI Infra Product as a direct response to these issues.

The platform allocates and scales GPU resources in real time and adapts to changes in traffic demand without requiring alterations to existing model deployment pipelines or application code.

According to ScaleOps, the system manages production environments for organizations including Wiz, DocuSign, Rubrik, Coupa, Alkami, Vantor, Grubhub, Island, Chewy, and several Fortune 500 companies.

The AI Infra Product introduces workload-aware scaling policies that proactively and reactively adjust capacity to maintain performance during demand spikes. The company stated that these policies reduce the cold-start delays associated with loading large AI models, which improves responsiveness when traffic increases.

Technical Integration and Platform Compatibility

The product is designed for compatibility with common enterprise infrastructure patterns. It works across all Kubernetes distributions, major cloud platforms, on-premises data centers, and air-gapped environments. ScaleOps emphasized that deployment does not require code changes, infrastructure rewrites, or modifications to existing manifests.

Shafrir said the platform “integrates seamlessly into existing model deployment pipelines without requiring any code or infrastructure changes,” and he added that teams can begin optimizing immediately with their existing GitOps, CI/CD, monitoring, and deployment tooling.

Shafrir also addressed how the automation interacts with existing systems. He said the platform operates without disrupting workflows or creating conflicts with custom scheduling or scaling logic, explaining that the system “doesn’t change manifests or deployment logic” and instead enhances schedulers, autoscalers, and custom policies by incorporating real-time operational context while respecting existing configuration boundaries.
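
That description matches a familiar Kubernetes pattern: an external controller that observes live cluster state through the API server instead of editing the YAML that GitOps pipelines own. As a rough illustration of the observation side only, using the standard Kubernetes Python client rather than any ScaleOps code, such a controller might inventory GPU requests like this:

```python
# Illustrative only: observes GPU requests from live pods with the standard
# Kubernetes Python client (pip install kubernetes). It shows how an external
# optimizer can read cluster state without touching the manifests that
# GitOps/CI pipelines own. This is not ScaleOps code.

from kubernetes import client, config


def gpu_requests_by_pod() -> dict[str, int]:
    config.load_kube_config()  # use load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    inventory: dict[str, int] = {}
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        gpus = 0
        for container in pod.spec.containers:
            requests = container.resources.requests or {}
            # GPUs appear as an extended resource on the container spec.
            gpus += int(requests.get("nvidia.com/gpu", 0))
        if gpus:
            inventory[f"{pod.metadata.namespace}/{pod.metadata.name}"] = gpus
    return inventory


if __name__ == "__main__":
    for name, gpus in gpu_requests_by_pod().items():
        print(f"{name}: {gpus} GPU(s) requested")
```

Because the controller reads state rather than rewriting specs, existing schedulers, autoscalers, and custom policies keep operating as configured, which is consistent with Shafrir's claim that the platform respects existing configuration boundaries.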

Performance, Visibility, and User Control

The platform provides full visibility into GPU utilization, model behavior, performance metrics, and scaling decisions at multiple levels, including pods, workloads, nodes, and clusters. While the system applies default workload scaling policies, ScaleOps noted that engineering teams retain the ability to tune these policies as needed.

In practice, the company aims to reduce or eliminate the manual tuning that DevOps and AIOps teams typically perform to manage AI workloads. Installation is intended to require minimal effort, described by ScaleOps as a two-minute process using a single Helm flag, after which optimization can be enabled through a single action.

Cost Savings and Enterprise Case Studies

ScaleOps reported that early deployments of the AI Infra Product have achieved GPU cost reductions of 50–70% in customer environments. The company cited two examples:

  • A major creative software company operating thousands of GPUs averaged 20% utilization before adopting ScaleOps. The product increased utilization, consolidated underused capacity, and enabled GPU nodes to scale down. These changes reduced overall GPU spending by more than half. The company also reported a 35% reduction in latency for key workloads.

  • A global gaming company used the platform to optimize a dynamic LLM workload running on hundreds of GPUs. According to ScaleOps, the product increased utilization by a factor of seven while maintaining service-level performance. The customer projected $1.4 million in annual savings from this workload alone.
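
The arithmetic behind these reductions is straightforward: for a fixed amount of useful work, the number of GPUs required scales inversely with average utilization. A back-of-the-envelope sketch with illustrative numbers, loosely modeled on the 20%-utilization example above rather than figures reported by ScaleOps:

```python
# Back-of-the-envelope only: illustrative numbers, not ScaleOps-reported data.

def gpus_needed(gpu_hours_of_work: float, avg_utilization: float) -> float:
    """GPUs required to serve a fixed workload at a given average utilization."""
    return gpu_hours_of_work / avg_utilization


before = gpus_needed(400, 0.20)  # 2000 GPUs at 20% utilization
after = gpus_needed(400, 0.45)   # ~889 GPUs if utilization rises to 45%

print(f"{before:.0f} -> {after:.0f} GPUs, "
      f"spend reduction ~{1 - after / before:.0%}")  # ~56%: more than half
```

Under the same simple model, the gaming company's sevenfold utilization gain would let the same workload run on roughly one-seventh of the GPUs, which is consistent with the scale of the projected savings.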

ScaleOps stated that the expected GPU savings typically outweigh the cost of adopting and operating the platform, and that customers with limited infrastructure budgets have reported fast returns on investment.

Industry Context and Company Perspective

The rapid adoption of self-hosted AI models has created new operational challenges for enterprises, particularly around GPU efficiency and the complexity of managing large-scale workloads. Shafrir described the broader landscape as one in which “cloud-native AI infrastructure is reaching a breaking point.”

“Cloud-native architectures unlocked great flexibility and control, but they also introduced a new level of complexity,” he said in the announcement. “Managing GPU resources at scale has become chaotic—waste, performance issues, and skyrocketing costs are now the norm. The ScaleOps platform was built to fix this. It delivers the complete solution for managing and optimizing GPU resources in cloud-native environments, enabling enterprises to run LLMs and AI applications efficiently, cost-effectively, and while improving performance.”

Shafrir added that the product brings together the full set of cloud resource management functions needed to manage diverse workloads at scale. The company positioned the platform as a holistic system for continuous, automated optimization.

A Unified Approach for the Future

With the addition of the AI Infra Product, ScaleOps aims to establish a unified approach to GPU and AI workload management that integrates with existing enterprise infrastructure.

The platform’s early performance metrics and reported cost savings suggest a focus on measurable efficiency improvements within the expanding ecosystem of self-hosted AI deployments.
