HP Exec Warns AI Success Depends on Data Governance, Local Compute, and Smarter Infrastructure
Ahead of the upcoming AI & Big Data Expo North America 2026 in San Jose, HP’s AI & Data Science Business Development Manager, Jerome Gabryszewski, shared insights on the growing challenges enterprises face in preparing data, managing AI infrastructure, and balancing cloud versus local compute strategies.
According to Gabryszewski, many organizations already possess massive amounts of first-party data, but struggle to turn it into meaningful business value at enterprise scale. One of the biggest obstacles is not AI itself, but the fragmented and outdated data environments companies have built over decades.
He explained that businesses often underestimate the “architectural debt” behind their systems, including inconsistent data structures, siloed departments, and legacy infrastructure that was never designed for interoperability. As a result, governance and integration issues frequently become larger challenges than the automation technology itself.
Managing AI Risks and Continuous Learning
HP also warned about the risks tied to continuously updating AI systems. As AI models learn and adapt over time, organizations face growing threats from concept drift and data poisoning.
Gabryszewski said enterprises should treat AI model updates like software deployments, requiring validation checkpoints before changes reach production systems. He emphasized the importance of data provenance—knowing exactly where training data originates and who has access to it.
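The idea of gating model updates behind validation checkpoints can be sketched in a few lines. This is an illustrative example, not HP's implementation; the thresholds, metric names, and the `APPROVED_SOURCES` allow-list are all hypothetical, but the pattern mirrors the advice above: block deployment on accuracy regressions, on suspicious distribution shift (possible concept drift or poisoning), and on training data of unknown provenance.

```python
# Illustrative sketch (not HP's implementation): gate an AI model update
# behind validation checkpoints before it reaches production, and check
# provenance metadata for the training data it was built from.
from dataclasses import dataclass, field

@dataclass
class ModelUpdate:
    version: str
    eval_accuracy: float   # score on a held-out validation set
    drift_score: float     # distance between old and new output distributions
    data_sources: list = field(default_factory=list)  # where training data came from

APPROVED_SOURCES = {"internal_crm", "support_tickets"}  # hypothetical allow-list

def passes_checkpoints(update: ModelUpdate,
                       min_accuracy: float = 0.90,
                       max_drift: float = 0.15) -> bool:
    """Return True only if the update clears every validation gate."""
    if update.eval_accuracy < min_accuracy:
        return False   # regression: block deployment
    if update.drift_score > max_drift:
        return False   # possible concept drift or data poisoning
    if not set(update.data_sources) <= APPROVED_SOURCES:
        return False   # unknown provenance: block deployment
    return True

update = ModelUpdate("v2.1", eval_accuracy=0.93, drift_score=0.08,
                     data_sources=["internal_crm"])
print(passes_checkpoints(update))  # True: all gates pass
```

A real pipeline would wire these gates into CI/CD so that a failing checkpoint halts promotion automatically, the same way a failing test blocks a software release.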
According to HP, the companies handling AI governance most effectively are not always the most technically advanced, but the ones that integrate governance directly into their broader risk management strategies.
The Push Toward Local AI Compute
Drawing from HP’s hardware background, Gabryszewski highlighted the increasing demand for powerful local AI infrastructure capable of supporting autonomous AI workflows without relying entirely on cloud services.
He pointed to HP’s Z-series workstations and newer AI-focused systems such as the ZGX Nano and ZGX Fury, which are designed to run large language models locally while handling tasks like fine-tuning, inference, and data processing on-premises.
The company argues that local compute is becoming critical as enterprises deal with rising cloud AI costs, latency concerns, and stricter data governance requirements.
Cloud Costs Continue to Rise
HP believes the rapid growth of generative AI spending is creating structural cost problems for enterprises.
The company estimates that enterprise GenAI spending reached $37 billion in 2025, while many organizations exceeded projected costs by more than 25 percent. Although individual inference costs are falling, overall spending continues to climb because AI usage is scaling even faster.
To manage costs, HP recommends a three-tier infrastructure strategy:
- Cloud platforms for large-scale training and frontier models
- On-premises systems for predictable, high-volume inference workloads
- Edge computing for latency-sensitive tasks
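The three-tier strategy amounts to a routing decision per workload. A minimal sketch of that decision, with hypothetical names and thresholds (HP does not publish a specific routing rule), might look like this:

```python
# Illustrative sketch of the three-tier routing idea (names and thresholds
# are hypothetical, not an HP product API): pick an execution tier for each
# workload based on its profile.
from dataclasses import dataclass

@dataclass
class Workload:
    kind: str                 # "training", "inference", ...
    latency_ms_budget: float  # how quickly results are needed
    predictable_volume: bool  # steady demand vs. bursty experimentation

def choose_tier(w: Workload) -> str:
    if w.kind == "training":
        return "cloud"     # hyperscale capacity for large-scale training
    if w.latency_ms_budget < 50:
        return "edge"      # latency-sensitive tasks run near the user
    if w.predictable_volume:
        return "on_prem"   # steady, high-volume inference on owned hardware
    return "cloud"         # bursty or unpredictable load falls back to cloud

print(choose_tier(Workload("inference", 30, False)))   # edge
print(choose_tier(Workload("inference", 500, True)))   # on_prem
```

The point is not the specific thresholds but the discipline: each workload gets the cheapest tier that meets its requirements, rather than defaulting everything to cloud.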
Gabryszewski said businesses should avoid using expensive cloud infrastructure for every stage of experimentation and instead reserve cloud resources for workloads that truly require hyperscale capabilities.
Keeping Sensitive Data Secure
Another major concern for enterprises is how to make proprietary data “AI-ready” without exposing sensitive information to external platforms.
HP advocates for Retrieval-Augmented Generation (RAG) systems running on local infrastructure. In this setup, AI models retrieve relevant information from internal knowledge bases at query time, rather than being trained on that data or transmitting it to external services.
The company says this approach allows organizations to keep proprietary information on-premises while still benefiting from advanced AI capabilities. Role-based permissions can also ensure employees only access information they are authorized to see.
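The combination of local RAG and role-based permissions can be sketched as follows. All names here are hypothetical, and the keyword match is a stand-in for the vector search a production system would use; the key idea is that the permission check happens during retrieval, so unauthorized documents never reach the model's prompt.

```python
# Minimal sketch of locally hosted RAG with role-based filtering (all names
# hypothetical): retrieve only documents the user's role may see, then build
# a prompt for a locally running model.
DOCS = [
    {"text": "Q3 revenue grew 12%.",       "roles": {"finance", "exec"}},
    {"text": "VPN setup guide for staff.", "roles": {"finance", "exec", "staff"}},
]

def retrieve(query: str, user_role: str, docs=DOCS):
    """Keyword match stand-in for vector retrieval, filtered by role."""
    terms = query.lower().split()
    return [d["text"] for d in docs
            if user_role in d["roles"]   # permission gate before retrieval
            and any(t in d["text"].lower() for t in terms)]

def build_prompt(query: str, user_role: str) -> str:
    """Assemble the context a local model would answer from."""
    context = "\n".join(retrieve(query, user_role))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(retrieve("revenue growth", "staff"))    # [] -- staff cannot see finance docs
print(retrieve("revenue growth", "finance"))  # ['Q3 revenue grew 12%.']
```

Because retrieval, filtering, and inference all run on local infrastructure, proprietary documents never leave the organization's environment, which is the "bringing the intelligence to the data" approach described above.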
According to Gabryszewski, companies succeeding in enterprise AI are “bringing the intelligence to the data, not the other way around.”
The Future of Enterprise IT
Looking ahead, HP predicts the role of enterprise IT teams will change significantly as autonomous AI systems become more common.
Referencing comments from Jensen Huang, Gabryszewski said the focus of IT work is shifting away from repetitive operational tasks and toward governance, oversight, and strategic decision-making.
Research from Gartner suggests that by the end of 2026, around 40 percent of enterprise applications will include embedded AI agents.
As these systems take on more operational responsibilities, IT teams will increasingly be responsible for determining which AI agents are trusted to make decisions, how those systems are governed, and whether organizations maintain sufficient visibility and control over their infrastructure.
HP argues that locally controlled infrastructure will play a key role in maintaining that visibility, particularly as enterprises seek greater oversight over AI-driven operations in the years ahead.
