HP Executive Says Enterprise AI Success Depends on Local Compute, Data Governance, and Smarter Infrastructure
Ahead of the AI & Big Data Expo North America 2026 at the San Jose McEnery Convention Center, Jerome Gabryszewski, AI & Data Science Business Development Manager at HP Inc., shared insights on the growing challenges enterprises face with AI infrastructure, data management, and cloud computing costs. According to Gabryszewski, many organizations underestimate the “architectural debt” hidden within their data systems, including fragmented ownership, inconsistent data structures, and outdated infrastructure that was never built for interoperability. He explained that before companies can automate AI workflows effectively, they must first resolve governance and integration issues across their data environments.
Gabryszewski also warned that continuously updating AI systems introduces major risks, such as concept drift and data poisoning. Enterprises, he said, should treat AI model updates like software deployments, using validation gates, automated drift detection, and human oversight before retraining models. He emphasized that companies with the strongest AI governance frameworks are often the ones best positioned to scale AI safely.
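The article names two of these controls: automated drift detection and validation gates before redeployment. As a rough illustration (not HP's implementation), a minimal sketch might flag drift with a simple z-test on an incoming batch of feature values and gate a retrained model on a minimum accuracy; the function names and thresholds here are illustrative assumptions.

```python
import statistics

def detect_drift(baseline, incoming, threshold=3.0):
    """Flag drift when the incoming batch mean shifts more than
    `threshold` standard errors from the baseline mean (simple z-test)."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    std_err = base_std / len(incoming) ** 0.5
    z = abs(statistics.mean(incoming) - base_mean) / std_err
    return z > threshold

def validation_gate(model_metrics, min_accuracy=0.90):
    """Block a retrained model from deployment unless it clears the gate;
    in practice a human reviewer would also sign off."""
    return model_metrics.get("accuracy", 0.0) >= min_accuracy
```

Production systems typically use richer statistics (e.g. population stability index) and gate on several metrics at once, but the deployment-style flow is the same: detect, validate, then release.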
On the hardware side, HP believes enterprise AI increasingly requires powerful local compute infrastructure rather than relying entirely on the cloud. Gabryszewski highlighted HP’s Z-series systems, including the ZGX Nano and ZGX Fury, which are designed to run large language models locally for fine-tuning, inference, and autonomous AI workflows. According to HP, the goal is not simply more compute power, but reducing latency, improving governance, and keeping sensitive data under organizational control.
The company also addressed the rising cost of generative AI. HP estimates enterprise GenAI spending reached $37 billion in 2025, with many organizations exceeding their AI budgets by more than 25 percent. Gabryszewski argued that businesses should separate experimental AI workloads from production deployments and adopt a three-tier infrastructure strategy: cloud services for burst training and frontier models, on-premises systems for predictable inference workloads, and edge compute for latency-sensitive applications.
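The three-tier strategy above amounts to a routing decision per workload. A minimal sketch of that decision logic, using illustrative field names rather than anything HP specifies, might look like this:

```python
def choose_tier(workload):
    """Route a workload to edge, cloud, or on-premises infrastructure
    following the three-tier split described in the article.
    The dictionary keys are illustrative assumptions."""
    if workload.get("latency_sensitive"):
        return "edge"          # latency-sensitive applications
    if workload.get("kind") == "training" and workload.get("bursty"):
        return "cloud"         # burst training and frontier models
    if workload.get("kind") == "inference" and workload.get("predictable"):
        return "on_prem"       # predictable inference workloads
    return "cloud"             # experimental work stays in the cloud by default
```

The default branch reflects Gabryszewski's point about separating experiments from production: unproven workloads run in the cloud until their demand profile justifies dedicated hardware.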
HP further emphasized the importance of keeping proprietary enterprise data secure. The company recommends Retrieval-Augmented Generation (RAG) systems running on local infrastructure, allowing AI models to retrieve internal information without exposing sensitive data to external cloud providers. According to Gabryszewski, successful organizations are “bringing the intelligence to the data, not the other way around.”
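The core idea of local RAG is that the retrieval step runs against internal documents on the organization's own infrastructure, so only the retrieved snippets (not the full data store) ever reach a model. A toy sketch, using simple token overlap in place of the embedding search a real system would use, could look like this; all names are illustrative:

```python
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, top_k=1):
    """Rank internal documents by token overlap with the query (Jaccard
    similarity). A real deployment would use vector embeddings, but the
    retrieval stays entirely on local infrastructure either way."""
    q = tokenize(query)
    scored = []
    for doc in documents:
        d = tokenize(doc)
        union = q | d
        score = len(q & d) / len(union) if union else 0.0
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

def build_prompt(query, documents):
    """Augment the model prompt with locally retrieved context only."""
    context = "\n".join(retrieve(query, documents, top_k=2))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Only the output of `build_prompt` would be sent to a model, which is what "bringing the intelligence to the data" means in practice: the document store never leaves organizational control.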
Looking ahead, HP predicts enterprise IT teams will increasingly shift away from routine operational work toward managing and governing AI agents. Referencing comments from Jensen Huang, Gabryszewski said the future role of IT teams will focus less on repetitive tasks and more on deciding which AI systems can be trusted, how they are governed, and whether the infrastructure supporting them remains secure and transparent. Research from Gartner suggests that by the end of 2026, around 40 percent of enterprise applications could include embedded AI agents, accelerating the need for stronger governance and infrastructure oversight.
