
HPE Introduces AI Grid to Connect AI Factories and Distributed Inference Clusters Using NVIDIA Reference Architecture

Certification
China Beijing Qianxing Jietong Technology Co., Ltd. certification
Customer Reviews
The sales staff at Beijing Qianxing Jietong Technology Co., Ltd. are very professional and patient, and they can provide quotations quickly. The product quality and packaging are also very good. Our cooperation has been very smooth.

—— 《Festfing DV》LLC

When I was urgently looking for Intel CPUs and Toshiba SSDs, Sandy from Beijing Qianxing Jietong Technology Co., Ltd. helped me a great deal and quickly got me the products I needed. I am truly grateful to her.

—— Cat N

Sandy at Beijing Qianxing Jietong Technology Co., Ltd. is a very attentive salesperson who reminded me of configuration errors in time when I purchased servers. The engineers are also very professional and can complete the testing process quickly.

—— Strelkin Mikhail Vladimirovich

We are very satisfied with our collaboration with Beijing Qianxing Jietong. The product quality is excellent, and delivery is always on time. The sales team is professional, patient, and answers every question kindly. We sincerely appreciate their support and look forward to a long-term partnership. Highly recommended!

—— Ahmad Navid

Quality: a good experience with my supplier. The MikroTik RB3011 was pre-owned but in very good condition, and everything works perfectly. Communication was fast and smooth, and all my concerns were resolved quickly. A very reliable supplier.

—— Jeran Colesio



April 15, 2026
HPE has unveiled the HPE AI Grid, an all-inclusive infrastructure offering aligned with the NVIDIA AI Grid reference architecture. This solution is engineered to securely link AI factories and distributed inference clusters across regional and remote edge locations, with HPE positioning it specifically for service providers tasked with deploying and managing thousands of distributed inference sites as a unified, coordinated system.


HPE AI Grid image
HPE presents the AI Grid as a targeted solution for AI-native applications, which are increasingly demanding predictable latency, consistent deterministic performance, and distributed deployment capabilities. The company asserts that the platform delivers ultra-low latency at scale, complemented by zero-touch provisioning, integrated orchestration, and automated security features—all designed to simplify lifecycle management across large, geographically scattered deployments.

Rami Rahim, Executive Vice President, President, and General Manager of Networking at HPE, outlined the strategy as bringing intelligence closer to the point where data is generated and utilized, noting that network infrastructure serves as a critical enabler for real-time AI services. Chris Penrose, NVIDIA’s Global Vice President of Telco, underscored the value of an AI Grid in connecting geographically dispersed clusters and dynamically allocating workloads based on performance, cost, and latency requirements. Under this collaboration, HPE provides multicloud routing and edge infrastructure, while NVIDIA delivers accelerated compute and networking components.
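The dynamic workload allocation NVIDIA's Chris Penrose describes can be pictured as a placement decision that weighs latency and cost across sites with available capacity. The following is a minimal illustrative sketch of that idea, not HPE or NVIDIA code; all site names, weights, and numbers are hypothetical.

```python
# Hypothetical sketch of latency/cost-aware workload placement across
# distributed inference sites, as described in the announcement.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float     # round-trip latency from the user to this site
    cost_per_hour: float  # compute cost of running the workload here
    free_gpus: int        # currently unallocated GPUs

def place_workload(sites, gpus_needed, w_latency=1.0, w_cost=0.5):
    """Pick the site with the lowest weighted latency+cost score
    among those with enough free GPU capacity."""
    candidates = [s for s in sites if s.free_gpus >= gpus_needed]
    if not candidates:
        return None  # no site can host the workload right now
    return min(candidates,
               key=lambda s: w_latency * s.latency_ms + w_cost * s.cost_per_hour)

sites = [
    Site("metro-edge-1", latency_ms=4, cost_per_hour=9.0, free_gpus=2),
    Site("regional-dc", latency_ms=18, cost_per_hour=4.0, free_gpus=16),
    Site("core-factory", latency_ms=45, cost_per_hour=2.5, free_gpus=64),
]

best = place_workload(sites, gpus_needed=4)
print(best.name)  # → regional-dc: the edge site lacks capacity for 4 GPUs
```

A real grid scheduler would also account for data locality, tenancy, and security constraints; the weighted score above only captures the performance-versus-cost trade-off named in the article.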

Full-Stack Hardware and Networking Foundation

HPE highlights that the HPE AI Grid offers a unified hardware and software platform tailored to support service-provider operational models, including multi-tenancy and cloud-native security protocols. At the core of its architecture are HPE Juniper’s telco-grade networking capabilities, which encompass multicloud routing and coherent optics for long-haul and metro connectivity. Additionally, HPE emphasizes integrated firewalls, WAN automation, and orchestration tools that enable zero-touch deployment and continuous lifecycle management for distributed AI infrastructure.
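Zero-touch deployment of the kind described above is commonly built on a reconciliation loop: each site is driven toward a centrally declared desired state rather than being configured by hand. The sketch below illustrates that general pattern only; it is not HPE Juniper code, and the site names and config strings are invented.

```python
# Hypothetical sketch of the reconciliation loop behind "zero-touch"
# deployment: compare declared desired state against observed actual
# state and emit the per-site actions needed to converge.

def reconcile(desired, actual):
    """Return the per-site actions needed to move `actual` to `desired`."""
    actions = {}
    for site, config in desired.items():
        if site not in actual:
            actions[site] = f"provision with {config}"
        elif actual[site] != config:
            actions[site] = f"update to {config}"
    for site in actual:
        if site not in desired:
            actions[site] = "decommission"
    return actions

desired = {"edge-7": "fw-2.1+router-v9", "edge-8": "fw-2.1+router-v9"}
actual = {"edge-7": "fw-2.0+router-v9", "edge-9": "fw-1.8"}
print(reconcile(actual=actual, desired=desired))
```

Running the loop continuously gives the "continuous lifecycle management" property: drift at any of thousands of sites shows up as a non-empty action set on the next pass.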


HPE ProLiant Compute DL380a Gen12
On the compute front, HPE is combining edge and rack servers with NVIDIA’s accelerated computing technologies and a high-performance networking and I/O stack. The platform supports NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, along with NVIDIA BlueField DPUs, Spectrum-X Ethernet switches, and ConnectX SuperNICs—all critical components for delivering high-efficiency AI processing. To further streamline deployment, the stack includes NVIDIA AI blueprints that expedite the rollout of inference services across distributed sites.

Service Provider Focus

HPE is targeting the AI Grid at applications that require predictable latency and reliable connectivity, such as retail personalization, predictive maintenance, edge healthcare services, and carrier-grade AI offerings. The company notes that the platform enables operators to convert existing sites with power and connectivity into RAN-ready AI grid nodes, effectively expanding the scope of inference deployment without requiring each location to be managed as an independent system.

Field Trials and Partner Ecosystem

As part of its broader HPE AI Grid announcement, Comcast revealed new AI field trials on its distributed network, focused on delivering real-time edge inferencing capabilities. These trials leverage Comcast’s nationwide, highly distributed architecture to test AI workloads running close to customers, unlocking faster, more responsive experiences for next-generation AI applications. HPE noted that early trials featured HPE ProLiant servers running small language models from Personal AI, a member of HPE’s Unleash AI partner program, on NVIDIA GPUs, delivering AI-powered “front desk” services tailored for small businesses. These services include greeting customers, managing appointments, answering questions, and supporting daily operations for small enterprises.

Beijing Qianxing Jietong Technology Co., Ltd.
Sandy Yang/Global Strategy Director
WhatsApp / WeChat: +86 13426366826
Email: yangyd@qianxingdata.com
Website: www.qianxingdata.com / www.storagesserver.com
Business Focus:
ICT Product Distribution/System Integration & Services/Infrastructure Solutions
With 20+ years of IT distribution experience, we partner with leading global brands to deliver reliable products and professional services.
“Using Technology to Build an Intelligent World” - Your Trusted ICT Product Service Provider!
Contact Details
Beijing Qianxing Jietong Technology Co., Ltd.

Contact Person: Ms. Sandy Yang

Phone Number: 13426366826
