GB200 NVL4 now looks all but certain to be one of the stars of 2H25.
Unlike the mainstream GB300 NVL72 standard rack, Dell's and HPE's product announcements suggest that GB200 NVL4 is being packaged as blade servers, packed densely in large numbers into a single cabinet.
That makes it even more GPU-dense than NVL72. Earlier talk of NVL144/288/576 was vague, but these densities are now showing up in official product plans: HPE has announced a 224-GPU cabinet and Dell a 144-GPU (NVL144-class) rack.
Each GB200 NVL4 board carries 2 Grace CPUs (not socketed) + 4 Blackwell GPUs (as mentioned last time, the photos suggest all four GPU modules are socketed, i.e. socket count = GPU count).
HPE builds GB200 NVL4 into a high-density GPU blade-server cabinet, the HPE Cray Supercomputing EX154n Accelerator Blade: up to 224 Blackwell GPUs in a single cabinet (not sure why it isn't a multiple of 72/144/288), which works out to 56 GB200 NVL4 boards. It is slated to ship before the end of 2025.
Dell builds GB200 NVL4 into its 50OU IR7000 rack with 144 GPUs, which works out to 36 GB200 NVL4 boards.
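A quick back-of-the-envelope check of the numbers above (my own Python sketch, not vendor data), assuming every GB200 NVL4 board carries exactly 4 Blackwell GPUs:

GPUS_PER_NVL4_BOARD = 4

rack_configs = {
    "HPE Cray EX154n cabinet": 224,  # GPUs, per the HPE press release
    "Dell IR7000 50OU rack": 144,    # GPUs, per the Dell announcement
}

for name, gpus in rack_configs.items():
    boards = gpus // GPUS_PER_NVL4_BOARD
    print(f"{name}: {gpus} GPUs -> {boards} NVL4 boards, "
          f"multiple of 72: {gpus % 72 == 0}")

# HPE: 224 GPUs -> 56 boards (224 is not a multiple of 72)
# Dell: 144 GPUs -> 36 boards (144 = 2 x 72)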
HPE Cray Supercomputing EX154n Accelerator Blade – To drastically reduce the time it takes to complete a supercomputing workload, the HPE Cray Supercomputing EX154n Accelerator Blade can accommodate up to 224 NVIDIA Blackwell GPUs in a single cabinet. Featuring the NVIDIA GB200 Grace Blackwell NVL4 Superchip, each accelerator blade holds four NVIDIA NVLink™-connected Blackwell GPUs unified with two NVIDIA Grace CPUs over NVIDIA NVLink-C2C.
https://www.hpe.com/us/en/newsroom/press-release/2024/11/hpe-expands-direct-liquid-cooled-supercomputing-solutions-introduces-two-ai-systems-for-service-providers-and-large-enterprises.html
Dell plans to support the upcoming NVIDIA GB200 Grace Blackwell NVL4 Superchip with a new Dell PowerEdge XE server designed for the Dell IR7000, supporting up to 144 GPUs per rack in a 50OU standard rack. The IR7000 rack supports large-scale HPC and AI workloads requiring high power and liquid cooling with the ability for near 100% heat capture.
https://www.dell.com/en-us/dt/corporate/newsroom/announcements/detailpage.press-releases~usa~2024~11~dell-ai-factory.htm#/filter-on/Country:en-us
https://www.facebook.com/share/p/19du2J2mbF/
NVIDIA GB200 is NVIDIA's latest AI superchip, built to meet fast-growing demand for AI compute. It combines two B200 GPUs with one Grace CPU, linked by NVLink-C2C for high-bandwidth, low-latency data transfer. The design marks AI's entry into the "trillion-parameter" era: it can handle language models with trillions of parameters while significantly improving large-model compute performance and energy efficiency. In terms of application scenarios, GB200 servers are suited to a wide range of AI workloads.
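To make the building blocks concrete, here is a small illustrative sketch (my own, in Python): the GB200 superchip pairs 1 Grace CPU with 2 Blackwell GPUs over NVLink-C2C, while a GB200 NVL4 board carries 2 Grace CPUs and 4 Blackwell GPUs, i.e. the same 2:1 GPU-to-CPU ratio.

from dataclasses import dataclass

@dataclass
class Module:
    name: str
    grace_cpus: int
    blackwell_gpus: int

gb200_superchip = Module("GB200 Grace Blackwell Superchip", grace_cpus=1, blackwell_gpus=2)
gb200_nvl4_board = Module("GB200 NVL4 board", grace_cpus=2, blackwell_gpus=4)

for m in (gb200_superchip, gb200_nvl4_board):
    ratio = m.blackwell_gpus / m.grace_cpus
    print(f"{m.name}: {m.grace_cpus} Grace + {m.blackwell_gpus} Blackwell "
          f"(GPU:CPU = {ratio:.0f}:1)")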
As AI continues to advance, GB200 is expected to become part of the core infrastructure at the world's top cloud providers (Google, Microsoft, Amazon, and others) by 2025. Analysts forecast GB200 rack shipments of roughly 40,000 units in 2025, with NVL72 accounting for about 10,000 racks and NVL36 about 30,000. Overall, NVIDIA GB200 servers are not just a technical breakthrough; they are a major force driving AI adoption in the market.
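For scale, a quick sketch of what that forecast implies (the rack figures are the ones cited above; the per-rack GPU counts of 72 for NVL72 and 36 for NVL36, and the resulting totals, are my own back-of-the-envelope assumption):

forecast_racks_2025 = {"NVL72": 10_000, "NVL36": 30_000}
gpus_per_rack = {"NVL72": 72, "NVL36": 36}  # assumed GPUs per rack

total_racks = sum(forecast_racks_2025.values())
total_gpus = sum(n * gpus_per_rack[k] for k, n in forecast_racks_2025.items())
print(total_racks)  # 40000 racks
print(total_gpus)   # 1,800,000 Blackwell GPUs implied (720k + 1,080k)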