Lenovo ThinkSystem SR675 V3 – 3U GPU-Dense Rack Server with AMD EPYC™ & NVIDIA HGX™
Keywords
Lenovo ThinkSystem SR675 V3, AMD EPYC 9004, AMD EPYC 9005, DDR5 memory, GPU rack server, NVIDIA HGX H200, Lenovo Neptune cooling, PCIe Gen 5, hybrid liquid cooling, LiCO orchestration
Description
The Lenovo ThinkSystem SR675 V3 delivers industry-leading GPU performance by combining up to two 5th-Generation AMD EPYC™ 9005 processors (128 cores per socket) with support for eight double-width GPUs connected via NVLink, or a dedicated NVIDIA HGX™ H200 4-GPU module featuring hybrid liquid-air cooling. With 24 DIMM sockets, the server scales to 3 TB of TruDDR5 memory at up to 6400 MHz (with 5th-Gen EPYC CPUs) or 6000 MHz (with 4th-Gen EPYC CPUs), delivering exceptional memory bandwidth for data-intensive AI training and HPC simulations.
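The quoted DIMM speeds translate directly into peak theoretical memory bandwidth. As a back-of-envelope sketch (assuming the 12 DDR5 channels per socket that the EPYC 9004/9005 platform provides, and a standard 64-bit channel):

```python
# Peak theoretical DDR5 bandwidth per socket:
#   channels × transfer rate (MT/s) × 8 bytes per transfer
CHANNELS_PER_SOCKET = 12  # EPYC 9004/9005 memory channels (assumption noted above)
BYTES_PER_TRANSFER = 8    # 64-bit DDR5 channel

def peak_bandwidth_gbs(mt_per_s: int, sockets: int = 1) -> float:
    """Peak theoretical bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return CHANNELS_PER_SOCKET * mt_per_s * 1e6 * BYTES_PER_TRANSFER * sockets / 1e9

print(peak_bandwidth_gbs(6400))             # 5th-Gen EPYC at 6400 MT/s: 614.4 GB/s per socket
print(peak_bandwidth_gbs(6000, sockets=2))  # 4th-Gen EPYC, dual socket: 1152.0 GB/s
```

Real sustained bandwidth is lower than this line-rate figure once refresh, scheduling, and access-pattern effects are included.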
Storage flexibility includes up to eight hot-swap 2.5″ SAS/SATA/NVMe drives on the base module or up to six EDSFF E1.S/E3.S NVMe SSDs in the dense module, while front- or rear-mounted configurations let you optimize capacity or performance according to workload demands. Six PCIe Gen 5 x16 slots plus an OCP NIC 3.0 slot enable high-speed networking and accelerator add-ons, and hot-swap 2600 W Titanium PSUs with full N+N redundancy ensure maximum uptime and energy efficiency in mission-critical environments.
Lenovo’s management ecosystem—XClarity Controller 2, Confluent integration, and the HPC & AI Software Stack with the LiCO Web portal—provides agent-free deployment, monitoring, and workflow automation for both AI and HPC workloads, abstracting cluster orchestration complexity and accelerating time to insight.
Key Features
- Dual AMD EPYC™ Processors: Supports 4th/5th-Gen EPYC 9004/9005 series with up to 128 cores per socket for massive parallelism
- Up to 3 TB DDR5 Memory: 24× DIMMs at up to 6400 MHz (5th-Gen) or 6000 MHz (4th-Gen) for maximum bandwidth
- GPU-Dense Architecture: Base module: 4× double-width GPUs; Dense module: 8× double-width GPUs; HGX module: NVIDIA HGX H200 4× SXM5 GPUs with NVLink
- Lenovo Neptune™ Hybrid Cooling: Closed-loop liquid-to-air cooling for HGX GPUs, reducing noise and power consumption
- Flexible Storage: Up to 8× 2.5″ hot-swap SAS/SATA/NVMe or 6× EDSFF NVMe SSDs, plus 2× M.2 boot SSDs (BOSS)
- 6× PCIe Gen 5 Slots + OCP NIC 3.0: High-throughput I/O expansion for networking or accelerators
- Enterprise Management: Lenovo XClarity Controller 2, Confluent, and the LiCO Web portal for HPC/AI orchestration
- Redundant Titanium PSUs: Four N+N hot-swap PSUs (up to 2600 W) with ASHRAE A2 support for energy efficiency
Configuration
Component | Specification |
---|---|
Form Factor | 3U rack (42.8 mm × 482.4 mm × 677–807 mm) |
Processors | 1× or 2× AMD EPYC™ 9004/9005 Series (LGA 6096; up to 128 cores per socket; TDP up to 400 W) |
Memory | 24× DDR5 DIMM sockets (TruDDR5, 64–128 GB DIMMs; up to 3 TB; 6000–6400 MHz) |
GPU Options | Base: 4× PCIe Gen 5 GPUs; Dense: 8× PCIe Gen 5 GPUs; HGX: 4× SXM5 GPUs with NVLink and Neptune cooling |
Storage | Base/Dense: up to 8× 2.5″ SAS/SATA/NVMe; Dense: up to 6× E1.S/E3.S NVMe SSDs; HGX: up to 4× 2.5″ NVMe; BOSS: 2× M.2 SSDs |
I/O Expansion | Up to 6× PCIe Gen 5 x16 (2 front, 4 rear); 1× OCP NIC 3.0 |
Networking | Front/rear high-speed fabric options; OCP NIC 3.0 supports x16/x8/x4 configurations |
Power & Cooling | 4× N+N hot-swap Titanium PSUs (up to 2600 W); ASHRAE A2; variable-speed fans; Neptune hybrid liquid cooling on HGX module |
Management | Lenovo XClarity Controller 2; Confluent; Lenovo HPC & AI Software Stack; LiCO Web portal |

Compatibility
- Operating Systems: Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Microsoft Windows Server, VMware ESXi, AlmaLinux, Rocky Linux, Canonical Ubuntu LTS
- Accelerator Ecosystem: NVIDIA Hopper (H100/H200), Ada Lovelace (L40), Ampere (A100), AMD Instinct™ MI Series
- Management Integrations: Microsoft System Center, VMware vCenter, BMC TrueSight, Red Hat Ansible Modules, Nagios, IBM Tivoli, Micro Focus Operations
Usage Scenarios
- AI Model Training & Inference: The HGX H200's 4× SXM5 GPUs with NVLink deliver scalable Multi-Instance GPU support (up to seven MIG partitions per GPU) for elastic data-center deployments.
- High-Performance Computing (HPC): Dual-socket AMD EPYC CPUs and 3 TB of DDR5 memory accelerate large-scale simulations, genomics, and weather modeling in space-constrained racks.
- Visualization & Rendering: Up to eight double-width GPUs deliver real-time ray tracing and VR workloads, while Neptune cooling maintains quiet, efficient operation in development studios.
- Scalable Cluster Orchestration: The LiCO Web portal and Confluent management simplify cluster provisioning, multi-user workload scheduling, and automated scaling for both AI and HPC environments.
Frequently Asked Questions
Q1: What is the maximum GPU density in the SR675 V3?
A1: In the dense module, it supports up to eight double-width, full-height, full-length GPUs over a PCIe Gen 5 fabric; the HGX module supports four SXM5 GPUs interconnected via NVLink.
Q2: How does Lenovo Neptune™ hybrid cooling work?
A2: Lenovo Neptune™ uses a closed-loop liquid-to-air heat exchanger on HGX GPUs to remove heat more efficiently than air alone, reducing PSU load, acoustic noise, and thermal hotspots.
Q3: Can I mix 4th-Gen and 5th-Gen EPYC CPUs?
A3: No—processor generations must match per node. To achieve 6400 MHz DDR5, use only 5th-Gen EPYC CPUs; 4th-Gen CPUs operate at up to 6000 MHz.
Q4: What high-speed networking options are available?
A4: An OCP NIC 3.0 slot supports dual 10/25/100 GbE adapters, and rear fabric modules offer a choice of InfiniBand, Omni-Path, or Ethernet to scale cluster interconnects.
Q5: How is system management simplified at scale?
A5: Lenovo XClarity Controller 2 provides embedded, agent-free management; Confluent integration and the LiCO portal enable one-to-many orchestration, workflow templates for AI/HPC, and centralized monitoring.
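Agent-free management controllers such as XClarity Controller 2 expose a DMTF Redfish REST interface. As a minimal sketch of what one-to-many monitoring builds on, the snippet below parses the kind of `ComputerSystem` resource returned by `GET /redfish/v1/Systems/1`; the JSON payload is a hand-written example following the Redfish schema, not actual XCC output:

```python
import json

# Hand-written example of a Redfish ComputerSystem resource
# (field names follow the DMTF Redfish schema).
sample = json.loads("""
{
  "PowerState": "On",
  "ProcessorSummary": {"Count": 2, "Model": "AMD EPYC"},
  "MemorySummary": {"TotalSystemMemoryGiB": 3072},
  "Status": {"Health": "OK", "State": "Enabled"}
}
""")

def summarize(system: dict) -> str:
    """One-line health summary from a Redfish ComputerSystem dict."""
    return (f"power={system['PowerState']} "
            f"health={system['Status']['Health']} "
            f"cpus={system['ProcessorSummary']['Count']} "
            f"mem_gib={system['MemorySummary']['TotalSystemMemoryGiB']}")

print(summarize(sample))  # power=On health=OK cpus=2 mem_gib=3072
```

In a real deployment, a management tool would fetch this resource over authenticated HTTPS from each BMC and aggregate the summaries centrally.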
Limited-Time Savings on Inspur NF5466M6 4U Rack Server PN NF5466M6 – High-Density AI & Storage Solution
Keywords
Inspur NF5466M6, Inspur NF5466M5, Inspur NF5468M6, Inspur NF8480M6, Inspur NF8260M5, dual-socket rack server, 4U GPU server, custom rackmount PC, high-performance Inspur servers, enterprise rackmount solutions
Description
Inspur’s NF5466M6 rack server delivers a perfect balance of compute, memory, storage, and expansion—ideal for AI, virtualization, and large-scale data environments. With dual-socket support for the latest Intel® Xeon® Gold processors, the NF5466M6 unlocks massive parallelism for demanding workloads.
Building on that foundation, the NF5466M5 and NF5468M6 variants offer tailored configurations: NF5466M5 focuses on storage density with up to 24 hot-swap bays, while NF5468M6 adds GPU-optimized trays for AI inference. The broader NF series includes 2U GPU servers such as the NF8260M5 and NF5280M6, plus the NF8480M6 2U rack PC host—ensuring an Inspur solution for every enterprise need.
All Inspur NF rack servers—from NF8260M5 to NF8480M6 to NF5466M6—ship ready for customization. Choose your CPUs, memory, drives, and GPU options, then deploy quickly thanks to Inspur’s modular, hot-swappable design and comprehensive management tools.
Key Features
- Dual-Socket Performance: Supports two Intel Xeon Gold or Silver CPUs for up to 64 cores.
- Massive Memory: Up to 2 TB DDR4 RDIMM across 32 slots (NF5466M6).
- Storage Density: Up to 24× 3.5″ hot-swap bays (NF5466M5/M6) or 16× 2.5″ NVMe (NF8480M6).
- GPU-Ready: NF5468M6 and NF8260M5 provide up to 8 full-height PCIe GPU slots.
- Flexible Form Factors: 2U (NF8260M5, NF8480M6), 4U (NF5466M6, NF5466M5), and blade-style options.
- Redundancy & Reliability: Hot-swap PSUs, fans, and drives; ECC memory; integrated hardware monitoring.
- Easy Management: Inspur InCloud or the Redfish API for remote deployment and firmware updates.
Configuration
Model | Form Factor | CPU Options | Memory Capacity | Storage Bays | GPU Slots |
---|---|---|---|---|---|
NF5466M6 | 4U | 2× Intel Xeon Gold (up to 28 cores ea.) | Up to 2 TB DDR4 | 24×3.5″ hot-swap | 4× FH PCIe |
NF5466M5 | 4U | 2× Intel Xeon Silver/Gold | Up to 1 TB DDR4 | 24×3.5″ hot-swap | 2× FH PCIe |
NF5468M6 | 4U | 2× Intel Xeon Silver/Gold | Up to 1 TB DDR4 | 16×3.5″ + GPU trays | 8× FH PCIe |
NF8480M6 | 2U | 1× Intel Xeon Gold 5315Y/6330A | Up to 512 GB DDR4 | 8×2.5″ NVMe | 2× FH PCIe |
NF8260M5 | 2U | 2× Intel Xeon Gold | Up to 512 GB DDR4 | 8×2.5″ SAS/SATA | 8× FH PCIe |
NF5280M6 | 2U | 1× Intel Xeon Silver/Gold | Up to 256 GB DDR4 | 8×2.5″ SAS/SATA | 4× LP PCIe |
NF5270M6 | 2U | 2× Intel Xeon Silver | Up to 256 GB DDR4 | 8×2.5″ or 4×3.5″ | – |
Compatibility
All Inspur NF series servers share a common chassis design, power architecture, and management interfaces, allowing you to mix NF5466, NF5468, NF8260, and NF8480 models in the same rack. PCIe Gen4 slots and standard OCP networking bays ensure you can deploy the latest add-in cards and 25/40 GbE adapters. Each model supports Linux (RHEL, Ubuntu) and Windows Server, plus container orchestration via Kubernetes.
Usage Scenarios
- AI Training & Inference: Leverage the NF5468M6's eight GPU slots for large-scale deep-learning frameworks. Its high memory bandwidth keeps data pipelines saturated under peak workloads.
- Virtualized Cloud & VDI: Deploy clusters of NF5466M6 servers in your private cloud. Dual-socket CPUs and up to 2 TB of RAM allow hundreds of VMs or thousands of containers to run concurrently.
- Enterprise Storage & Backup: Use the NF5466M5's 24 hot-swap bays for large-capacity backup and archive solutions. Combine HDDs and SSDs in hybrid arrays to balance performance and cost.
- High-Performance Computing (HPC): In a 2U form factor, the NF8260M5 combines GPU acceleration of up to eight cards with dual-CPU compute, ideal for scientific simulations and financial modeling.
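For the storage scenario above, usable capacity depends on how the 24 bays are split between data, parity, and spares. A quick sketch (drive size and RAID layout are illustrative assumptions, not Inspur specifications):

```python
# Usable-capacity estimate for a 24-bay chassis such as the NF5466M5.
# Drive size and RAID layout below are illustrative assumptions.
def usable_tb(bays: int, drive_tb: float, parity_drives: int, hot_spares: int = 0) -> float:
    """Raw capacity minus parity and hot-spare drives (e.g. RAID 6 uses 2 parity drives)."""
    data_drives = bays - parity_drives - hot_spares
    return data_drives * drive_tb

# 24 bays with hypothetical 20 TB HDDs, RAID 6, one hot spare:
print(usable_tb(24, 20.0, parity_drives=2, hot_spares=1))  # 420.0 TB usable
```

Filesystem overhead and TB-vs-TiB accounting reduce the figure further in practice.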
Frequently Asked Questions (FAQs)
- Which Inspur model is best for GPU-heavy workloads? For maximum GPU density, choose the NF5468M6 (8× full-height GPU slots) or the NF8260M5 (8 cards in 2U) for large parallel training tasks.
- Can I mix HDDs and NVMe drives? Yes. The NF5466M6 chassis supports hybrid configurations mixing 3.5″ HDDs and 2.5″ NVMe drives, while the NF8480M6 is optimized for NVMe-only arrays.
- What remote-management tools are available? Inspur InCloud provides a web UI and a RESTful Redfish API. Out-of-band management via IPMI is standard across all NF series models.
- Are these servers covered by on-site support? Yes. Inspur offers a factory warranty with optional 3-year or 5-year on-site response SLAs, including parts, labor, and firmware upgrades.
Supermicro X14, GPU & AMD Server Portfolio PN X14-SYS – Exclusive Rackmount Solutions Sale
Keywords
Supermicro X14 servers, Supermicro GPU servers, Supermicro AMD servers, SYS-112C-TN, AS-4124GO-NART+, AS-1115CS-TNR-G1, enterprise rack servers, high-performance compute, AI ready servers, cloud datacenter servers
Description
Our curated Supermicro server portfolio brings together the latest X14 generation, GPU-accelerated platforms, and AMD-powered systems in one place. Whether you’re building a cloud datacenter or deploying AI inference nodes, these Supermicro X14 servers deliver industry-leading performance and density.
Explore 1U and 2U chassis optimized for storage-only (WIO), hyperconverged workloads (Hyper), or cloud-scale deployments (CloudDC). Each X14 SuperServer features the next-gen Intel Xeon Scalable processors, PCIe Gen5 expansion, and flexible I/O trays for NVMe, U.2, or U.3 drives.
For GPU-heavy applications, our 4U and 5U Supermicro GPU servers—such as the AS-4124GO-NART+ and AS-5126GS-TNRT2—support up to eight double-wide GPUs, advanced cooling, and 4× 100GbE or HDR InfiniBand networking. Meanwhile, our AMD lineup—from the AS-1115CS-TNR-G1 1U Gold Series to the 2U AS-2015HS-TNR SuperServer—offers unparalleled memory bandwidth and core counts for virtualization and HPC.
Key Features
- X14 Generation Platforms: Intel Xeon Scalable processor support with PCIe Gen5 slots.
- Flexible Chassis Options: 1U CloudDC, Hyper, and WIO; 2U Hyper and WIO SuperServers.
- GPU-Optimized Solutions: 4U AS-4124GO-NART+ and 5U AS-5126GS-TNRT2 for AI/ML training.
- High-Core AMD Configurations: 1U and 2U Gold Series AMD EPYC servers.
- Advanced Cooling & Redundancy: Hot-swap fans, PSUs, and tool-less drive trays.
- Enterprise Networking: OCP 3.0 slots with 100GbE and HDR InfiniBand options.
Configuration
Category | Model Series | Form Factor | CPU Family | Max GPUs | Drive Bays |
---|---|---|---|---|---|
X14 Servers | SYS-112C-TN, SYS-112H-TN, SYS-122H-TN, SYS-112B-WR | 1U | Intel Xeon Scalable | – | Up to 4× U.3 or 8× 2.5″ |
X14 Servers | SYS-212H-TN, SYS-222H-TN, SYS-522B-WR | 2U | Intel Xeon Scalable | – | Up to 12× U.3 or 24× 2.5″ |
GPU Servers | AS-4124GO-NART+ | 4U | AMD EPYC™ 7003 Series | 4–8 | 12× U.3 + GPU trays |
GPU Servers | AS-4125GS-TNRT2, AS-5126GS-TNRT, AS-5126GS-TNRT2 | 4U/5U | AMD EPYC™ (H13/H14 platforms) | 8 | 16× U.3 + GPU trays |
AMD Servers | AS-1115CS-TNR-G1, AS-1115HS-TNR-G1, AS-1125HS-TNR-G1 | 1U | AMD EPYC™ | – | Up to 8× 2.5″ |
AMD Servers | AS-2015CS-TNR-G1, AS-2015HS-TNR | 2U | AMD EPYC™ | – | Up to 12× 2.5″ |
Compatibility
All Supermicro X14, GPU, and AMD servers use standard 19″ rack rails and share hot-swap PSUs, fans, and EEPROM management modules. The X14 and GPU platforms support OCP 3.0 NICs, enabling seamless integration of 25/50/100 GbE or InfiniBand cards. AMD Gold Series servers are fully compatible with Linux distributions (RHEL, Ubuntu) and container orchestration via Kubernetes.
Usage Scenarios
- Cloud Data Centers: Deploy the 1U CloudDC SYS-112C-TN with Intel Xeon CPUs and up to 8 NVMe drives for high-density tenant hosting.
- AI & GPU-Accelerated Workloads: Use the 4U AS-4124GO-NART+ SuperServer with 4–8 high-wattage GPUs for model training in TensorFlow or PyTorch environments.
- High-Performance Computing (HPC): Leverage the AMD EPYC Gold Series AS-2015HS-TNR in 2U to run large-scale simulations and data analytics with high core counts and memory bandwidth.
- Edge & Enterprise Virtualization: Utilize the 1U Hyper SYS-112H-TN or the AS-1115CS-TNR-G1 AMD server at branch offices for cost-effective virtual desktop and application hosting.
Frequently Asked Questions (FAQs)
- Which Supermicro model is best for GPU-heavy AI training? The AS-4124GO-NART+ (4U) and AS-5126GS-TNRT2 (5U) support up to eight double-wide GPUs and advanced liquid-air hybrid cooling for sustained AI workloads.
- Can I mix Intel and AMD servers in one rack? Yes. All X14 and AMD Gold Series servers share rack-mount hardware, power, and management modules. Use centralized BMC or IPMI for unified control.
- What storage options are supported on X14 WIO models? The SYS-112B-WR and SYS-522B-WR support up to 8 and 12 U.3 NVMe drives respectively, offering sub-millisecond latency for real-time analytics.
- How do I enable high-speed networking? Install an OCP 3.0 100GbE or HDR InfiniBand adapter into the designated mezzanine slots on X14 and GPU servers for low-latency, high-bandwidth connectivity.
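To put the quoted fabric speeds in context, a quick conversion from nominal line rate to transfer time is useful when sizing interconnects (the 50 GB checkpoint size is an illustrative assumption; real throughput is lower than line rate once protocol overhead is included):

```python
# Back-of-envelope transfer times over the quoted fabrics.
# Values are nominal link speeds in Gb/s; achievable throughput is lower.
LINK_GBPS = {"100GbE": 100, "HDR InfiniBand": 200}

def transfer_seconds(size_gb: float, link: str) -> float:
    """Seconds to move size_gb gigabytes at the nominal line rate (8 bits per byte)."""
    return size_gb * 8 / LINK_GBPS[link]

# e.g. moving a hypothetical 50 GB model checkpoint between nodes:
for name in LINK_GBPS:
    print(f"{name}: {transfer_seconds(50, name):.1f} s")  # 100GbE: 4.0 s, HDR: 2.0 s
```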