Lenovo ThinkSystem SR860 V3 – Scalable 4U Rack Server with Up to 4× Intel® Xeon® Scalable CPUs
Keywords
Lenovo ThinkSystem SR860 V3, 4th Gen Intel Xeon Scalable, 4-socket server, DDR5 memory, NVMe storage, GPU-dense, PCIe 5.0, Lenovo Neptune cooling, XClarity management, 4U rack server
Description
The Lenovo ThinkSystem SR860 V3 is a 4U, 4-socket rack server designed for mission-critical workloads—from SAP HANA in-memory computing to AI training and large-scale virtualization. Powered by up to four 4th Generation Intel® Xeon® Scalable processors (up to 60 cores each, up to 3.7 GHz) connected in a mesh topology over three UPI links, it delivers massive parallel compute with up to 480 threads.
With 64 DDR5 DIMM slots supporting speeds up to 4800 MT/s and 16 TB of total capacity, the SR860 V3 provides very high memory bandwidth for data-intensive analytics and in-memory databases. Its flexible storage architecture offers up to 48× 2.5″ drive bays (including 24 AnyBay NVMe/SAS/SATA bays), plus rear 7 mm or M.2 boot drives, easing I/O bottlenecks and enabling sub-millisecond storage response times.
For GPU-accelerated workloads, the SR860 V3 supports up to four double-width 350 W GPUs (e.g., NVIDIA A100/H100) or eight single-width 75 W GPUs, with optional Lenovo Neptune™ liquid-to-air hybrid cooling to maintain optimal thermals under full load.
Enterprise-grade management is delivered via Lenovo XClarity Controller 2 and the XClarity Administrator suite—automating deployment, firmware updates, and health monitoring to reduce provisioning time by up to 95%.
Key Features
- Up to 4× 4th Gen Intel® Xeon® Scalable CPUs: Mesh topology, 3 UPI links, 240 cores/480 threads max
- 16 TB DDR5 Memory: 64 DIMM slots, 4800 MT/s for the fastest in-memory workloads
- Massive GPU Support: Four 350 W double-width or eight 75 W single-width GPUs for AI/HPC acceleration
- 48× 2.5″ Drive Bays & NVMe AnyBay: Up to 24 direct-attach NVMe drives, easing storage bottlenecks
- Lenovo Neptune™ Cooling: Hybrid liquid-to-air heat exchanger for GPU modules, reducing noise and power
- PCIe 5.0 & OCP 3.0: Up to 16 Gen5/Gen4 PCIe slots plus two OCP 3.0 slots for 1–100 GbE networking
- Lenovo XClarity Management: Embedded XClarity Controller 2, XClarity Administrator, Integrator plugins, and the Redfish API (a minimal Redfish query sketch follows this list)
- Enterprise-grade Reliability: Predictive Failure Analysis, light-path diagnostics, RoT/PFR security
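To illustrate the Redfish interface mentioned above, here is a minimal Python sketch that walks the system collection on an XClarity Controller 2 and prints power and health state. The BMC address and credentials are placeholder assumptions; any Redfish-conformant client would work the same way.

```python
# Minimal sketch: query system health from an XClarity Controller 2 (XCC2)
# over the standard DMTF Redfish API. The BMC address and credentials are
# placeholder assumptions; members are discovered from /redfish/v1/Systems.
import requests

BMC = "https://10.0.0.10"          # assumed XCC2 address
AUTH = ("USERID", "PASSW0RD")      # assumed credentials

def get(path):
    # Self-signed BMC certificates are common, hence verify=False here.
    r = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

# Enumerate the Systems collection instead of hard-coding a member ID.
for member in get("/redfish/v1/Systems").get("Members", []):
    system = get(member["@odata.id"])
    print(system.get("Model"),
          "| Power:", system.get("PowerState"),
          "| Health:", system.get("Status", {}).get("Health"))
```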
Configuration
Component | Specification |
---|---|
Form Factor | 4U rack, 42.8 × 482.4 × 807 mm |
Processors | 2–4× 4th Gen Intel® Xeon® Scalable (up to 350 W, 60 cores/CPU, mesh UPI) |
Memory | 64× DDR5 slots (64–256 GB TruDDR5 DIMMs, up to 4800 MT/s, 16 TB max) |
Drive Bays | Front: 48× 2.5″ AnyBay (NVMe/SAS/SATA) or direct NVMe; Rear: 2× 7 mm or 2× M.2 boot SSDs |
GPU Options | Up to 4× double-width 350 W GPUs or 8× single-width 75 W GPUs; Lenovo Neptune cooling for HGX modules |
I/O Expansion | Up to 16× PCIe slots (12× Gen5 + 4× Gen4) or 18× Gen4; 2× OCP 3.0 slots |
Networking | OCP 3.0 slots support 1/10/25/100 GbE; optional InfiniBand adapters |
Power Supplies | 2–4× Titanium/Platinum hot-swap PSUs (N+N redundancy) |
Management | XClarity Controller 2, XClarity Administrator, Redfish API; LiCO orchestration optional |
Compatibility
Supports all major OS and hypervisors:
- Microsoft Windows Server (2016/2019/2022)
- Red Hat Enterprise Linux 8.x
- SUSE Linux Enterprise Server 15.x
- VMware ESXi 7.x
- Ubuntu 22.04 LTS (via Canonical certification)
Usage Scenarios
- In-Memory Databases (SAP HANA): Mesh-connected CPUs and 16 TB of DDR5 accelerate real-time analytics with sub-millisecond latency.
- AI Training & Inference: Up to four 350 W GPUs with NVLink and Neptune cooling deliver scalable multi-instance GPU (MIG) performance for deep learning workloads (a GPU/MIG enumeration sketch follows this list).
- High-Performance Computing (HPC): 64 DIMMs at 4800 MT/s and PCIe 5.0 I/O maximize throughput for simulations, genomic sequencing, and weather modeling.
- Virtualization & VDI: Up to 480 threads and 48 NVMe drives enable high VM density with low boot-storm impact and rapid cloning.
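As a companion to the AI scenario above, the following Python sketch enumerates installed NVIDIA GPUs and reports whether MIG mode is enabled on each—useful when validating a freshly provisioned GPU configuration. It assumes the `nvidia-ml-py` package (imported as `pynvml`) and an NVIDIA driver are present on the host.

```python
# Minimal sketch: enumerate NVIDIA GPUs and report MIG mode on each.
# Assumes the nvidia-ml-py package (pynvml) and an NVIDIA driver are installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        try:
            current, _pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig = "not supported"            # e.g. single-width or pre-Ampere cards
        print(f"GPU {i}: {name}, {mem.total // 2**30} GiB, MIG {mig}")
finally:
    pynvml.nvmlShutdown()
```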
Frequently Asked Questions
Q1: Can the SR860 V3 mix 350 W and 75 W GPUs?
A1: Yes—you may install up to four double-width 350 W GPUs or up to eight single-width 75 W GPUs, but mixing both widths in the same chassis is not supported.
Q2: How does the “pay-as-you-grow” CPU upgrade work?
A2: You can start with two 4th Gen Intel Xeon Scalable CPUs and later add a customer-installable mezzanine tray to scale up to four CPUs and 64 DIMMs without replacing the server.
Q3: What cooling options exist for GPU modules?
A3: Lenovo Neptune™ liquid-to-air hybrid cooling is available for HGX GPU modules, offering up to 30% lower acoustic noise and 20% energy savings over air-only cooling.
Q4: Which RAID options are supported?
A4: The SR860 V3 supports ThinkSystem PCIe RAID/HBA cards as well as CPU-based Intel VROC for RAID 0/1 boot volumes; hardware RAID controllers are required for data drives.
Q5: What network fabrics can be deployed?
A5: Two OCP 3.0 slots can host adapters for 1/10/25/100 GbE, InfiniBand EDR/HDR, or iWARP, enabling flexible cluster interconnects and low-latency storage fabrics.
Related Products
- Lenovo SR860V3 Rack Server 4U Dual-GPU Host ... – Part Number: SR860V3...
  - Availability: In Stock
  - Condition: New from factory
  - List price: $7,999.00
  - Your price: $5,999.00
  - You save: $2,000.00
- Lenovo ThinkSystem SR860 Server - Dual Intel Xeon ... – Part Number: SR860...
  - Availability: In Stock
  - Condition: New from factory
  - List price: $9,999.00
  - Your price: $7,999.00
  - You save: $2,000.00
- Unleash unmatched performance with the serv... – Part Number: SR860 V2...
  - Availability: In Stock
  - Condition: New from factory
  - List price: $5,732.00
  - Your price: $4,175.00
  - You save: $1,557.00
Limited-Time Savings on Inspur NF5466M6 4U Rack Server PN NF5466M6 – High-Density AI & Storage Solution
Keywords
Inspur NF5466M6, Inspur NF5466M5, Inspur NF5468M6, Inspur NF8480M6, Inspur NF8260M5, dual-socket rack server, 4U GPU server, custom rackmount PC, high-performance Inspur servers, enterprise rackmount solutions
Description
Inspur’s NF5466M6 rack server delivers a perfect balance of compute, memory, storage, and expansion—ideal for AI, virtualization, and large-scale data environments. With dual-socket support for the latest Intel® Xeon® Gold processors, the NF5466M6 unlocks massive parallelism for demanding workloads.
Building on that foundation, the NF5466M5 and NF5468M6 variants offer tailored configurations: the NF5466M5 focuses on storage density with up to 24 hot-swap bays, while the NF5468M6 adds GPU-optimized trays for AI inference. The broader NF series includes 2U GPU servers such as the NF8260M5 and NF5280M6, plus the NF8480M6 2U rackmount host—ensuring an Inspur solution for every enterprise need.
All Inspur NF rack servers—from NF8260M5 to NF8480M6 to NF5466M6—ship ready for customization. Choose your CPUs, memory, drives, and GPU options, then deploy quickly thanks to Inspur’s modular, hot-swappable design and comprehensive management tools.
Key Features
- Dual-Socket Performance: Supports two Intel Xeon Gold or Silver CPUs for up to 64 cores.
- Massive Memory: Up to 2 TB DDR4 RDIMM across 32 slots (NF5466M6).
- Storage Density: Up to 24× 3.5″ hot-swap bays (NF5466M5/M6) or 16× 2.5″ NVMe (NF8480M6).
- GPU-Ready: NF5468M6 and NF8260M5 provide up to 8 full-height PCIe GPU slots.
- Flexible Form Factors: 2U (NF8260M5, NF8480M6), 4U (NF5466M6, NF5466M5), and blade-style options.
- Redundancy & Reliability: Hot-swap PSUs, fans, and drives; ECC memory; integrated hardware monitoring.
- Easy Management: Inspur InCloud or the Redfish API for remote deployment and firmware updates (see the Redfish sketch after this list).
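As a concrete illustration of the Redfish management path noted in the last feature above, the sketch below issues the standard DMTF `ComputerSystem.Reset` action against a BMC. The address, credentials, and the `Systems/1` member ID are placeholder assumptions; Inspur InCloud or any Redfish-conformant client could perform the same operation.

```python
# Minimal sketch: request a graceful restart of a server through its BMC
# using the standard Redfish ComputerSystem.Reset action. The BMC address,
# credentials, and the Systems/1 member ID are placeholder assumptions.
import requests

BMC = "https://192.0.2.50"        # assumed BMC address
AUTH = ("admin", "admin")         # assumed credentials

resp = requests.post(
    f"{BMC}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
    json={"ResetType": "GracefulRestart"},
    auth=AUTH,
    verify=False,                 # BMCs commonly ship self-signed certificates
    timeout=10,
)
resp.raise_for_status()
print("Reset accepted, HTTP", resp.status_code)
```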
Configuration
Model | Form Factor | CPU Options | Memory Capacity | Storage Bays | GPU Slots |
---|---|---|---|---|---|
NF5466M6 | 4U | 2× Intel Xeon Gold (up to 28 cores ea.) | Up to 2 TB DDR4 | 24×3.5″ hot-swap | 4× FH PCIe |
NF5466M5 | 4U | 2× Intel Xeon Silver/Gold | Up to 1 TB DDR4 | 24×3.5″ hot-swap | 2× FH PCIe |
NF5468M6 | 4U | 2× Intel Xeon Silver/Gold | Up to 1 TB DDR4 | 16×3.5″ + GPU trays | 8× FH PCIe |
NF8480M6 | 2U | 1× Intel Xeon Gold 5315Y/6330A | Up to 512 GB DDR4 | 8×2.5″ NVMe | 2× FH PCIe |
NF8260M5 | 2U | 2× Intel Xeon Gold | Up to 512 GB DDR4 | 8×2.5″ SAS/SATA | 8× FH PCIe |
NF5280M6 | 2U | 1× Intel Xeon Silver/Gold | Up to 256 GB DDR4 | 8×2.5″ SAS/SATA | 4× LP PCIe |
NF5270M6 | 2U | 2× Intel Xeon Silver | Up to 256 GB DDR4 | 8×2.5″ or 4×3.5″ | – |
Compatibility
All Inspur NF series servers share a common chassis design, power architecture, and management interfaces, allowing you to mix NF5468, NF8480, NF8260, and NF5466 models in the same rack. PCIe Gen4 slots and standard OCP networking bays ensure you can deploy the latest add-in cards and 25/40 GbE adapters. Each model supports Linux (RHEL, Ubuntu) and Windows Server, plus container orchestration via Kubernetes.
Usage Scenarios
- AI Training & Inference: Leverage the NF5468M6’s eight GPU slots for large-scale deep-learning frameworks. Its high memory bandwidth keeps data pipelines saturated under peak workloads.
- Virtualized Cloud & VDI: Deploy clusters of NF5466M6 servers in your private cloud. Dual-socket CPUs and up to 2 TB of RAM allow hundreds of VMs or thousands of containers to run concurrently.
- Enterprise Storage & Backup: Use the NF5466M5’s 24 hot-swap bays for large-capacity backup and archive solutions. Combine HDDs and SSDs in hybrid arrays to balance performance and cost.
- High-Performance Computing (HPC): In a 2U form factor, the NF8260M5 combines GPU acceleration—up to eight cards—with dual-CPU compute, ideal for scientific simulations and financial modeling.
Frequently Asked Questions (FAQs)
- Which Inspur model is best for GPU-heavy workloads? For maximum GPU density, choose the NF5468M6 (8× full-height GPU slots) or the NF8260M5 (8 cards in 2U) for large parallel training tasks.
- Can I mix HDDs and NVMe drives? Yes. The NF5466M6 chassis supports hybrid configurations—mix 3.5″ HDDs and 2.5″ NVMe drives—while the NF8480M6 is optimized for NVMe-only arrays.
- What remote-management tools are available? Inspur InCloud provides a web UI and a RESTful Redfish API. Out-of-band management via IPMI is standard across all NF series models (an ipmitool polling sketch follows this list).
- Are these servers covered by on-site support? Yes. Inspur offers a factory warranty with optional 3-year or 5-year on-site response SLAs, including parts, labor, and firmware upgrades.
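Since out-of-band IPMI is called out in the FAQ above, the following Python sketch wraps the widely available `ipmitool` CLI to poll chassis power status across a small fleet. The host list and credentials are placeholder assumptions, and `ipmitool` must be installed on the admin workstation.

```python
# Minimal sketch: poll chassis power status on several BMCs out-of-band by
# shelling out to ipmitool (must be installed locally). Hosts and credentials
# below are placeholder assumptions.
import subprocess

BMC_HOSTS = ["192.0.2.60", "192.0.2.61"]   # assumed BMC addresses
USER, PASSWORD = "admin", "admin"          # assumed credentials

for host in BMC_HOSTS:
    cmd = [
        "ipmitool", "-I", "lanplus",       # IPMI-over-LAN (RMCP+) interface
        "-H", host, "-U", USER, "-P", PASSWORD,
        "chassis", "power", "status",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    status = result.stdout.strip() or result.stderr.strip()
    print(f"{host}: {status}")
```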
Supermicro X14, GPU & AMD Server Portfolio PN X14-SYS – Exclusive Rackmount Solutions Sale
Keywords
Supermicro X14 servers, Supermicro GPU servers, Supermicro AMD servers, SYS-112C-TN, AS-4124GO-NART+, AS-1115CS-TNR-G1, enterprise rack servers, high-performance compute, AI ready servers, cloud datacenter servers
Description
Our curated Supermicro server portfolio brings together the latest X14 generation, GPU-accelerated platforms, and AMD-powered systems in one place. Whether you’re building a cloud datacenter or deploying AI inference nodes, these Supermicro X14 servers deliver industry-leading performance and density.
Explore 1U and 2U chassis optimized for storage-only (WIO), hyperconverged workloads (Hyper), or cloud-scale deployments (CloudDC). Each X14 SuperServer features the next-gen Intel Xeon Scalable processors, PCIe Gen5 expansion, and flexible I/O trays for NVMe, U.2, or U.3 drives.
For GPU-heavy applications, our 4U and 5U Supermicro GPU servers—such as the AS-4124GO-NART+ and AS-5126GS-TNRT2—support up to eight double-wide GPUs, advanced cooling, and 4× 100GbE or HDR InfiniBand networking. Meanwhile, our AMD lineup—from the AS-1115CS-TNR-G1 1U Gold Series to the 2U AS-2015HS-TNR SuperServer—offers unparalleled memory bandwidth and core counts for virtualization and HPC.
Key Features
- X14 Generation Platforms: Intel Xeon Scalable Gen 4 support with PCIe Gen5 slots.
- Flexible Chassis Options: 1U CloudDC, Hyper, and WIO; 2U Hyper and WIO SuperServers.
- GPU-Optimized Solutions: 4U AS-4124GO-NART+ and 5U AS-5126GS-TNRT2 for AI/ML training.
- High-Core AMD Configurations: 1U and 2U Gold Series AMD EPYC servers.
- Advanced Cooling & Redundancy: Hot-swap fans, PSUs, and tool-less drive trays.
- Enterprise Networking: OCP 3.0 slots, 100GbE, and HDR InfiniBand options.
Configuration
Category | Model Series | Form Factor | CPU Family | Max GPUs | Drive Bays |
---|---|---|---|---|---|
X14 Servers | SYS-112C-TN, SYS-112H-TN, SYS-122H-TN, SYS-112B-WR | 1U | Intel Xeon Scalable Gen4 | – | Up to 4× U.3 or 8× 2.5″ |
X14 Servers | SYS-212H-TN, SYS-222H-TN, SYS-522B-WR | 2U | Intel Xeon Scalable Gen4 | – | Up to 12× U.3 or 24× 2.5″ |
GPU Servers | AS-4124GO-NART+ | 4U | Intel Xeon Scalable | 4–8 | 12× U.3 + GPU trays |
GPU Servers | AS-4125GS-TNRT2, AS-5126GS-TNRT, AS-5126GS-TNRT2 | 4U/5U | Intel Xeon Scalable H13/H14 | 8 | 16× U.3 + GPU trays |
AMD Servers | AS-1115CS-TNR-G1, AS-1115HS-TNR-G1, AS-1125HS-TNR-G1 | 1U | AMD EPYC™ 7003/7004 Series | – | Up to 8× 2.5″ |
AMD Servers | AS-2015CS-TNR-G1, AS-2015HS-TNR | 2U | AMD EPYC™ 7003/7004 Series | – | Up to 12× 2.5″ |
Compatibility
All Supermicro X14, GPU, and AMD servers use standard 19″ rack rails and share hot-swap PSUs, fans, and EEPROM management modules. The X14 and GPU platforms support OCP 3.0 NICs, enabling seamless integration of 25/50/100 GbE or InfiniBand cards. AMD Gold Series servers are fully compatible with Linux distributions (RHEL, Ubuntu) and container orchestration via Kubernetes.
Usage Scenarios
- Cloud Data Centers: Deploy the 1U CloudDC SYS-112C-TN with dual Intel Xeon Gen4 CPUs and up to 8 NVMe drives for high-density tenant hosting.
- AI & GPU-Accelerated Workloads: Use the 4U AS-4124GO-NART+ SuperServer with 4–8 high-wattage GPUs for model training in TensorFlow or PyTorch environments.
- High-Performance Computing (HPC): Leverage the AMD EPYC Gold Series AS-2015HS-TNR in 2U to run large-scale simulations and data analytics with high core counts and memory bandwidth.
- Edge & Enterprise Virtualization: Utilize the 1U Hyper SYS-112H-TN or the AS-1115CS-TNR-G1 AMD server at branch offices for cost-effective virtual desktop and application hosting.
Frequently Asked Questions (FAQs)
- Which Supermicro model is best for GPU-heavy AI training? The AS-4124GO-NART+ (4U) and AS-5126GS-TNRT2 (5U) support up to eight double-wide GPUs and advanced liquid-air hybrid cooling for sustained AI workloads.
- Can I mix Intel and AMD servers in one rack? Yes. All X14 and AMD Gold Series servers share rack-mount hardware, power, and management modules. Use centralized BMC or IPMI access for unified control.
- What storage options are supported on X14 WIO models? The SYS-112B-WR and SYS-522B-WR support up to 8 or 12 U.3 NVMe drives respectively, offering sub-millisecond latency for real-time analytics.
- How do I enable high-speed networking? Install an OCP 3.0 100GbE or HDR InfiniBand adapter into the designated mezzanine slots on X14 and GPU servers for low-latency, high-bandwidth connectivity (a Redfish inventory sketch follows this list).
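To confirm that a newly installed OCP 3.0 adapter is actually visible to the platform, a Redfish client can walk the standard `EthernetInterfaces` collection on each system resource, as in this hedged Python sketch. The BMC address, credentials, and resource IDs are placeholder assumptions, and the exact resource layout varies by firmware.

```python
# Minimal sketch: list the network interfaces a BMC reports for a system via
# the standard Redfish EthernetInterfaces collection -- a quick way to confirm
# a newly installed OCP 3.0 adapter is visible. Address, credentials, and
# resource IDs are placeholder assumptions.
import requests

BMC = "https://192.0.2.70"        # assumed BMC address
AUTH = ("ADMIN", "ADMIN")         # assumed credentials

def get(path):
    r = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

for sys_member in get("/redfish/v1/Systems").get("Members", []):
    nics = get(sys_member["@odata.id"] + "/EthernetInterfaces")
    for nic_member in nics.get("Members", []):
        nic = get(nic_member["@odata.id"])
        print(nic.get("Id"), nic.get("MACAddress"), nic.get("SpeedMbps"))
```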