
NVIDIA Tesla S2070/S2090 GPU Computing Server

Based on the NVIDIA CUDA™ GPU architecture code-named “Fermi”, the Tesla™ S2070/S2090 1U computing systems (successors to the earlier Tesla C1060, C2075, and S1070 products) are designed from the ground up for high-performance computing. They support “must have” features for the technical and enterprise computing space, including ECC memory for uncompromised accuracy and scalability, and deliver 8X the double-precision performance of Tesla 10-series GPU computing products. Compared to the latest quad-core CPUs, Tesla 20-series computing systems deliver equivalent performance at 1/20th the power consumption and 1/10th the cost.
With four latest-generation Tesla computing processors in a standard 1U chassis, the Tesla S2070/S2090 computing systems scale to solve the world’s most important computing challenges more quickly and accurately.

Feeding the HPC Industry’s Relentless Demand for Performance.
Keeps pace with the increasing demands of the toughest computing challenges, including drug research, oil and gas exploration, and computational finance.

Many-core Architecture Delivers Optimum Scaling across HPC Applications.
Parallel performance from up to 2,048 cores (512 per processor) capable of concurrently executing thousands of computing threads, and a scalable architecture that meets the computational demands of applications whose complexity has outstripped the CPU’s ability to solve them.


High Efficiency Computing Platform for Energy-conscious Organizations.
Higher performance and higher density for solving complex problems with fewer resources.
NVIDIA CUDA™ Technology Unlocks the Power of Tesla Many-core Computing Products.
The only C language environment that unlocks the many-core processing power of GPUs to solve the world’s most computationally intensive challenges.


Features and Benefits
Four NVIDIA Tesla M2070 or M2090 Processors in a High Density 1U System
Delivers over 2.4 teraflops of double-precision performance in a 1U rack-mount system for unmatched performance in high-density rack systems.
Massively-Parallel Many-Core Architecture
Up to 512 computing cores per processor that can execute thousands of concurrent threads.
Scales to Multi-GPU Computing
Scale to thousands of processor cores to solve large-scale problems by splitting the problem across multiple GPUs.
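Splitting a problem across the system’s GPUs amounts to selecting each device in turn and giving it a slice of the data. The following is a minimal, hypothetical CUDA C sketch of that pattern; the kernel, array size, and even division of work are illustrative assumptions, not part of the product documentation.

```cuda
// Hypothetical sketch: partitioning one large array across every Tesla
// GPU in the system. Assumes n divides evenly by the device count.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void square(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= x[i];           // one thread per element
}

int main(void) {
    int devices = 0;
    cudaGetDeviceCount(&devices);      // 4 on a Tesla S2070/S2090

    const int n = 1 << 22;
    int per_gpu = n / devices;
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 2.0f;

    // Give each GPU its own slice of the problem.
    for (int d = 0; d < devices; ++d) {
        cudaSetDevice(d);
        float *dx;
        size_t bytes = per_gpu * sizeof(float);
        cudaMalloc(&dx, bytes);
        cudaMemcpy(dx, h + d * per_gpu, bytes, cudaMemcpyHostToDevice);
        square<<<(per_gpu + 255) / 256, 256>>>(dx, per_gpu);
        cudaMemcpy(h + d * per_gpu, dx, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dx);
    }
    printf("h[0] = %f\n", h[0]);
    free(h);
    return 0;
}
```

A production code would typically drive the devices from separate host threads (or use asynchronous copies) so the four GPUs work concurrently rather than in sequence.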
Program in NVIDIA CUDA™: C for GPU
Programmable using CUDA, the world’s leading application development platform for many-core solutions.
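CUDA’s “C for GPU” model extends standard C with kernels that run one thread per data element. As a minimal, hypothetical illustration (the kernel name and sizes are arbitrary), a complete vector-addition program looks like this:

```cuda
// Minimal CUDA C example: element-wise vector addition.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```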
IEEE 754 Floating Point
Ensures your results meet industry standard precision including optional features to ensure accuracy.
Double-Precision Floating Point Support
Meets the precision requirements of your most demanding applications with IEEE 64-bit precision.
Asynchronous Data Transfer
Turbocharges system performance, because data transfers can execute even while the computing cores are busy.
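Overlapping transfers with computation is expressed in CUDA through streams and asynchronous copies. The sketch below is a hedged illustration of the standard pattern (the kernel, chunk count, and sizes are assumptions); it requires page-locked host memory, allocated with `cudaMallocHost`, for the copies to be truly asynchronous.

```cuda
// Hypothetical sketch: overlapping host<->device transfers with kernel
// execution using CUDA streams.
#include <cuda_runtime.h>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main(void) {
    const int n = 1 << 20, chunks = 4, chunk = n / chunks;
    float *h_x, *d_x;
    cudaMallocHost(&h_x, n * sizeof(float));   // pinned host buffer
    cudaMalloc(&d_x, n * sizeof(float));
    for (int i = 0; i < n; ++i) h_x[i] = 1.0f;

    cudaStream_t s[chunks];
    for (int c = 0; c < chunks; ++c) cudaStreamCreate(&s[c]);

    // Each chunk's copy-in, kernel, and copy-out are queued in its own
    // stream, so one chunk's kernel can run while the next is copying.
    for (int c = 0; c < chunks; ++c) {
        float *hp = h_x + c * chunk, *dp = d_x + c * chunk;
        size_t bytes = chunk * sizeof(float);
        cudaMemcpyAsync(dp, hp, bytes, cudaMemcpyHostToDevice, s[c]);
        scale<<<(chunk + 255) / 256, 256, 0, s[c]>>>(dp, chunk);
        cudaMemcpyAsync(hp, dp, bytes, cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();                   // wait for all streams

    for (int c = 0; c < chunks; ++c) cudaStreamDestroy(s[c]);
    cudaFreeHost(h_x); cudaFree(d_x);
    return 0;
}
```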
24 GB Ultra-fast Memory
Enables larger datasets to be stored locally with 6 GB dedicated for each processor to maximize performance and minimize data movement around the system.
High-Speed, PCI-Express 2.0 Data Transfer
With low latency and high bandwidth, computing applications benefit from the highest data transfer rate possible through standard PCI-Express architecture.
Single-screw Rail Mounting
Single-screw rail design is quick to install like a tool-less design, but with the extra security and rigidity from a single screw to secure the rail to the rack.
System Monitoring Features
Easy management and monitoring post-installation helps your IT staff manage systems with minimal effort. Remote capabilities and status lights on the front and rear of the unit ensure your staff can see the status whether they are on the other side of the rack, or the other side of the world.
Dual PCI-Express 2.0 Cable Connections
Maximizes bandwidth between the host processor and the Tesla processors with up to 12.8 GB/s transfer rates (up to 6.4 GB/s per PCI Express connection)
Small-form-factor (SFF) host adapter card
The low power host adapter card enables Tesla systems to work with virtually any PCIe compliant host system with an open PCI Express slot (x8 or x16).


# of Tesla GPUs
4
# of Streaming Processor Cores
2048 (512 per processor)
Floating Point Precision
IEEE 754 single & double
Total Dedicated Memory
24 GB
System Interface
PCIe x16 or x8
Software Development Tools
C-based CUDA Toolkit
