NVIDIA H100 SXM5 94 GB
 
About processor

The H100 SXM5 94 GB is a server GPU manufactured by NVIDIA and released in March 2023. It carries 94 GB of HBM3 memory. Its key features are 16896 shading units, a 50 MB L2 cache, and a 700 W TDP.
                                                    
Basic

Label Name: NVIDIA
Platform: Server
Launch Date: March 2023
Model Name: H100 SXM5 94 GB
Generation: Server Hopper
Base Clock: 1350 MHz
Boost Clock: 1980 MHz
Bus Interface: PCIe 5.0 x16
Transistors: 80 billion
Tensor Cores: 528
Tensor Cores are specialized processing units designed for deep learning; they deliver much higher training and inference throughput than standard FP32 pipelines. They accelerate workloads such as computer vision, natural language processing, speech recognition, text-to-speech, and personalized recommendation. Their two best-known applications are DLSS (Deep Learning Super Sampling) and the AI Denoiser for noise reduction.
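As an illustration of the kind of workload Tensor Cores accelerate, the sketch below runs a half-precision matrix multiplication with PyTorch; on recent NVIDIA GPUs such an FP16 matmul is routed to the Tensor Cores automatically. This is a minimal sketch assuming PyTorch with CUDA support and arbitrary matrix sizes, not vendor sample code.

    import torch

    # Assumes a CUDA-capable GPU and a PyTorch build with CUDA support.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Half-precision matrices: FP16 matrix multiplication is the classic Tensor Core workload.
    a = torch.randn(4096, 4096, dtype=torch.float16, device=device)
    b = torch.randn(4096, 4096, dtype=torch.float16, device=device)

    c = a @ b  # dispatched to Tensor Cores when the hardware supports them
    print(c.shape, c.dtype)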
                                                            
                        
TMUs: 528
Texture Mapping Units (TMUs) are GPU components that can rotate, scale, and distort bitmap images and then place them as textures onto any plane of a given 3D model, a process called texture mapping.
                                                            
                        
Foundry: TSMC
Process Size: 5 nm
Architecture: Hopper
                        
Memory Specifications

Memory Size: 94 GB
Memory Type: HBM3
Memory Bus: 5120 bit
The memory bus width is the number of bits the video memory can transfer in a single clock cycle. The wider the bus, the more data can be moved per cycle, which makes it one of the key memory parameters. Memory bandwidth is calculated as memory bandwidth = effective memory data rate × memory bus width / 8, so at comparable data rates the bus width determines the bandwidth (see the worked example under Bandwidth below).
                                                            
                        
Memory Clock: 1313 MHz
Bandwidth: 3.36 TB/s
Memory bandwidth is the rate at which data moves between the graphics chip and the video memory, measured in bytes per second: memory bandwidth = effective memory data rate × memory bus width / 8.
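The listed 3.36 TB/s follows from the formula above only when the effective per-pin data rate is used rather than the 1313 MHz memory clock. The quick check below back-solves that rate from the listed figures; the resulting ~5.25 Gbps is an inference from this page's numbers, not an official specification.

    bus_width_bits = 5120    # listed memory bus width
    bandwidth_tb_s = 3.36    # listed memory bandwidth in TB/s

    # bandwidth = data_rate * bus_width / 8  =>  data_rate = bandwidth * 8 / bus_width
    data_rate_gbps = bandwidth_tb_s * 1000 * 8 / bus_width_bits
    print(f"implied per-pin data rate: {data_rate_gbps:.2f} Gbps")  # ~5.25 Gbps

    # Forward check: 5.25 Gbps x 5120 bits / 8 = 3360 GB/s = 3.36 TB/s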
                                                            
                        
Theoretical Performance

Pixel Rate: 47.52 GPixel/s
Pixel fill rate is the number of pixels a GPU can render per second, measured in MPixel/s (millions of pixels per second) or GPixel/s (billions of pixels per second). It is the most commonly used metric for a card's pixel-processing performance.
                                                            
                        
Texture Rate: 1045 GTexel/s
Texture fill rate is the number of texture map elements (texels) a GPU can map to pixels per second.
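Both fill rates follow from the unit counts and the boost clock via the standard estimates pixel rate ≈ ROPs × boost clock and texture rate ≈ TMUs × boost clock. The check below reproduces the listed figures from values elsewhere on this page; it assumes the site uses these standard formulas.

    boost_clock_ghz = 1.98   # 1980 MHz boost clock
    rops = 24                # listed under Miscellaneous
    tmus = 528

    pixel_rate = rops * boost_clock_ghz      # GPixel/s
    texture_rate = tmus * boost_clock_ghz    # GTexel/s

    print(f"pixel rate:   {pixel_rate:.2f} GPixel/s")    # 47.52, matches the listed value
    print(f"texture rate: {texture_rate:.1f} GTexel/s")  # 1045.4, rounds to the listed 1045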
                                                            
                        
FP16 (half): 267.6 TFLOPS
Floating-point throughput is a key measure of GPU compute performance. Half-precision (16-bit) floating point is used where reduced precision is acceptable, such as machine learning; single precision (32-bit) covers common multimedia and graphics work; double precision (64-bit) is required for scientific computing that demands a wide numeric range and high accuracy.
                                                            
                        
FP64 (double): 33.45 TFLOPS
Double-precision (64-bit) floating point is required for scientific computing that demands a wide numeric range and high accuracy.
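The FP16 and FP64 figures are consistent with the common estimate of two floating-point operations (one FMA) per shading unit per clock, scaled by per-format ratios; the ratios implied by the listed numbers are 4:1 for FP16 and 1:2 for FP64 relative to FP32. The sketch below reproduces the listed values from the shading-unit count and boost clock; the ratios are inferred from this page, not taken from an official document.

    shading_units = 16896
    boost_clock_ghz = 1.98   # 1980 MHz

    fp32_tflops = 2 * shading_units * boost_clock_ghz / 1000  # one FMA = 2 FLOPs per clock
    fp16_tflops = fp32_tflops * 4                             # 4:1 FP16:FP32 ratio (inferred)
    fp64_tflops = fp32_tflops / 2                             # 1:2 FP64:FP32 ratio (inferred)

    print(f"FP32 ~ {fp32_tflops:.2f} TFLOPS")  # ~66.91
    print(f"FP16 ~ {fp16_tflops:.1f} TFLOPS")  # ~267.6, matches the listed value
    print(f"FP64 ~ {fp64_tflops:.2f} TFLOPS")  # ~33.45, matches the listed value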
                                                            
                        
Miscellaneous

SM Count: 132
Multiple Streaming Processors (SPs), together with resources such as warp schedulers, registers, and shared memory, form a Streaming Multiprocessor (SM), a GPU's major core. The SM can be considered the heart of the GPU, similar to a CPU core, and registers and shared memory are scarce resources within it.
                                                            
                        
Shading Units: 16896
The most fundamental processing unit is the Streaming Processor (SP), where individual instructions and tasks are executed. GPUs perform parallel computing, meaning many SPs work simultaneously to process tasks.
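The SM count and the shading-unit count are directly related: each Hopper SM contains 128 FP32 shading units (a figure from NVIDIA's published Hopper SM layout, noted here for context), so

    132 SMs × 128 shading units per SM = 16896 shading units

which matches the two values listed on this page.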
                                                            
                        
L1 Cache: 256 KB (per SM)
L2 Cache: 50 MB
TDP: 700 W
                                                            
                        
Vulkan Version: N/A
Vulkan is a cross-platform graphics and compute API from the Khronos Group, offering high performance and low CPU overhead. It lets developers control the GPU directly, reduces rendering overhead, and supports multi-threading on multi-core processors.
                                                            
                        
OpenCL Version: 3.0
OpenGL: N/A
DirectX: N/A
CUDA: 9.0
Power Connectors: 8-pin EPS
Shader Model: N/A
                                                            
                        
ROPs: 24
The Raster Operations Pipelines (ROPs) handle lighting and reflection calculations in games, as well as effects such as anti-aliasing (AA), high resolutions, smoke, and fire. The more demanding a game's anti-aliasing and lighting effects, the higher the load on the ROPs; if they cannot keep up, the frame rate can drop sharply.
                                                            
                        
Suggested PSU: 1100 W