NVIDIA GeForce GTX 690

NVIDIA GeForce GTX 690 in 2025: A Retrospective on the Legend of the Dual-GPU Era

Introduction

Released in 2012, the NVIDIA GeForce GTX 690 became a symbol of an era when multi-chip solutions dominated the battle for maximum performance. Thirteen years later, this model still holds cult status among enthusiasts, but its place in the modern world requires reassessment. In this article, we will explore what the GTX 690 can still surprise us with in 2025 and where its limitations become critical.


Architecture and Key Features

Kepler Architecture: Double Trouble

The GTX 690 is built on the Kepler architecture and is unique in pairing two GK104 GPUs on a single board. Its 28 nm manufacturing process was cutting-edge in 2012. Each chip contains 1536 CUDA cores, for a total of 3072 stream processors.
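As a sanity check, the headline numbers can be derived from the per-chip specs. This is a back-of-the-envelope sketch: the 2-FLOPs-per-core-per-cycle factor is the usual fused-multiply-add assumption, and the spec sheet's 3.193 TFLOPS figure implies a slightly higher effective boost clock than the rated 1019 MHz.

```python
# Rough check of the GTX 690's shader count and peak FP32 throughput.
# Clock and core counts come from the spec sheet later in this article.

CORES_PER_GPU = 1536        # CUDA cores per GK104 chip
NUM_GPUS = 2                # two chips on one board
BOOST_CLOCK_GHZ = 1.019     # rated boost clock in GHz

total_cores = CORES_PER_GPU * NUM_GPUS
# Peak FP32 per chip: cores * 2 FLOPs (fused multiply-add) per cycle * clock
fp32_tflops_per_gpu = CORES_PER_GPU * 2 * BOOST_CLOCK_GHZ / 1000

print(total_cores)                    # 3072
print(round(fp32_tflops_per_gpu, 2))  # ~3.13 at the rated boost clock
```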

Lack of Modern Technologies

The GTX 690 was created before the era of ray tracing and AI algorithms. It does not support RTX, DLSS, or FidelityFX. However, at its time, NVIDIA introduced TXAA (Temporal Anti-Aliasing) and Adaptive VSync—features that enhance anti-aliasing and frame synchronization.

Dual-GPU System Characteristics

The card combines its two chips via internal SLI. However, performance scaling rarely approached 100%, and game support for multi-GPU setups had already become unreliable long before 2025: many modern titles simply ignore the second GPU.
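The practical effect of imperfect scaling can be sketched with a toy model in which the second GPU contributes only a fraction of an extra GPU's worth of frames. The 70% efficiency figure below is a hypothetical example for illustration, not a measured GTX 690 result.

```python
# Toy model of SLI scaling: effective FPS with a second GPU that
# contributes `scaling_efficiency` of an extra GPU's throughput.

def sli_effective_fps(single_gpu_fps: float, scaling_efficiency: float) -> float:
    """FPS when the second GPU adds `scaling_efficiency` of one GPU's worth."""
    return single_gpu_fps * (1 + scaling_efficiency)

print(round(sli_effective_fps(40, 1.0), 1))  # 80.0 -> ideal 100% scaling
print(round(sli_effective_fps(40, 0.7), 1))  # 68.0 -> a more typical 70% scaling
print(round(sli_effective_fps(40, 0.0), 1))  # 40.0 -> game ignores the second GPU
```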


Memory: Potential and Limitations

GDDR5 and 4 GB: Insufficient for Modern Tasks

Each GPU is equipped with 2 GB of GDDR5 on a 256-bit bus, for 4 GB on the board; because SLI mirrors data across the chips, the effective capacity is closer to 2 GB. Total bandwidth is about 384 GB/s (192.3 GB/s per chip). For games of the early 2010s this was sufficient, but in 2025 even 8 GB is considered the minimum: "Cyberpunk 2077" at 1080p, for instance, wants 6-8 GB of VRAM.
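The per-chip bandwidth figure follows from the standard formula (effective clock × bus width / 8); GDDR5 transfers four bits per pin per command-clock cycle, so the 1502 MHz clock works out to 6008 MT/s effective.

```python
# Sanity check of the per-chip memory bandwidth from the spec sheet.

MEMORY_CLOCK_MHZ = 1502     # GDDR5 command clock
GDDR5_MULTIPLIER = 4        # quad data rate: 4 transfers per clock
BUS_WIDTH_BITS = 256        # memory bus width per GPU

effective_mts = MEMORY_CLOCK_MHZ * GDDR5_MULTIPLIER        # 6008 MT/s
bandwidth_gbs = effective_mts * BUS_WIDTH_BITS / 8 / 1000  # GB/s per chip

print(round(bandwidth_gbs, 1))  # 192.3
```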

Buffer Issues

The limited memory capacity and its division between chips create a bottleneck in modern games. High-resolution textures and complex scenes lead to frame rate drops due to buffer overflow.


Gaming Performance: Nostalgia with Caveats

1080p: Acceptable Only for Older Titles

In less demanding games like "CS:GO" or "Dota 2," the GTX 690 achieves 100-150 FPS on medium settings. However, in "Elden Ring" or "Starfield," even on low presets, the frame rate barely reaches 30 FPS.

1440p and 4K: Not for the Faint of Heart

The card is not designed for resolutions above Full HD. Attempting to run "Hogwarts Legacy" at 1440p results in 15-20 FPS, while 4K turns into a slideshow.

Ray Tracing: Not Available

The absence of hardware support for RT cores makes ray tracing impossible, even through third-party mods.


Professional Tasks: Time Has Taken Its Toll

Video Editing and Rendering

Thanks to CUDA, the GTX 690 can handle basic editing in DaVinci Resolve or Premiere Pro, but rendering 4K video takes 3-4 times longer compared to modern GPUs (e.g., RTX 4060).

3D Modeling

In Blender or Maya, the card shows modest results. Projects with high-polygon models (>1 million polygons) cause lag.

Scientific Calculations

Support for CUDA and OpenCL allows the GTX 690 to be used for simple simulations, but energy efficiency is extremely low. For comparison, an RTX 4090 outperforms a single GK104 chip in peak FP32 throughput by roughly 25 times (about 13 times against both chips combined).
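The gap can be checked from the published peak-FP32 numbers; the ~82.6 TFLOPS RTX 4090 figure is NVIDIA's official specification, not something measured here.

```python
# Rough FP32 throughput comparison between one GK104 chip and an RTX 4090.

GTX_690_PER_CHIP_TFLOPS = 3.193   # from the spec sheet below
RTX_4090_TFLOPS = 82.6            # NVIDIA's published peak FP32 figure

ratio = RTX_4090_TFLOPS / GTX_690_PER_CHIP_TFLOPS
print(round(ratio))  # ~26x per chip; ~13x against both chips combined
```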


Power Consumption and Heat Output

TDP 300 W: Prepare for Electric Bills

The GTX 690 requires powerful cooling and a high-quality PSU. Its TDP is 300 W, but peak load can reach 350 W.
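To put the "electric bills" point in numbers, here is a hypothetical running-cost estimate; the $0.15/kWh rate and 4 hours/day of full load are illustrative assumptions, not figures from this article.

```python
# Hypothetical monthly energy cost of a GTX 690 running at its 300 W TDP.

TDP_WATTS = 300
HOURS_PER_DAY = 4       # assumed daily gaming time
PRICE_PER_KWH = 0.15    # assumed electricity rate, USD

kwh_per_month = TDP_WATTS / 1000 * HOURS_PER_DAY * 30
cost_per_month = kwh_per_month * PRICE_PER_KWH

print(round(kwh_per_month, 1))   # 36.0 kWh per month
print(round(cost_per_month, 2))  # about $5.40 for the GPU alone
```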

Cooling Recommendations

- A case with at least 3 fans: 2 for intake and 1 for exhaust.

- Replacing thermal paste is essential for units that haven't been serviced for years.

- An ideal room temperature is below 25°C. At 30°C, the GPU can heat up to 85°C.


Comparison with Competitors

AMD Radeon HD 7990: The Main Rival

The dual-chip HD 7990 (2x Tahiti XT) competed with the GTX 690 in 2013. Today, both cards are equally outdated, but the AMD solution suffers more from frame drops due to less efficient drivers.

Modern Counterparts: RTX 3050

Even the budget RTX 3050 (8 GB GDDR6) comfortably outperforms the GTX 690 while consuming only 130 W.


Practical Advice

Power Supply: Minimum 600 W

Even if your system is modest, choose a PSU with some margin. Recommended models include Corsair CX650M or Be Quiet! Pure Power 12 M 600W.

Platform Compatibility

- PCIe 3.0 x16: the card works in PCIe 4.0/5.0 slots, but with no speed increase.

- Windows 10/11: NVIDIA ceased support for the GTX 600 series in 2021. The last stable driver version is 472.12.

Driver Nuances

- Modern games may not run due to outdated API (DirectX 12 Ultimate is not supported).

- The enthusiast community releases modified drivers, but their stability is questionable.


Pros and Cons

Pros:

- Historical Value: Iconic design with lighting and aluminum casing.

- Uniqueness: One of the last dual-GPU solutions from NVIDIA.

- SLI Support for compatibility with other Kepler cards.

Cons:

- Outdated Architecture: No RTX, no DLSS, low memory capacity.

- High Power Consumption.

- Limited game and driver support.


Final Conclusion: Who Is the GTX 690 For in 2025?

This graphics card is suitable for:

1. Collectors and retro enthusiasts building PCs in the 2010s style.

2. Owners of older systems where upgrading is impossible (e.g., LGA 1155 platform).

3. Experimenters willing to struggle with drivers to run classics like "Crysis 3" on ultra settings.

For modern gaming and professional tasks, the GTX 690 is not suitable. Its domain is nostalgia and niche applications. If you're looking for power around $150, consider a used GTX 1660 Super or RX 6600.


Postscript

The NVIDIA GeForce GTX 690 stands as a monument to an era when engineers competed in multi-chip solutions. Today, it reminds us how quickly technology evolves, allowing us to delve into the past where every frame was hard-won. But for real work and gaming in 2025, choose something modern—like the RTX 4060 or RX 7600.

Basic

- Brand: NVIDIA
- Platform: Desktop
- Launch Date: May 2012
- Model Name: GeForce GTX 690
- Generation: GeForce 600
- Base Clock: 915 MHz
- Boost Clock: 1019 MHz
- Bus Interface: PCIe 3.0 x16
- Transistors: 3,540 million (per GPU)
- TMUs: 128 (per GPU)
- Foundry: TSMC
- Process Size: 28 nm
- Architecture: Kepler

Memory Specifications

- Memory Size: 2 GB per GPU (4 GB total)
- Memory Type: GDDR5
- Memory Bus: 256-bit per GPU
- Memory Clock: 1502 MHz
- Bandwidth: 192.3 GB/s per GPU

Theoretical Performance (per GPU)

- Pixel Rate: 32.61 GPixel/s
- Texture Rate: 130.4 GTexel/s
- FP64 (double): 130.4 GFLOPS
- FP32 (float): 3.193 TFLOPS
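The fill-rate figures follow directly from the ROP and TMU counts elsewhere in the spec sheet multiplied by the boost clock; a quick cross-check (per-GPU figures, assuming the rated 1019 MHz boost):

```python
# Cross-check of the pixel and texture fill rates from unit counts.

BOOST_CLOCK_GHZ = 1.019
ROPS = 32    # raster output units per GPU
TMUS = 128   # texture mapping units per GPU

pixel_rate_gpixels = ROPS * BOOST_CLOCK_GHZ    # pixels written per second
texture_rate_gtexels = TMUS * BOOST_CLOCK_GHZ  # texels sampled per second

print(round(pixel_rate_gpixels, 1))    # 32.6 GPixel/s
print(round(texture_rate_gtexels, 1))  # 130.4 GTexel/s
```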

Miscellaneous

- Shading Units: 1536 (per GPU)
- L1 Cache: 16 KB (per SMX)
- L2 Cache: 512 KB (per GPU)
- TDP: 300 W
- Vulkan Version: 1.1
- OpenCL Version: 3.0
- OpenGL: 4.6
- DirectX: 12 (feature level 11_0)
- CUDA Compute Capability: 3.0
- Power Connectors: 2x 8-pin
- Shader Model: 5.1
- ROPs: 32 (per GPU)
- Suggested PSU: 700 W

Benchmarks

- FP32 (float): 3.193 TFLOPS
- Vulkan score: 17454
- OpenCL score: 16268
