NVIDIA GeForce GTX 980M: Review of a Legend in Mobile Gaming in 2025

Relevance, Performance, and Practical Tips for Users


Introduction

The NVIDIA GeForce GTX 980M is one of the most significant mobile graphics cards in the history of gaming laptops. Released in 2014, it long remained the benchmark for performance. But how does it fare against modern technologies in 2025? Let's explore who might still find this GPU useful today and what tasks it can handle.


1. Architecture and Key Features

Maxwell Architecture: A Foundation of Reliability

The GTX 980M is built on the Maxwell architecture (GM204 chip) using a 28nm manufacturing process, which struck a good balance between performance and energy efficiency for its time. By 2025, however, 28nm is thoroughly outdated (current mobile GPUs are fabricated at 4-5nm), and the gap shows in efficiency and clock headroom.

Lack of RTX and DLSS

The GTX 980M supports neither hardware ray tracing (RTX) nor DLSS, the headline NVIDIA technologies introduced with the Turing (2018) and Ampere (2020) generations. Both require dedicated hardware blocks (RT and Tensor Cores) that Maxwell lacks. Spatial upscalers such as AMD's FSR 1.0, by contrast, are shader-based and vendor-agnostic, so they can generally run on Maxwell in games that offer them, though with more modest image quality than DLSS.

Pros of the Architecture

- Supports DirectX 12 at Feature Level 12_1.

- GPU Boost 2.0 for dynamic clock boosting and Optimus for automatic power savings.


2. Memory: Speed and Capacity

GDDR5: A Genre Classic

The card carries 8GB of GDDR5 memory on a 256-bit bus. With a 1253 MHz memory clock (5 GHz effective), bandwidth comes to 160.4 GB/s. For comparison, modern mobile GPUs use GDDR6 (up to ~600 GB/s) or HBM2 (up to ~1 TB/s).
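The quoted bandwidth follows directly from the memory clock and bus width. A minimal Python sketch of the arithmetic (GDDR5 transfers four bits per pin per clock, so a 1253 MHz clock yields a 5012 MT/s effective rate):

```python
# Memory bandwidth = effective transfer rate x bus width / 8 bits per byte.
def gddr5_bandwidth_gbs(memory_clock_mhz: float, bus_width_bits: int) -> float:
    effective_mts = memory_clock_mhz * 4  # GDDR5 is quad-pumped
    return effective_mts * bus_width_bits / 8 / 1000  # MB/s -> GB/s

print(f"{gddr5_bandwidth_gbs(1253, 256):.1f} GB/s")  # ~160.4 GB/s, matching the spec sheet
```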

Impact on Performance

The 8GB capacity is sufficient for gaming at moderate settings in 1080p resolution, but at 1440p and 4K, performance dips may occur due to limited bandwidth. In professional tasks (e.g., 3D scene rendering), memory speed limitations can become a bottleneck.


3. Gaming Performance

1080p: Comfortable Gaming

In games from the 2020s, the GTX 980M exhibits modest results:

- Cyberpunk 2077: 25-30 FPS on low settings.

- Apex Legends: 45-55 FPS on medium.

- Fortnite: 60 FPS on medium (without enabling Nanite or Lumen).

1440p and 4K: Not Recommended

Due to memory and computational power limitations, resolutions above 1080p become problematic. For example, Hogwarts Legacy runs at just 15-20 FPS at 1440p even at minimum settings.

Ray Tracing: Not Available

The absence of RT cores makes RTX effects impossible to use. Software alternatives such as screen-space reflections exist, but they are less realistic.


4. Professional Tasks

CUDA Cores: A Foundation for Work

With 1,536 CUDA cores, the GTX 980M can handle basic tasks:

- Video Editing: Rendering in Adobe Premiere Pro or DaVinci Resolve on 1080p footage proceeds without lag, but a 4K timeline may stutter.

- 3D Modeling: Blender and Autodesk Maya run, but complex scenes require optimization.

- Scientific Computation: CUDA and OpenCL support allows the card to be used in machine learning (only for educational projects).
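For a back-of-envelope sense of the compute budget, peak FP32 throughput is shader cores x 2 FLOPs (one fused multiply-add) x clock. The sketch below uses the 1127 MHz boost clock; published figures for this card hover around 3.4 TFLOPS depending on which clock is assumed:

```python
# Peak FP32 = CUDA cores x 2 FLOPs (one FMA per cycle) x clock.
def peak_fp32_tflops(cuda_cores: int, clock_mhz: float) -> float:
    # cores * 2 * (clock_mhz * 1e6) Hz / 1e12 simplifies to:
    return cuda_cores * 2 * clock_mhz / 1e6

print(f"{peak_fp32_tflops(1536, 1127):.2f} TFLOPS")  # ~3.46 at boost clock
```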

Limitations

No support for modern APIs such as Vulkan Ray Tracing or DirectStorage GPU decompression. For serious professional work, cards from the GeForce RTX 4060/4070 or AMD Radeon RX 7600M class and up are a better choice.


5. Power Consumption and Heat Dissipation

TDP 100W: Cooling Requirements

Maximum thermal output is 100W. In modern compact laptops, this may cause overheating. Recommendations:

- Regular cleaning of the cooling system.

- Use of cooling pads.

- Replacement of thermal paste every 1-2 years.

Chassis

Ideal candidates are bulky gaming laptops with robust ventilation (e.g., older MSI GT Series or Alienware 17 models). Ultrabooks are unsuitable due to inadequate cooling.


6. Comparison with Competitors

AMD Radeon R9 M395X (2015)

- Comparable performance, but higher power consumption (TDP 125W).

- Better performance with Vulkan games, worse with DX12.

NVIDIA RTX 2050 Mobile (2022)

- 30-40% faster in games.

- Support for DLSS and RTX.

- TDP only 45W.

Conclusion

The GTX 980M trails even modern entry-level models, but it can still make sense as an inexpensive pick on the secondary market.


7. Practical Tips

Power Supply

Recommended power supply for the laptop is at least 180W. Avoid cheap alternatives for stable operation.

Compatibility

- Interface: MXM-B 3.0 module (electrically PCIe 3.0 x16); upgrades are limited to MXM modules supported by the specific laptop.

- Drivers: the GTX 980M is on NVIDIA's legacy track and no longer receives new feature drivers; use the latest version that still lists the card (for example, the 527.56 release).

Optimization

- In games, lower shadow and texture settings.

- Disable anti-aliasing via the NVIDIA Control Panel.


8. Advantages and Disadvantages

Advantages

- Reliability and time-tested architecture.

- Sufficient performance for older and less demanding games.

- CUDA support for basic professional tasks.

Disadvantages

- No ray tracing or DLSS.

- High power consumption for a mobile GPU.

- Outdated drivers.


9. Final Conclusion: Who is the GTX 980M Suitable For?

This graphics card is suitable for:

1. Budget gamers who are willing to play at medium settings in Full HD.

2. Owners of old laptops who wish to extend their lifespan.

3. Students learning the basics of 3D modeling or editing.

When You Shouldn't Buy It

If you need modern gaming in 4K, ray tracing, or work with AI tools—look for GPUs from 2023-2025.


Conclusion

The NVIDIA GeForce GTX 980M is a legend that can still be useful in 2025, but only in narrow scenarios. As a temporary solution or a nod to nostalgia—yes; as a foundation for a future gaming PC—no. Choose wisely!

Basic

Vendor: NVIDIA
Platform: Mobile
Launch Date: October 2014
Model Name: GeForce GTX 980M
Generation: GeForce 900M
Base Clock: 1038 MHz
Boost Clock: 1127 MHz
Bus Interface: MXM-B (3.0)
Transistors: 5,200 million
TMUs: 96
Foundry: TSMC
Process Size: 28 nm
Architecture: Maxwell 2.0

Memory Specifications

Memory Size: 8GB
Memory Type: GDDR5
Memory Bus: 256-bit
Memory Clock: 1253 MHz
Bandwidth: 160.4 GB/s

Theoretical Performance

Pixel Rate: 72.13 GPixel/s
Texture Rate: 108.2 GTexel/s
FP64 (double): 108.2 GFLOPS
FP32 (float): 3.393 TFLOPS
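The fill-rate figures above can be reproduced from the unit counts and the boost clock. A quick sketch (pixel rate = ROPs x clock, texture rate = TMUs x clock):

```python
# Fill rates at the 1127 MHz (1.127 GHz) boost clock.
clock_ghz = 1.127
rops, tmus = 64, 96  # unit counts from the spec sheet

print(f"Pixel rate:   {rops * clock_ghz:.2f} GPixel/s")  # ~72.13
print(f"Texture rate: {tmus * clock_ghz:.1f} GTexel/s")  # ~108.2
```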

Miscellaneous

Shading Units: 1536
L1 Cache: 48 KB (per SMM)
L2 Cache: 2MB
TDP: 100W
Vulkan Version: 1.3
OpenCL Version: 3.0
OpenGL: 4.6
DirectX: 12 (12_1)
CUDA Compute Capability: 5.2
Power Connectors: None
Shader Model: 6.7 (6.4)
ROPs: 64

Benchmarks

FP32 (float): 3.393 TFLOPS
3DMark Time Spy: 2888
Blender: 276.39
Vulkan: 26002
OpenCL: 23366
Hashcat: 143310 H/s
