
Comparison: NVIDIA GeForce GTX 980M vs NVIDIA GeForce GTX 1050

WINNER: NVIDIA GeForce GTX 980M
Rating: 23 points

NVIDIA GeForce GTX 1050
Rating: 16 points
Grade

Category              NVIDIA GeForce GTX 980M   NVIDIA GeForce GTX 1050
Performance           5                         6
Memory                3                         3
General information   7                         7
Functions             9                         9
Benchmark tests       2                         2
Ports                 0                         7

Top specs and features

Passmark score
NVIDIA GeForce GTX 980M: 6934 vs NVIDIA GeForce GTX 1050: 4929

3DMark Cloud Gate GPU benchmark score
NVIDIA GeForce GTX 980M: 62082 vs NVIDIA GeForce GTX 1050: 38901

3DMark Fire Strike score
NVIDIA GeForce GTX 980M: 7939 vs NVIDIA GeForce GTX 1050: 5820

3DMark Fire Strike Graphics test score
NVIDIA GeForce GTX 980M: 9213 vs NVIDIA GeForce GTX 1050: 6461

3DMark 11 Performance GPU benchmark score
NVIDIA GeForce GTX 980M: 11911 vs NVIDIA GeForce GTX 1050: 8148

Description

The NVIDIA GeForce GTX 980M video card is based on the Maxwell 2.0 architecture, while the NVIDIA GeForce GTX 1050 uses Pascal. The former has 5200 million transistors; the latter, 3300 million. The GTX 980M is built on a 28 nm process versus 14 nm for the GTX 1050.

The base clock speed of the first video card is 1038 MHz versus 1354 MHz for the second.

Moving on to memory: the GTX 980M has 8 GB of VRAM installed, while the GTX 1050 has 2 GB. The bandwidth of the first video card is 160.4 GB/s versus 112.1 GB/s for the second.
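The bandwidth figures follow directly from the memory specs listed later in this comparison: bandwidth = effective memory clock × bus width / 8. A minimal sketch, using the effective clocks and bus widths from this page:

```python
def memory_bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    # MHz * bits / 8 bits-per-byte -> MB/s, then / 1000 -> GB/s
    return effective_clock_mhz * bus_width_bits / 8 / 1000

gtx_980m = memory_bandwidth_gbs(5012, 256)  # effective clock and bus width from this page
gtx_1050 = memory_bandwidth_gbs(7008, 128)
print(round(gtx_980m, 1), round(gtx_1050, 1))  # → 160.4 112.1
```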

The GTX 980M delivers 3.53 TFLOPS of compute performance; the GTX 1050, 1.81 TFLOPS.
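These figures are close to the standard theoretical estimate for FP32 throughput: shader units × 2 operations per cycle (one fused multiply-add) × clock speed. A sketch using the shader counts and boost clocks from this page; the small mismatch with the listed values likely comes down to which clock the source assumed:

```python
def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    # shaders * 2 ops (FMA) * MHz -> MFLOPS; / 1e6 -> TFLOPS
    return shaders * 2 * clock_mhz / 1e6

print(round(fp32_tflops(1536, 1127), 2))  # GTX 980M: ~3.46 (listed: 3.53)
print(round(fp32_tflops(640, 1455), 2))   # GTX 1050: ~1.86 (listed: 1.81)
```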

On to the benchmarks. In Passmark, the GTX 980M scored 6934 points and the GTX 1050 scored 4929. In the 3DMark Fire Strike Graphics test, the first model scored 9213 points; the second, 6461.

As for interfaces: the first video card connects via MXM-B (3.0), the second via PCIe 3.0 x16. Both cards support DirectX 12.1.

Regarding power and cooling, no TDP data is available for the GTX 980M, versus a 75 W rating for the GTX 1050.

Why NVIDIA GeForce GTX 980M is better than NVIDIA GeForce GTX 1050

  • Passmark score: 6934 vs 4929 (41% higher)
  • 3DMark Cloud Gate GPU benchmark score: 62082 vs 38901 (60% higher)
  • 3DMark Fire Strike score: 7939 vs 5820 (36% higher)
  • 3DMark Fire Strike Graphics test score: 9213 vs 6461 (43% higher)
  • 3DMark 11 Performance GPU benchmark score: 11911 vs 8148 (46% higher)
  • Unigine Heaven 3.0 test score: 106 vs 83 (28% higher)
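The percentages in the list above are simply the relative difference between the two scores, rounded to the nearest whole percent. A sketch reproducing them:

```python
def advantage_pct(winner: float, loser: float) -> int:
    # Relative advantage of the higher score, as a rounded percentage.
    return round((winner / loser - 1) * 100)

scores = {
    "Passmark": (6934, 4929),
    "3DMark Cloud Gate": (62082, 38901),
    "3DMark Fire Strike": (7939, 5820),
    "Fire Strike Graphics": (9213, 6461),
    "3DMark 11 Performance": (11911, 8148),
    "Unigine Heaven 3.0": (106, 83),
}
for name, (a, b) in scores.items():
    print(f"{name}: {advantage_pct(a, b)}% higher")
```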

NVIDIA GeForce GTX 980M vs NVIDIA GeForce GTX 1050: highlights

Performance

GPU base clock speed
The clock speed at which the GPU runs; higher is generally better.
NVIDIA GeForce GTX 980M: 1038 MHz vs NVIDIA GeForce GTX 1050: 1354 MHz (average: 1124.9 MHz; max: 2457 MHz)

GPU memory speed
An important factor in calculating memory bandwidth.
NVIDIA GeForce GTX 980M: 1253 MHz vs NVIDIA GeForce GTX 1050: 1752 MHz (average: 1468 MHz; max: 16000 MHz)

FLOPS
FLOPS measures the raw processing power of the GPU.
NVIDIA GeForce GTX 980M: 3.53 TFLOPS vs NVIDIA GeForce GTX 1050: 1.81 TFLOPS (average: 53 TFLOPS; max: 1142.32 TFLOPS)

RAM
Video memory (VRAM) buffers textures, shaders, geometry, and other graphics resources; more VRAM lets the card handle more complex, higher-resolution scenes.
NVIDIA GeForce GTX 980M: 8 GB vs NVIDIA GeForce GTX 1050: 2 GB (average: 4.6 GB; max: 128 GB)

Number of PCIe lanes
Determines the bandwidth available between the video card and the rest of the system.
NVIDIA GeForce GTX 980M: 16 vs NVIDIA GeForce GTX 1050: 16 (max: 16)

L1 cache size
Temporarily stores the most frequently used data and instructions, reducing delays in graphics operations.
NVIDIA GeForce GTX 980M: 48 KB vs NVIDIA GeForce GTX 1050: 48 KB

Pixel rendering speed (fill rate)
The higher the pixel fill rate, the smoother the display of graphics and the movement of objects on screen.
NVIDIA GeForce GTX 980M: 72 GPixel/s vs NVIDIA GeForce GTX 1050: 47 GPixel/s (average: 94.3 GPixel/s; max: 563 GPixel/s)

TMUs
Texture mapping units apply textures to the surfaces of 3D objects. The more TMUs, the more textures can be processed simultaneously.
NVIDIA GeForce GTX 980M: 96 vs NVIDIA GeForce GTX 1050: 40 (average: 140.1; max: 880)

ROPs
Render output units handle the final processing of pixels: blending colors, applying transparency, and writing to the framebuffer. More ROPs generally means faster rendering.
NVIDIA GeForce GTX 980M: 64 vs NVIDIA GeForce GTX 1050: 32 (average: 56.8; max: 256)

Number of shader units
The number of parallel processors performing computational work in the GPU.
NVIDIA GeForce GTX 980M: 1536 vs NVIDIA GeForce GTX 1050: 640 (max: 17408)

L2 cache size
A larger L2 cache lets the card keep more data and instructions close to the GPU, speeding up graphics operations.
NVIDIA GeForce GTX 980M: 2000 KB vs NVIDIA GeForce GTX 1050: 1024 KB

GPU turbo (boost clock)
When thermal and power headroom allow, the GPU can raise its clock speed above the base value.
NVIDIA GeForce GTX 980M: 1127 MHz vs NVIDIA GeForce GTX 1050: 1455 MHz (average: 1514 MHz; max: 2903 MHz)

Texture fill rate
The number of textured pixels the card can render per second.
NVIDIA GeForce GTX 980M: 99.6 GTexels/s vs NVIDIA GeForce GTX 1050: 72.86 GTexels/s (average: 145.4 GTexels/s; max: 756.8 GTexels/s)

Architecture name
NVIDIA GeForce GTX 980M: Maxwell 2.0 vs NVIDIA GeForce GTX 1050: Pascal

GPU name
NVIDIA GeForce GTX 980M: GM204 vs NVIDIA GeForce GTX 1050: GP107
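The fill-rate figures above can be roughly reproduced from the unit counts and clocks also listed on this page: pixel fill rate ≈ ROPs × clock, texture fill rate ≈ TMUs × clock. For the GTX 980M, the listed pixel rate matches the boost clock and the texture rate matches the base clock; the GTX 1050's texture figure appears to assume a higher boost clock than the one listed, so it is omitted from this sketch:

```python
def fill_rate(units: int, clock_mhz: float) -> float:
    # units * MHz -> millions of ops/s; / 1000 -> billions (Gops/s)
    return units * clock_mhz / 1000

# GTX 980M figures, using unit counts and clocks from this page
pixel_rate = fill_rate(64, 1127)    # ROPs * boost clock ≈ 72.1 GPixel/s
texture_rate = fill_rate(96, 1038)  # TMUs * base clock  ≈ 99.6 GTexel/s
print(round(pixel_rate, 1), round(texture_rate, 1))  # → 72.1 99.6
```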
Memory

Memory bandwidth
The rate at which the card can read or write data to its memory.
NVIDIA GeForce GTX 980M: 160.4 GB/s vs NVIDIA GeForce GTX 1050: 112.1 GB/s (average: 257.8 GB/s; max: 2656 GB/s)

Effective memory speed
Calculated from the memory clock and the number of transfers per cycle; a higher effective clock improves application performance.
NVIDIA GeForce GTX 980M: 5012 MHz vs NVIDIA GeForce GTX 1050: 7008 MHz (average: 6984.5 MHz; max: 19500 MHz)

RAM
Video memory (VRAM) buffers textures, shaders, geometry, and other graphics resources needed to display images on screen.
NVIDIA GeForce GTX 980M: 8 GB vs NVIDIA GeForce GTX 1050: 2 GB (average: 4.6 GB; max: 128 GB)

GDDR memory version
Newer versions of GDDR memory provide higher data transfer rates.
NVIDIA GeForce GTX 980M: GDDR5 vs NVIDIA GeForce GTX 1050: GDDR5 (average: 4.9; max: 6)

Memory bus width
A wider memory bus transfers more data per cycle, directly affecting memory performance and the card's overall performance.
NVIDIA GeForce GTX 980M: 256 bit vs NVIDIA GeForce GTX 1050: 128 bit (average: 283.9 bit; max: 8192 bit)
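The effective memory speeds are consistent with GDDR5 transferring four data words per memory-clock cycle (double data rate on a doubled I/O clock), i.e. effective clock = 4 × memory clock. A quick check against the memory clocks listed in the Performance section:

```python
# GDDR5 performs 4 transfers per memory-clock cycle.
GDDR5_MULTIPLIER = 4

def effective_clock_mhz(memory_clock_mhz: int) -> int:
    return memory_clock_mhz * GDDR5_MULTIPLIER

print(effective_clock_mhz(1253))  # GTX 980M → 5012 MHz
print(effective_clock_mhz(1752))  # GTX 1050 → 7008 MHz
```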
General information

Die size
The physical size of the chip holding the transistors and other components. A larger die can provide more compute resources (such as CUDA cores), but takes up more space on the card.
NVIDIA GeForce GTX 980M: 398 mm² vs NVIDIA GeForce GTX 1050: 132 mm² (average: 356.7 mm²; max: 826 mm²)

Generation
A newer generation usually brings improved architecture, higher performance, better power efficiency, and new features.
NVIDIA GeForce GTX 980M: GeForce 900 vs NVIDIA GeForce GTX 1050: GeForce 10

Manufacturer
NVIDIA GeForce GTX 980M: TSMC vs NVIDIA GeForce GTX 1050: Samsung

Year of release
NVIDIA GeForce GTX 980M: 2014 vs NVIDIA GeForce GTX 1050: 2016 (max: 2023)

Process node
A smaller semiconductor process indicates a newer generation of chip.
NVIDIA GeForce GTX 980M: 28 nm vs NVIDIA GeForce GTX 1050: 14 nm (average: 34.7 nm)

Number of transistors
A higher transistor count generally indicates more processing power.
NVIDIA GeForce GTX 980M: 5200 million vs NVIDIA GeForce GTX 1050: 3300 million (average: 7150 million; max: 80000 million)

PCIe version
Newer PCIe versions offer higher bandwidth between the card and the rest of the system.
NVIDIA GeForce GTX 980M: 3 vs NVIDIA GeForce GTX 1050: 3 (average: 3; max: 4)

Purpose
NVIDIA GeForce GTX 980M: Laptop vs NVIDIA GeForce GTX 1050: Desktop
Functions

OpenGL version
OpenGL exposes the card's hardware capabilities for rendering 2D and 3D graphics; newer versions add effects, optimizations, and fixes.
NVIDIA GeForce GTX 980M: 4.6 vs NVIDIA GeForce GTX 1050: 4.6 (max: 4.6)

DirectX version
Used by demanding games for improved graphics.
NVIDIA GeForce GTX 980M: 12.1 vs NVIDIA GeForce GTX 1050: 12.1 (average: 11.4; max: 12.2)

Shader model version
A higher shader model version exposes more programmable graphics features.
NVIDIA GeForce GTX 980M: 6.4 vs NVIDIA GeForce GTX 1050: 6.4 (average: 5.9; max: 6.7)

Vulkan version
A higher Vulkan version typically means a larger set of features and optimizations available to developers.
NVIDIA GeForce GTX 980M: 1.3 vs NVIDIA GeForce GTX 1050: 1.3 (max: 1.3)

CUDA compute capability
Allows the card's compute cores to be used for parallel workloads such as scientific computing, deep learning, and image processing.
NVIDIA GeForce GTX 980M: 5.2 vs NVIDIA GeForce GTX 1050: 6.1 (max: 9)
Benchmark tests

Passmark score
Passmark runs a series of tests to measure and compare the speed of the graphics system across a range of workloads.
NVIDIA GeForce GTX 980M: 6934 vs NVIDIA GeForce GTX 1050: 4929 (average: 7628.6; max: 30117)

3DMark Cloud Gate GPU benchmark score
NVIDIA GeForce GTX 980M: 62082 vs NVIDIA GeForce GTX 1050: 38901 (average: 80042.3; max: 196940)

3DMark Fire Strike score
NVIDIA GeForce GTX 980M: 7939 vs NVIDIA GeForce GTX 1050: 5820 (average: 12463; max: 39424)

3DMark Fire Strike Graphics test score
Measures the card's ability to handle high-resolution 3D scenes with complex lighting, shadows, particles, reflections, and other effects.
NVIDIA GeForce GTX 980M: 9213 vs NVIDIA GeForce GTX 1050: 6461 (average: 11859.1; max: 51062)

3DMark 11 Performance GPU benchmark score
NVIDIA GeForce GTX 980M: 11911 vs NVIDIA GeForce GTX 1050: 8148 (average: 18799.9; max: 59675)

3DMark Vantage Performance test score
NVIDIA GeForce GTX 980M: 30397 vs NVIDIA GeForce GTX 1050: 30860 (average: 37830.6; max: 97329)

3DMark Ice Storm GPU benchmark score
NVIDIA GeForce GTX 980M: 311768 vs NVIDIA GeForce GTX 1050: 332408 (average: 372425.7; max: 539757)

Unigine Heaven 3.0 test score
NVIDIA GeForce GTX 980M: 106 vs NVIDIA GeForce GTX 1050: 83 (average: 2402; max: 61874)

Unigine Heaven 4.0 test score
Runs the card through a series of demanding graphical scenes and reports the result as a numerical score.
NVIDIA GeForce GTX 980M: 1349 vs NVIDIA GeForce GTX 1050: no data (average: 1291.1; max: 4726)

SPECviewperf 12 test scores (no data for the NVIDIA GeForce GTX 1050):
  • Solidworks: 40 (average: 62.4; max: 203)
  • specvp12 sw-03 (visualization and modeling with shadows, lighting, reflections): 40 (average: 64; max: 203)
  • Siemens NX: 5 (average: 14; max: 213)
  • specvp12 showcase-01 (a scene with complex 3D models and effects): 45 (average: 121.3; max: 239)
  • Showcase: 45 (average: 108.4; max: 180)
  • Medical: 22 (average: 39.6; max: 107)
  • specvp12 medical-01: 22 (average: 39; max: 107)
  • Maya: 80 (average: 129.8; max: 182)
  • specvp12 maya-04: 80 (average: 132.8; max: 185)
  • Energy: 6 (average: 9.7; max: 25)
  • specvp12 energy-01: 6 (average: 10.7; max: 21)
  • Creo: 25 (average: 49.5; max: 154)
  • specvp12 creo-01: 25 (average: 52.5; max: 154)
  • specvp12 catia-04: 37 (average: 91.5; max: 190)
  • Catia: 37 (average: 88.6; max: 190)

Octane Render (OctaneBench) test score
Evaluates rendering performance using the Octane Render engine. No data for the NVIDIA GeForce GTX 1050.
NVIDIA GeForce GTX 980M: 61 (average: 47.1; max: 128)
Ports

Interface
NVIDIA GeForce GTX 980M: MXM-B (3.0) vs NVIDIA GeForce GTX 1050: PCIe 3.0 x16

HDMI
A digital interface used to transmit high-resolution audio and video.
NVIDIA GeForce GTX 980M: Available vs NVIDIA GeForce GTX 1050: Available

FAQ

How do the NVIDIA GeForce GTX 980M and NVIDIA GeForce GTX 1050 perform in benchmarks?

In Passmark, the NVIDIA GeForce GTX 980M scored 6934 points, while the NVIDIA GeForce GTX 1050 scored 4929.

What FLOPS do the video cards have?

The NVIDIA GeForce GTX 980M delivers 3.53 TFLOPS; the NVIDIA GeForce GTX 1050, 1.81 TFLOPS.

What is the power consumption?

No TDP data is available for the NVIDIA GeForce GTX 980M. The NVIDIA GeForce GTX 1050 is rated at 75 W.

How fast are NVIDIA GeForce GTX 980M and NVIDIA GeForce GTX 1050?

The NVIDIA GeForce GTX 980M operates at a base clock of 1038 MHz and boosts up to 1127 MHz. The NVIDIA GeForce GTX 1050 has a base clock of 1354 MHz and boosts up to 1455 MHz.

What kind of memory do graphics cards have?

The NVIDIA GeForce GTX 980M supports GDDR5, with 8 GB of VRAM and 160.4 GB/s of bandwidth. The NVIDIA GeForce GTX 1050 also uses GDDR5, with 2 GB of VRAM and 112.1 GB/s of bandwidth.

How many HDMI connectors do they have?

There is no data on HDMI outputs for the NVIDIA GeForce GTX 980M. The NVIDIA GeForce GTX 1050 has one HDMI output.

What power connectors are used?

There is no data on power connectors for either card.

What architecture are video cards based on?

The NVIDIA GeForce GTX 980M is built on the Maxwell 2.0 architecture. The NVIDIA GeForce GTX 1050 uses the Pascal architecture.

What graphics processor is being used?

The NVIDIA GeForce GTX 980M is equipped with the GM204 GPU; the NVIDIA GeForce GTX 1050 uses the GP107.

How many PCIe lanes do they have?

The first graphics card has 16 PCIe lanes on PCIe version 3. The NVIDIA GeForce GTX 1050 likewise has 16 lanes on PCIe version 3.

How many transistors?

The NVIDIA GeForce GTX 980M has 5200 million transistors; the NVIDIA GeForce GTX 1050 has 3300 million.