MSI GeForce GTX Titan X
VS
MSI GeForce RTX 2080 Ti Gaming X Trio

Comparison: MSI GeForce GTX Titan X vs MSI GeForce RTX 2080 Ti Gaming X Trio

MSI GeForce GTX Titan X

Rating: 43 points

WINNER: MSI GeForce RTX 2080 Ti Gaming X Trio

Rating: 73 points
Grade (MSI GeForce GTX Titan X vs MSI GeForce RTX 2080 Ti Gaming X Trio)
Performance: 6 vs 7
Memory: 4 vs 7
General information: 7 vs 7
Functions: 7 vs 8
Benchmark tests: 4 vs 7
Ports: 3 vs 7

Top specs and features

Passmark score

MSI GeForce GTX Titan X: 13032
MSI GeForce RTX 2080 Ti Gaming X Trio: 21893

Unigine Heaven 4.0 test score

MSI GeForce GTX Titan X: 2595
MSI GeForce RTX 2080 Ti Gaming X Trio: no data

GPU base clock speed

MSI GeForce GTX Titan X: 1000 MHz
MSI GeForce RTX 2080 Ti Gaming X Trio: 1350 MHz

RAM

MSI GeForce GTX Titan X: 12 GB
MSI GeForce RTX 2080 Ti Gaming X Trio: 11 GB

Memory bandwidth

MSI GeForce GTX Titan X: 337 GB/s
MSI GeForce RTX 2080 Ti Gaming X Trio: 616 GB/s

Description

The MSI GeForce GTX Titan X video card is based on the Maxwell 2.0 architecture, while the MSI GeForce RTX 2080 Ti Gaming X Trio uses the Turing architecture. The first has 8000 million transistors; the second has 18600 million. The MSI GeForce GTX Titan X is built on a 28 nm process versus 12 nm for the RTX 2080 Ti.

The base clock speed of the first video card is 1000 MHz versus 1350 MHz for the second.

Moving on to memory: the MSI GeForce GTX Titan X has 12 GB, while the MSI GeForce RTX 2080 Ti Gaming X Trio has 11 GB installed. The memory bandwidth of the first video card is 337 GB/s versus 616 GB/s for the second.
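
As a quick cross-check, the listed memory bandwidth can be reproduced from the effective memory speed and memory bus width given further down the page (bandwidth ≈ effective transfer rate × bus width ÷ 8). The short Python sketch below is illustrative only; the helper name is ours, not something from the spec sheet.

    # Illustrative sketch: peak memory bandwidth from effective speed and bus width.
    def memory_bandwidth_gb_s(effective_mt_s: float, bus_width_bits: int) -> float:
        # MT/s x bits per transfer / 8 bits per byte, scaled to GB/s
        return effective_mt_s * bus_width_bits / 8 / 1000

    print(memory_bandwidth_gb_s(7012, 384))   # Titan X: ~336.6 GB/s (listed as 337 GB/s)
    print(memory_bandwidth_gb_s(14000, 352))  # RTX 2080 Ti: 616.0 GB/s (listed as 616 GB/s)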

The MSI GeForce GTX Titan X delivers 6.03 TFLOPS, while the MSI GeForce RTX 2080 Ti Gaming X Trio reaches 14.78 TFLOPS.
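
For context, the theoretical FP32 figure is usually approximated as 2 operations per shader unit per clock. The sketch below uses the shader counts and clocks listed on this page; the quoted 6.03 and 14.78 TFLOPS figures are based on slightly different reference clocks, so treat the results as approximations.

    # Illustrative sketch: FP32 throughput ~= 2 ops per shader unit per clock.
    def fp32_tflops(shader_units: int, clock_mhz: float) -> float:
        return 2 * shader_units * clock_mhz * 1e6 / 1e12

    print(fp32_tflops(3072, 1000))  # Titan X at its 1000 MHz base clock: ~6.1 TFLOPS
    print(fp32_tflops(4352, 1755))  # RTX 2080 Ti at its 1755 MHz boost clock: ~15.3 TFLOPS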

Moving on to benchmark tests: in Passmark, the MSI GeForce GTX Titan X scored 13032 points, while the second card scored 21893 points. In 3DMark, there is no data for the first model; the second scored 20380 points.

In terms of interfaces, both video cards connect via PCIe 3.0 x16. The MSI GeForce GTX Titan X supports DirectX 12.1, while the MSI GeForce RTX 2080 Ti Gaming X Trio supports DirectX 12.
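
For reference, a PCIe 3.0 lane runs at 8 GT/s with 128b/130b encoding, which works out to roughly 0.985 GB/s per lane per direction. The sketch below is illustrative; the figures come from the PCIe 3.0 specification rather than from this comparison.

    # Illustrative sketch: approximate PCIe 3.0 link bandwidth per direction.
    def pcie3_bandwidth_gb_s(lanes: int) -> float:
        transfer_rate_gt_s = 8.0     # PCIe 3.0 signaling rate per lane
        encoding = 128 / 130         # 128b/130b line-encoding efficiency
        return transfer_rate_gt_s * encoding / 8 * lanes

    print(pcie3_bandwidth_gb_s(16))  # ~15.75 GB/s each way for both cards' x16 links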

Regarding cooling, the MSI GeForce GTX Titan X has a 250 W TDP, matching the 250 W of the MSI GeForce RTX 2080 Ti Gaming X Trio.

Why MSI GeForce GTX Titan X is better than MSI GeForce RTX 2080 Ti Gaming X Trio

  • RAM: 12 GB vs 11 GB (about 9% more)
  • GPU memory speed: 1753 MHz vs 1750 MHz (roughly 0% more)

MSI GeForce GTX Titan X vs MSI GeForce RTX 2080 Ti Gaming X Trio: highlights

MSI GeForce GTX Titan X
MSI GeForce RTX 2080 Ti Gaming X Trio
Performance
GPU base clock speed
The base clock speed at which the graphics processing unit (GPU) runs; a higher clock speed generally means better performance.
1000 MHz
max 2457
Average: 1124.9 MHz
1350 MHz
max 2457
Average: 1124.9 MHz
GPU memory speed
This is an important aspect for calculating memory bandwidth.
1753 MHz
max 16000
Average: 1468 MHz
1750 MHz
max 16000
Average: 1468 MHz
FLOPS
FLOPS (floating-point operations per second) measures the processing power of a processor.
6.03 TFLOPS
max 1142.32
Average: 53 TFLOPS
14.78 TFLOPS
max 1142.32
Average: 53 TFLOPS
RAM
RAM in video cards (also known as video memory or VRAM) is a special type of memory used by a video card to store graphics data. It serves as a temporary buffer for textures, shaders, geometry, and other graphics resources that are needed to display images on the screen. More RAM allows the graphics card to work with more data and handle more complex graphic scenes with high resolution and detail.
12 GB
max 128
Average: 4.6 GB
11 GB
max 128
Average: 4.6 GB
Number of PCIe lanes
The number of PCIe lanes in video cards determines the speed and bandwidth of data transfer between the video card and other computer components through the PCIe interface. The more PCIe lanes a video card has, the more bandwidth and ability to communicate with other computer components.
16
max 16
Average:
16
max 16
Average:
L1 cache size
The amount of L1 cache in video cards is usually small and is measured in kilobytes (KB) or megabytes (MB). It is designed to temporarily store the most active and frequently used data and instructions, allowing the graphics card to access them faster and reduce delays in graphics operations.
48 KB
64 KB
Pixel rendering speed
The higher the pixel rendering speed, the smoother and more realistic the display of graphics and the movement of objects on the screen will be.
96 GPixel/s
max 563
Average: 94.3 GPixel/s
154.4 GPixel/s
max 563
Average: 94.3 GPixel/s
TMUs
Responsible for texturing objects in 3D graphics. TMUs apply textures to the surfaces of objects, which gives them a realistic look and detail. The number of TMUs in a video card determines its ability to process textures. The more TMUs, the more textures can be processed at the same time, which contributes to better texturing of objects and increases the realism of graphics.
192
max 880
Average: 140.1
272
max 880
Average: 140.1
ROPs
Responsible for the final processing of pixels and their display on the screen. ROPs perform various operations on pixels, such as blending colors, applying transparency, and writing to the framebuffer. The number of ROPs in a video card affects its ability to process and display graphics. The more ROPs, the more pixels and image fragments can be processed and displayed on the screen at the same time. A higher number of ROPs generally results in faster and more efficient graphics rendering and better performance in games and graphics applications.
96
max 256
Average: 56.8
88
max 256
Average: 56.8
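
The pixel rendering speed listed above follows from ROP count × GPU clock. Judging by the numbers on this page, the Titan X figure appears to use its base clock and the RTX 2080 Ti figure its boost clock; the sketch below is illustrative only.

    # Illustrative sketch: pixel fill rate = ROPs x GPU clock.
    def pixel_rate_gpixel_s(rops: int, clock_mhz: float) -> float:
        return rops * clock_mhz / 1000

    print(pixel_rate_gpixel_s(96, 1000))  # Titan X, 96 ROPs at 1000 MHz: 96.0
    print(pixel_rate_gpixel_s(88, 1755))  # RTX 2080 Ti, 88 ROPs at 1755 MHz: ~154.4
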
Number of shader blocks
The number of shader units in video cards refers to the number of parallel processors that perform computational operations in the GPU. The more shader units in the video card, the more computing resources are available for processing graphics tasks.
3072
max 17408
Average:
4352
max 17408
Average:
L2 cache size
Used to temporarily store data and instructions used by the graphics card when performing graphics calculations. A larger L2 cache allows the graphics card to store more data and instructions, which helps speed up the processing of graphics operations.
3000 KB
5500 KB
Turbo GPU clock
If the GPU is running below its limits, it can boost to a higher clock speed to improve performance.
1075 MHz
max 2903
Average: 1514 MHz
1755 MHz
max 2903
Average: 1514 MHz
Texture fill rate
The number of textured pixels (texels) that can be rendered to the screen each second.
192 GTexels/s
max 756.8
Average: 145.4 GTexels/s
477.4 GTexels/s
max 756.8
Average: 145.4 GTexels/s
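
Likewise, the texture fill rate is TMU count × GPU clock; again the Titan X figure appears to use the base clock and the RTX 2080 Ti figure the boost clock. Illustrative sketch:

    # Illustrative sketch: texture fill rate = TMUs x GPU clock.
    def texture_rate_gtexel_s(tmus: int, clock_mhz: float) -> float:
        return tmus * clock_mhz / 1000

    print(texture_rate_gtexel_s(192, 1000))  # Titan X, 192 TMUs at 1000 MHz: 192.0
    print(texture_rate_gtexel_s(272, 1755))  # RTX 2080 Ti, 272 TMUs at 1755 MHz: ~477.4
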
Architecture name
Maxwell 2.0
Turing
GPU name
GM200
Turing TU102
Memory
Memory bandwidth
This is the rate at which the device stores or reads information.
337 GB/s
max 2656
Average: 257.8 GB/s
616 GB/s
max 2656
Average: 257.8 GB/s
Effective memory speed
The effective memory clock is the physical memory clock multiplied by the number of data transfers per cycle for the memory type. The performance of the device in applications depends on this clock frequency: the higher it is, the better.
7012 MHz
max 19500
Average: 6984.5 MHz
14000 MHz
max 19500
Average: 6984.5 MHz
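
Based on the figures on this page, the effective speed is the physical memory clock multiplied by the transfers per cycle of the memory type (4 for the Titan X's GDDR5, 8 for the RTX 2080 Ti's GDDR6). The sketch below is illustrative only.

    # Illustrative sketch: effective memory speed = memory clock x transfers per cycle.
    def effective_memory_mhz(memory_clock_mhz: float, transfers_per_cycle: int) -> float:
        return memory_clock_mhz * transfers_per_cycle

    print(effective_memory_mhz(1753, 4))  # Titan X GDDR5: 7012 MHz effective
    print(effective_memory_mhz(1750, 8))  # RTX 2080 Ti GDDR6: 14000 MHz effective
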
RAM
RAM in video cards (also known as video memory or VRAM) is a special type of memory used by a video card to store graphics data. It serves as a temporary buffer for textures, shaders, geometry, and other graphics resources that are needed to display images on the screen. More RAM allows the graphics card to work with more data and handle more complex graphic scenes with high resolution and detail.
12 GB
max 128
Average: 4.6 GB
11 GB
max 128
Average: 4.6 GB
GDDR memory versions
Latest versions of GDDR memory provide high data transfer rates to improve overall performance
5
max 6
Average: 4.9
6
max 6
Average: 4.9
Memory bus width
A wide memory bus can transfer more information per cycle. This property affects memory performance as well as the overall performance of the graphics card.
384 bit
max 8192
Average: 283.9 bit
352 bit
max 8192
Average: 283.9 bit
General information
Generation
A new generation of graphics card usually includes improved architecture, higher performance, more efficient use of power, improved graphics capabilities, and new features.
GeForce 900
GeForce 20
Manufacturer
TSMC
TSMC
Power Consumption (TDP)
Heat dissipation requirement (TDP) is the maximum possible amount of energy dissipated by the cooling system. The lower the TDP, the less power will be consumed.
250 W
Average: 160 W
250 W
Average: 160 W
Technological process
A smaller semiconductor process node indicates a newer generation of chip.
28 nm
Average: 34.7 nm
12 nm
Average: 34.7 nm
Number of transistors
A higher transistor count generally indicates a more powerful processor.
8000 million
max 80000
Average: 7150 million
18600 million
max 80000
Average: 7150 million
PCIe connection interface
The expansion interface used to connect the video card to the rest of the system. Newer versions offer higher bandwidth and better performance.
3
max 4
Average: 3
3
max 4
Average: 3
Width
267 mm
max 421.7
Average: 192.1 mm
327 mm
max 421.7
Average: 192.1 mm
Height
111 mm
max 620
Average: 89.6 mm
140 mm
max 620
Average: 89.6 mm
Purpose
Desktop
Desktop
Functions
OpenGL Version
OpenGL provides access to the graphics card's hardware capabilities for displaying 2D and 3D graphics objects. New versions of OpenGL may include support for new graphical effects, performance optimizations, bug fixes, and other improvements.
4.5
max 4.6
Average:
4.5
max 4.6
Average:
DirectX
Used in demanding games, providing improved graphics
12.1
max 12.2
Average: 11.4
12
max 12.2
Average: 11.4
Shader model version
The higher the version of the shader model in the video card, the more functions and possibilities are available for programming graphic effects.
6.4
max 6.7
Average: 5.9
6.5
max 6.7
Average: 5.9
Vulkan version
A higher version of Vulkan usually means a larger set of features, optimizations, and enhancements that software developers can use to create better and more realistic graphical applications and games.
1.3
max 1.3
Average:
1.3
max 1.3
Average:
CUDA Version
Allows you to use the compute cores of your graphics card to perform parallel computing, which can be useful in areas such as scientific research, deep learning, image processing, and other computationally intensive tasks.
6.1
max 9
Average:
7.5
max 9
Average:
Benchmark tests
Passmark score
The Passmark Video Card Test is a program for measuring and comparing the performance of a graphics system. It conducts various tests and calculations to evaluate the speed and performance of a graphics card in various areas.
13032
max 30117
Average: 7628.6
21893
max 30117
Average: 7628.6
Unigine Heaven 4.0 test score
During the Unigine Heaven test, the graphics card goes through a series of graphical tasks and effects that can be intensive to process, and displays the result as a numerical value (points) and a visual representation of the scene.
2595
max 4726
Average: 1291.1
max 4726
Average: 1291.1
Octane Render test score OctaneBench
A special test that is used to evaluate the performance of video cards in rendering using the Octane Render engine.
125
max 128
Average: 47.1
max 128
Average: 47.1
Ports
Has HDMI output
HDMI output allows you to connect devices with HDMI or mini HDMI ports. They can send video and audio to the display.
Available
Available
DisplayPort
Allows you to connect to a display using DisplayPort
3
max 4
Average: 2.2
3
max 4
Average: 2.2
DVI Outputs
Allows you to connect to a display using DVI
1
max 3
Average: 1.4
max 3
Average: 1.4
Interface
PCIe 3.0 x16
PCIe 3.0 x16
HDMI
A digital interface that is used to transmit high-resolution audio and video signals.
Available
Available

FAQ

How do the MSI GeForce GTX Titan X and MSI GeForce RTX 2080 Ti Gaming X Trio perform in benchmarks?

In Passmark, the MSI GeForce GTX Titan X scored 13032 points, while the second video card scored 21893 points.

How many FLOPS do the video cards have?

The MSI GeForce GTX Titan X delivers 6.03 TFLOPS, while the second video card delivers 14.78 TFLOPS.

What is the power consumption?

The MSI GeForce GTX Titan X consumes 250 W. The MSI GeForce RTX 2080 Ti Gaming X Trio also consumes 250 W.

How fast are MSI GeForce GTX Titan X and MSI GeForce RTX 2080 Ti Gaming X Trio?

The MSI GeForce GTX Titan X runs at a base clock of 1000 MHz, with a maximum boost clock of 1075 MHz. The base clock of the MSI GeForce RTX 2080 Ti Gaming X Trio is 1350 MHz, rising to 1755 MHz in turbo mode.

What kind of memory do graphics cards have?

The MSI GeForce GTX Titan X uses GDDR5 memory, with 12 GB installed and a bandwidth of 337 GB/s. The MSI GeForce RTX 2080 Ti Gaming X Trio uses GDDR6, with 11 GB installed and a bandwidth of 616 GB/s.

How many HDMI connectors do they have?

There is no data on the number of HDMI outputs for the MSI GeForce GTX Titan X. The MSI GeForce RTX 2080 Ti Gaming X Trio is equipped with 1 HDMI output.

What power connectors are used?

There is no data on the power connectors for either the MSI GeForce GTX Titan X or the MSI GeForce RTX 2080 Ti Gaming X Trio.

What architecture are video cards based on?

MSI GeForce GTX Titan X is built on Maxwell 2.0. MSI GeForce RTX 2080 Ti Gaming X Trio uses the Turing architecture.

What graphics processor is being used?

The MSI GeForce GTX Titan X is equipped with the GM200, while the MSI GeForce RTX 2080 Ti Gaming X Trio uses the Turing TU102.

How many PCIe lanes do they have?

The first graphics card has 16 PCIe lanes and uses PCIe version 3. The MSI GeForce RTX 2080 Ti Gaming X Trio also has 16 PCIe lanes on PCIe version 3.

How many transistors?

The MSI GeForce GTX Titan X has 8000 million transistors. The MSI GeForce RTX 2080 Ti Gaming X Trio has 18600 million transistors.