RTX 4070 vs RTX 4080
Which is better? 4070 vs 4080. Let's find out.
RTX 4070 vs RTX 4080: which is better? The RTX 4070 is a recent addition to the lineup, and it looks set to become a very capable middle-ground 40-series GPU. But how does it stack up against its bigger brother, the RTX 4080? Here we compare the RTX 4070 and RTX 4080 to see which is better for you and your use case.
Choosing a GPU largely depends on how you plan to use it. Both cards in this comparison are aimed primarily at gaming; they will handle other tasks, but gaming is their main focus. Without further delay, let's compare the two GPUs.
RTX 4070 vs RTX 4080
Here we compare the RTX 4070 and the RTX 4080 head to head, aiming to find the best GPU for your specific use case, whether that is gaming, workstation duties, or streaming.
RTX 4070 specifications
Here are the RTX 4070 specifications.
- Process: TSMC 4N
- CUDA cores: 5,888
- Base clock: 1,920 MHz
- Memory Size: 12 GB
- Boost clock: 2,475 MHz
- Memory clock: 21 Gbps
- Memory bandwidth: 504 GB/s
- TBP: 200W
- PCIe interface: Gen4 x 16
- MSRP: $599
RTX 4080 specifications
Here are all of the specifications for the RTX 4080.
- Process: TSMC 4N
- CUDA cores: 9,728
- Base clock: 2,205 MHz
- Memory Size: 16 GB
- Boost clock: 2,505 MHz
- Memory clock: 22.4 Gbps
- Memory bandwidth: 716.8 GB/s
- TBP: 320W
- PCIe interface: Gen4 x 16
- MSRP: $1,199
As you can see, the RTX 4080 is a much more powerful GPU, but it is designed to be; the RTX 4070 Ti sits between the two in the lineup. Just how much better is the RTX 4080? The specs suggest a sizable gap.
RTX 4070 vs RTX 4080: Things to consider
Here we will outline some things to consider when comparing the RTX 4070 and RTX 4080. Ultimately, though, your use case decides.
GPU CUDA core count
The number of CUDA cores present in a GPU can impact its performance in specific types of workloads, particularly those that are capable of parallelization via CUDA. This includes fields such as machine learning, scientific simulations, and video processing.
CUDA cores are essentially small processing units embedded within a GPU that can concurrently execute instructions. This allows for the swift computation of tasks that involve the processing of large amounts of data. The more CUDA cores a GPU has, the more parallel computations it can handle simultaneously, leading to improved performance and faster processing times.
It is worth noting that the degree of performance gain obtained by increasing the CUDA core count is subject to various factors, such as the type of workload being executed, the software implementation quality, the GPU’s memory quantity, and speed, as well as the cooling system’s effectiveness.
In general, the power consumption and heat generation increase as the CUDA core count increases. This can make it challenging to increase the core count without risking instability or decreased reliability.
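To make the scaling caveat above concrete, here is a quick back-of-the-envelope sketch using Amdahl's law, which bounds the speedup when only part of a workload benefits from extra parallel units. This is a simplification of real GPU behavior, and the 90% parallel fraction is an illustrative assumption, not a measured value:

```python
def amdahl_speedup(parallel_fraction: float, n_units: float) -> float:
    """Upper bound on speedup when only part of a workload parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

# The RTX 4080 has roughly 1.65x the CUDA cores of the RTX 4070 (9,728 / 5,888).
core_ratio = 9728 / 5888
print(round(core_ratio, 2))  # ~1.65

# If only 90% of a workload scales with core count (illustrative assumption),
# the best-case speedup falls well short of the raw core ratio:
print(round(amdahl_speedup(0.9, core_ratio), 2))  # ~1.55
```

This is one reason a card with 65% more cores rarely delivers 65% more frames in practice.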
GPU clock speed
The maximum rated boost GPU clock speed is a crucial specification that can significantly impact a GPU’s performance. This clock speed represents the highest frequency at which the GPU can function ideally, typically when the workload uses all available processing resources while the GPU remains within its power and temperature constraints.
A higher maximum rated boost GPU clock speed often translates into the ability to process more data and perform calculations more quickly, which leads to improved performance and higher frame rates in games or faster processing times in other applications. However, the actual performance gain from increasing the maximum rated boost GPU clock speed relies on several factors, such as the workload type, the software implementation quality, and the cooling system’s efficiency.
Furthermore, increasing the maximum rated boost GPU clock speed usually involves a trade-off between power consumption, heat generation, and stability, similar to the CUDA core count. The GPU may need a more powerful and efficient cooling system to avoid thermal throttling and ensure steady operation at higher clock speeds.
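As a rough illustration of how core count and boost clock combine, theoretical peak FP32 throughput can be estimated at 2 FLOPs per CUDA core per clock, a common rule of thumb for recent NVIDIA GPUs. Real-world throughput will be lower, and this says nothing about memory-bound workloads:

```python
def fp32_tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    """Theoretical peak FP32 throughput: 2 FLOPs per core per clock."""
    return 2 * cuda_cores * boost_clock_mhz * 1e6 / 1e12

print(round(fp32_tflops(5888, 2475), 1))  # RTX 4070: ~29.1 TFLOPS
print(round(fp32_tflops(9728, 2505), 1))  # RTX 4080: ~48.7 TFLOPS
```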
GPU memory size
The size of a GPU’s memory plays a vital role in its performance, especially in tasks that require processing large amounts of data like video processing, deep learning, and high-resolution graphics rendering. A GPU’s memory, also known as Video Random Access Memory (VRAM), stores the data and instructions that the GPU needs to perform its calculations. With more memory, the GPU can process and store more data, leading to better performance and faster processing times, especially when dealing with extensive datasets.
However, the impact of memory size on performance is workload-specific and is dependent on the efficiency of the software utilized to exploit the GPU’s processing power. In some cases, increasing the memory size beyond a certain point may not translate into substantial performance gains, as the GPU might not be able to use the additional memory fully due to limitations in other areas like processing power or memory bandwidth.
Choosing a GPU with a larger memory size requires careful consideration as it typically comes at a higher cost, not only in terms of the GPU price but also in power consumption. Thus, it’s essential to evaluate the specific needs of the intended workload before deciding on a GPU with a larger memory size.
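Memory bandwidth, which often matters as much as capacity, follows directly from the memory clock and the bus width. The sketch below reproduces the bandwidth figures in the spec lists above; note that the 192-bit and 256-bit bus widths are NVIDIA's published figures for these cards, not numbers taken from this article's spec lists:

```python
def bandwidth_gb_s(mem_clock_gbps: float, bus_width_bits: int) -> float:
    """Memory bandwidth = effective data rate per pin x bus width in bytes."""
    return mem_clock_gbps * bus_width_bits / 8

print(bandwidth_gb_s(21.0, 192))  # RTX 4070: 504.0 GB/s
print(bandwidth_gb_s(22.4, 256))  # RTX 4080: 716.8 GB/s
```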
GPU TBP (total board power)
The Total Board Power (TBP) specification of a GPU defines the maximum amount of power the GPU can draw from the power supply unit (PSU) and the cooling system’s capacity to dissipate heat. The TBP can influence the performance of a GPU in several ways.
If the TBP of a GPU is not sufficient for a given workload, the GPU may struggle to maintain the required clock speeds for optimal performance, resulting in lower frame rates or slower processing times. This is especially critical for tasks requiring consistently high levels of GPU usage, such as gaming or machine learning.
A high TBP can increase power consumption and heat production, causing system instability, thermal throttling, or even hardware failure. The use of an efficient cooling system and a PSU that can deliver the necessary power without overheating or voltage drops can help mitigate these effects.
Overall efficiency can be influenced by the TBP of a GPU. Generally, a higher TBP implies lower power efficiency. This means that a GPU with a higher TBP may require more power to achieve the same level of performance as a GPU with a lower TBP. Careful consideration should be given to the TBP of a GPU when choosing a graphics card for a particular workload.
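Using the 4K 13-game averages from TechSpot's testing (69 FPS for the 4070 at 200W TBP, 107 FPS for the 4080 at 320W), a crude performance-per-watt comparison looks like this. Treat it as a rough figure of merit, not a rigorous efficiency measurement, since actual draw varies by game and settings:

```python
def fps_per_watt(avg_fps: float, tbp_watts: float) -> float:
    """Crude efficiency figure of merit: average FPS per watt of rated TBP."""
    return avg_fps / tbp_watts

print(round(fps_per_watt(69, 200), 3))   # RTX 4070 at 4K: 0.345 FPS/W
print(round(fps_per_watt(107, 320), 3))  # RTX 4080 at 4K: 0.334 FPS/W
```

By this rough measure the two cards are close, with the 4070 marginally ahead per watt.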
RTX 4070 vs 4080 performance
Comparing performance is the key to knowing which card to go for, and with the newer card's release we now have data to compare. We look to the TechSpot review's 13-game averages, which give a good picture of what to expect from each card.
- 1080p: the 4070 averages 175 FPS (139 FPS 1% low), while the 4080 averages 225 FPS (181 FPS 1% low).
- 1440p: the 4070 averages 126 FPS (104 FPS 1% low), while the 4080 averages 182 FPS (148 FPS 1% low).
- 4K: the 4070 averages 69 FPS (57 FPS 1% low), while the 4080 averages 107 FPS (89 FPS 1% low).
That is a significant gap in gaming performance, which goes some way toward justifying the 4080's much higher price tag, at double the 4070's MSRP.
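The gap can be quantified quickly from the 13-game averages:

```python
def pct_gain(faster: float, slower: float) -> float:
    """Percentage advantage of the faster card over the slower one."""
    return (faster / slower - 1) * 100

print(round(pct_gain(225, 175)))  # 1080p: ~29% faster
print(round(pct_gain(182, 126)))  # 1440p: ~44% faster
print(round(pct_gain(107, 69)))   # 4K: ~55% faster
print(round(1199 / 599, 2))       # MSRP ratio: ~2.0x
```

So the 4080's advantage grows with resolution but never reaches its 2x price premium in raw frames per dollar.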
Final word
In conclusion, both the RTX 4070 and RTX 4080 are powerful graphics cards that offer impressive performance for a range of workloads. While the RTX 4080 may have the edge in terms of raw performance, it also comes at a higher cost and may not be necessary for all users.
Ultimately, the choice between these two GPUs will depend on the specific needs and budget of each user. As always, it’s important to carefully consider your options and do your research before making a decision on which GPU to purchase.