Over the last five years, Nvidia GTX 10-Series GPUs have become a staple of the modern gamer’s build. But following the announcement of the Nvidia RTX line, the fate of the GTX series started to look uncertain.
With advanced architecture and cool features such as ray tracing and DLSS, the RTX series appeared to run seamlessly rendered circles around the 10-Series GTX GPUs. It seemed pretty clear-cut until Nvidia unleashed its 16-Series: new cards boasting almost identical Turing architecture to the RTX series, albeit without its specialized RT and Tensor cores.
The 16-Series situates itself, performance-wise, between the 10-Series and the RTX 20-Series cards, but there’s definitely an overlap in the market between some of these units, especially the GTX 1070 and 1660 Ti. So, what actually is the difference between these familiar GPUs? Quite a lot, actually.
The GTX 1070 is built on Nvidia’s 16nm Pascal microarchitecture, in which each SM (streaming multiprocessor) contains 128 CUDA cores, for 1,920 cores in total across the card. These cores are responsible for all the graphical computations that end up as the image and movement you see.
The 1660 Ti, on the other hand, is built on the next-gen Turing microarchitecture. Turing SMs contain 64 CUDA cores apiece, and the 1660 Ti packs 1,536 cores in total. At first glance, this doesn’t seem right: why should the newer model have fewer cores? Well, it simply doesn’t need as many. By pairing an independent integer datapath with the floating-point datapath so the two can execute concurrently, Turing delivers roughly a 50% increase in performance per CUDA core.
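To see why fewer cores can still win, here is a back-of-envelope sketch: relative shader throughput scales roughly with cores times clock times a per-core architectural factor. The core counts and boost clocks are the reference specs quoted in this article, and the 1.5x factor is Nvidia’s own quoted per-core uplift, so treat this as a rough illustration, not a benchmark.

```python
# Crude proxy for relative shader throughput:
# cores x clock x architectural per-core factor.
def relative_throughput(cuda_cores, boost_mhz, per_core_factor=1.0):
    """Higher is (very roughly) faster; units are arbitrary."""
    return cuda_cores * boost_mhz * per_core_factor

# Pascal baseline vs Turing with Nvidia's quoted ~50% per-core uplift.
gtx_1070 = relative_throughput(1920, 1683)
gtx_1660_ti = relative_throughput(1536, 1770, per_core_factor=1.5)

print(f"1660 Ti vs 1070: {gtx_1660_ti / gtx_1070:.2f}x")  # ~1.26x
```

Real-world gaps are narrower because games are rarely pure shader-bound, but the sketch shows how a per-core uplift can outweigh a core-count deficit.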
These architectures also differ in clock speeds. The GTX 1070’s base graphics clock is 1506MHz, which is actually pretty respectable considering the 1660 Ti’s base clock sits at 1500MHz. However, Turing gives the 1660 Ti serious boosting headroom, speeding the clock up to a whopping 1770MHz as opposed to the 1070’s 1683MHz.
The thermal limits of these GTX siblings are very similar; in fact, there’s only one degree in it. The 1070 can be pushed to a maximum of 94 °C, while the 1660 Ti can safely hit 95 °C, although we highly recommend not letting it come to that.
A midrange stock fan should be relatively capable of keeping a GTX 1070 GPU in check; however, once you start really pushing it, hitting around the 90% usage mark, it’s going to get pretty toasty in there, with temperatures frequently rising beyond 80 °C. So, if you’re enamored by a 1070 GPU, a good cooling system is one of the primary features we recommend shopping around for.
GTX 1660 Ti GPUs are extremely energy-efficient cards and will never need cooling as intensive as their 10-Series predecessors, which also means they’re quieter. That’s not to say you should strike thermal solutions from your considerations when shopping for a 16-Series card, but you can prioritize other things such as form factor or some sweet RGB lighting.
The guiding principle of technological advancement is that equipment grows more capable electronically while shrinking physically, and the GTX 1660 Ti is a perfect example of this process. Measuring in at 4.37” (H) x 5.7” (L) x 2 slots (W), it’s a veritable (almost) pocket-sized powerhouse.
At 4.376” (H) x 10.5” (L) x 2 slots (W), the GTX 1070 is a total goliath, comparatively speaking, so if your case is already looking somewhat cramped, this bruiser could cause a few problems.
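If you’re not sure your case can swallow the longer card, a quick arithmetic check helps. The card lengths below are the reference dimensions quoted above; the 9-inch clearance is a hypothetical case spec used for illustration, so check your own case’s manual.

```python
# Reference card lengths in inches (from the dimensions quoted above).
CARD_LENGTH_IN = {"GTX 1070": 10.5, "GTX 1660 Ti": 5.7}

def fits(card, clearance_in, margin_in=0.25):
    """True if the card clears the case's GPU clearance.

    A small margin is left for power connectors and cable bend.
    """
    return CARD_LENGTH_IN[card] + margin_in <= clearance_in

# Hypothetical compact case with 9.0" of GPU clearance.
for card in CARD_LENGTH_IN:
    verdict = "fits" if fits(card, clearance_in=9.0) else "does not fit"
    print(f"{card}: {verdict}")
```

In this hypothetical compact case, the 1660 Ti fits with room to spare while the 1070 does not, which is exactly the cramped-build scenario described above.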
When bench testing these GPUs, the general pattern we see is the 1660 Ti dominating in graphically dense and demanding games such as Assassin’s Creed Odyssey and Shadow of the Tomb Raider, especially at 1080p and 1440p. That said, it did struggle under the workload of large 4K texture sizes.
The GTX 1070, leaning on pure brawn and its larger frame buffer, claws ground back at 4K and takes the lead in older titles such as Assassin’s Creed Unity. It must be stressed, though, that the difference in fps between these two GPUs is almost always imperceptible, with both facilitating visually stunning, buttery-smooth gameplay.
The Turing architecture of the GTX 1660 Ti facilitates something called variable-rate shading. It doesn’t work on all games, but for titles that do support it, you can expect a significant jump in framerate. It works by reducing shading detail in peripheral areas of the display that players’ eyes are unlikely to ever focus on, such as shadowy corners.
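The idea behind variable-rate shading can be sketched in a few lines: the screen is divided into tiles, and tiles the player is unlikely to scrutinize get shaded at a coarser rate. In real VRS the game and driver choose rates per tile using cues like motion and luminance; the centre-distance heuristic, tile grid, and 0.6 threshold below are illustrative assumptions only.

```python
# Toy variable-rate shading heuristic: shade tiles near the screen
# centre at full rate, peripheral tiles at quarter rate (2x2 coarse).
def shading_rate(tile_x, tile_y, width_tiles, height_tiles):
    """Return samples per axis: 1 = full rate, 2 = one shade per 2x2 pixels."""
    # Normalised distance of this tile from the screen centre (0..1).
    dx = abs(tile_x - width_tiles / 2) / (width_tiles / 2)
    dy = abs(tile_y - height_tiles / 2) / (height_tiles / 2)
    dist = max(dx, dy)
    return 1 if dist < 0.6 else 2  # assumed threshold for "peripheral"

# A centre tile shades at full rate; a corner tile at quarter rate.
print(shading_rate(60, 34, 120, 68))  # centre: 1
print(shading_rate(0, 0, 120, 68))    # corner: 2
```

Shading a peripheral tile once per 2x2 block does a quarter of the pixel-shader work there, which is where the framerate jump comes from.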
First things first: neither the GTX 1070 nor the GTX 1660 Ti has dedicated ray tracing hardware. Even though the 1660 Ti shares a common architecture with RTX 20-Series cards, it was never endowed with RT cores or DLSS powers. But the story doesn’t end there.
Nvidia developed a driver that lets you enable ray tracing on certain GPUs born without the dedicated hardware, so if a game supports it, both cards can render ray-traced effects, albeit at a heavy cost in framerate. The 1660 Ti, thanks to Turing’s concurrent integer pipeline, copes with the workload noticeably better than the 1070’s Pascal design. Cool, huh?
Memory configurations account for perhaps the biggest difference between these GPUs besides their general architecture. The GTX 1070 packs 8GB of GDDR5 memory, while the 1660 Ti has only a 6GB capacity, but of faster GDDR6 VRAM.
GDDR6 consumes far less power than GDDR5, which is great for keeping temperatures down when you’re pushing the GPU to its limits. It’s also faster: the 1660 Ti’s GDDR6 runs at 12 Gbps per pin versus the 1070’s 8 Gbps GDDR5, for ultra-swift, silky-smooth rendering.
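The headline bandwidth each card ends up with depends on the memory bus width as well as the per-pin rate: peak bandwidth is the bus width in bytes multiplied by the per-pin transfer rate. The figures below are the cards’ reference specs (256-bit bus with 8 Gbps GDDR5 on the 1070, 192-bit bus with 12 Gbps GDDR6 on the 1660 Ti).

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin rate in Gbps).
def bandwidth_gbps(bus_width_bits, rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * rate_gbps_per_pin

print(f"GTX 1070:    {bandwidth_gbps(256, 8):.0f} GB/s")   # 256 GB/s
print(f"GTX 1660 Ti: {bandwidth_gbps(192, 12):.0f} GB/s")  # 288 GB/s
```

Notice that the 1660 Ti’s faster memory more than compensates for its narrower bus, giving it the higher overall bandwidth despite the smaller 6GB capacity.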
The performance difference between these two GPUs isn’t exactly significant, nor is it constant; rather, it fluctuates depending on the game you’re playing and the resolution you’re playing at. The GTX 1660 Ti, thanks to its advanced GDDR6 memory configuration, provides exceedingly crisp, lag- and flicker-free visuals in most newer titles, so if you’re hoping to future-proof your setup, it’s a great choice.
On the other hand, the GTX 1070 more than holds its own in terms of fps, considering it’s a little longer in the tooth than its 16-Series combatant. As you’d expect, GTX 1070 GPUs are more affordable, but not by a ridiculous amount: you’re looking at a difference of roughly $60 compared to an entry-level GTX 1660 Ti unit.
We can’t tell you what’s best for you, but if we were on the hunt for a mid-to-high-range GPU at this point in time, we’d go with the GTX 1660 Ti for its overclocking headroom, small frame, and dedicated NVENC encoder. If you come to the same conclusion, though, be shrewd as you’re shopping around. It’s best to opt for an entry- to mid-level 1660 Ti GPU, since for the price of the high-end models, you’re not far off affording the jump directly to RTX 20-Series GPUs.