The Nvidia RTX 4070 Ti is a card that's going to make the green team a lot of money. For one thing it's a GPU able to deliver gaming performance either level with or ahead of the RTX 3090, and regularly beyond the more expensive new AMD RDNA 3 graphics cards, too. Importantly, for Nvidia's money people, it's also a slice of Ada GPU silicon that's only a little bigger than the chip inside the RTX 3050 but is retailing for a heady $799.
Those die area and pricing numbers add up to a GPU that should be relatively cheap for Nvidia to produce and will then offer high margins to please shareholders and Jen-Hsun alike.
So, high performance and high margins; the holy grail for any graphics card making company. And when you throw in the twin benefits of DLSS 3 and its black magic Frame Generation tech, you also get the ability to throw hyperbolic performance marketing numbers at it and still be able to back it up with actual benchmark data.
It's also a card that represents something new from Nvidia—something akin to a willingness to admit when it's made a mistake. This card originally went by the name of the RTX 4080 12GB, until Nvidia unlaunched it with units already made and sitting in partners' warehouses waiting to ship. Team green finally realised it was confusing having two very different GPUs under the same name, though only after it was already announced and the industry had performed a collective spit-take.
Now it's rebadged and the Nvidia RTX 4070 Ti, née RTX 4080 12GB, is finally in our test rig with the exact same specifications as before but with a $100 cheaper price point. There is no Founders Edition for the RTX 4070 Ti, and so we've got an MSRP-level Gigabyte RTX 4070 Ti Gaming OC card for our testing of the latest Nvidia GPU.
Nvidia RTX 4070 Ti verdict
Is the RTX 4070 Ti any good?
The unlaunching and subsequent rebadging and repricing of the RTX 4080 12GB was the best thing to happen to this third-tier Ada GPU. Now and forever to be known as the RTX 4070 Ti, this is the card that now makes it impossible to recommend AMD's RX 7900 XT, making that pricey RDNA 3 offering even more unpalatable than it already was.
If, as the RTX 4080 12GB, it had launched ahead of AMD's first chiplet GPUs at the original price, it would have gotten a hammering. Now it's both cheaper and practically as performant, and in some instances more so.
It's had the same negative impact on how I feel about the $1,200+ RTX 4080 16GB card, too. The RTX 4070 Ti is on average some 21% slower at 4K, and much closer at 1440p, but its sticker price is at least 33% lower. I don't know how you recommend someone spend the sort of money retailers are asking for the RTX 4080 when you've got either a far more powerful card in the RTX 4090 or a far cheaper one now in the RTX 4070 Ti bookending it.
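The value argument behind those percentages is simple arithmetic. A quick sketch, using the review's own figures (the ~21% 4K deficit quoted above, and assuming $799 and roughly $1,200 as the two cards' asking prices):

```python
# Rough price/performance comparison using the figures quoted in the review.
# The relative-performance number (RTX 4070 Ti ~21% slower at 4K) is the
# review's own average; the prices are the asking prices quoted in the text.
rtx_4080_price = 1200    # USD, approximate retail price
rtx_4070_ti_price = 799  # USD, MSRP

# If the RTX 4070 Ti is ~21% slower, its relative 4K performance is ~0.79.
rtx_4080_perf = 1.00
rtx_4070_ti_perf = 0.79

price_gap = 1 - rtx_4070_ti_price / rtx_4080_price
print(f"RTX 4070 Ti is {price_gap:.0%} cheaper")  # ~33% cheaper

# Normalised performance per $1,000 spent (higher is better value).
for name, perf, price in [("RTX 4080", rtx_4080_perf, rtx_4080_price),
                          ("RTX 4070 Ti", rtx_4070_ti_perf, rtx_4070_ti_price)]:
    print(f"{name}: {perf / price * 1000:.2f} perf per $1,000")
```

The punchline: even down ~21% on raw frame rates, the RTX 4070 Ti delivers noticeably more performance per dollar than the RTX 4080.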
I really feel like both those unloved AMD and Nvidia cards are due a price correction sooner rather than later. At least the green team has found something else to do with the unwanted RTX 4080 silicon thanks to the new GeForce Now Ultimate tier.
In gaming terms, the 4K performance of the RTX 4070 Ti is impressive even without upscaling, and is rather astounding with it. But it's the 1440p performance where it really makes its mark, most especially against the other GPUs around it. The gap between it and the RTX 4080 slims down, and it can even post higher frame rates than AMD's top RDNA 3 GPUs at 1440p, where the Radeons' larger 20GB and 24GB memory pools can't help them out.
I'd have maybe hoped for a greater performance delta compared with the RTX 3080, the card it's really replacing from the Ampere stack, especially given that it's priced $100 higher. The retrograde comparative specs list is also disappointing—something that irked us when it was originally unveiled under its RTX 4080 12GB guise—and it really feels like that 192-bit memory configuration is Scrooge-level miserly.
But still, putting DLSS and Frame Generation aside, the RTX 3090-esque performance levels of the RTX 4070 Ti are hard to ignore. That was a monster GPU this time last year, and here we have a card that's almost half the price with essentially the same gaming chops.
Considering the current state of the market, this is the graphics card that I'd want above pretty much any other out there apart from the RTX 4090.
Nvidia RTX 4070 Ti specs
What are the RTX 4070 Ti specs?
The RTX 4070 Ti is using an entirely new Ada Lovelace GPU, the AD104. This in itself was one of the main reasons for the consternation about it being referred to as another RTX 4080 despite having a completely different chip at its heart. It's a far smaller GPU, coming in at 294.5mm² compared to the ~378.6mm² of the AD103 in the actual RTX 4080. It's also sporting the full AD104 core (a few decoders aside), where the only AD103 we've seen so far has been a 95% chip missing the full complement of CUDA cores.
Housed inside this full AD104 GPU are 7,680 CUDA cores arrayed over 60 streaming multiprocessors (SMs), with 80 ROPs, 240 fourth-gen Tensor Cores, and 60 third-gen RT Cores.
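Those per-unit counts hang together neatly, since Ada (like Ampere's GA10x chips) puts 128 CUDA cores, four Tensor Cores, and one RT Core in each SM. A quick sanity-check sketch:

```python
# Sanity-check the full AD104's unit counts from its 60 SMs.
# Ada Lovelace (like Ampere GA10x) packs 128 CUDA cores, 4 Tensor Cores,
# and 1 RT Core into each streaming multiprocessor (SM).
SMS = 60

cuda_cores = SMS * 128    # 7,680
tensor_cores = SMS * 4    # 240
rt_cores = SMS * 1        # 60

print(cuda_cores, tensor_cores, rt_cores)  # 7680 240 60
```

(The 80 ROPs don't follow the same per-SM pattern; they're tied to the chip's raster partitions instead.)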
| | RTX 4070 Ti | RTX 3080 12GB |
| --- | --- | --- |
| Lithography | TSMC 4N | Samsung 8N |
| L2 cache | 49,152 KB | 6,144 KB |
| Memory | 12GB GDDR6X | 12GB GDDR6X |
| Memory bandwidth | 504 GB/s | 912 GB/s |
That's a pretty respectable balance sheet, but not the sort of gen-on-gen boost that we've seen from the mighty RTX 4090 compared with its own high-end predecessor. In fact, against either the original RTX 3080 or the mildly tweaked (and price comparable) 12GB RTX 3080, you're looking at a definite retrograde step in terms of GPU configuration.
There are 1,280 fewer CUDA cores in this RTX 4070 Ti, and while Nvidia will likely now wish to retcon this card as one that shouldn't be compared with the xx80-class GPUs of the previous generation, it's difficult not to see that as a step backwards. Of course the Ada and Ampere architectures are different, but not so much that the actual CUDA cores are different in themselves.
So, where does Ada's extra performance actually come from? The real change from Ampere to Ada stems from the TSMC 4N lithography enabling seriously high clock speeds from even the biggest Ada GPUs, and from the huge increase in L2 cache Nvidia has jammed into each chip.
Compared with the RTX 3080 we're talking about a GPU that's running over 1GHz faster on average, and one that's sporting eight times the cache memory inside it. I've previously gone into depth on the full Ada Lovelace GPU architecture in my RTX 4090 review, but in real terms it's not a fundamental change in the cores that delivers the RTX 40-series its performance, but the mix of high clocks and cache.
And then, when you throw in the increasing number of games with ray-traced lighting features, the extra performance of the updated Ada third-gen RT Cores pushes frame rates even further.
The CUDA core count isn't the only generational downgrade, either. The memory system of the RTX 4070 Ti is more akin to that of the RTX 3060 12GB than the RTX 3080 12GB. Sure, it's got the same amount of actual VRAM, but it's operating on an aggregated 192-bit memory bus made up of six 32-bit memory controllers, instead of the 384-bit bus of the older card. You are still getting roughly RTX 3070-class bandwidth, thanks to the GDDR6X memory running at 21Gbps, but it all adds up to a feeling that this is a GPU well down the ladder when it comes to overall specs.
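The bandwidth figures in the spec table fall straight out of bus width and per-pin data rate. A quick sketch, using the publicly quoted configurations:

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) multiplied by the
    per-pin data rate (Gbps), divided by 8 to convert bits to bytes."""
    return bus_width_bits * data_rate_gbps / 8

# RTX 4070 Ti: 192-bit bus (six 32-bit controllers), 21Gbps GDDR6X.
print(memory_bandwidth_gbs(192, 21))  # 504.0 GB/s
# RTX 3080 12GB: 384-bit bus, 19Gbps GDDR6X.
print(memory_bandwidth_gbs(384, 19))  # 912.0 GB/s
# RTX 3070: 256-bit bus, 14Gbps GDDR6 -- the ballpark the 4070 Ti lands near.
print(memory_bandwidth_gbs(256, 14))  # 448.0 GB/s
```

The fast 21Gbps GDDR6X is doing a lot of work to compensate for a bus that's half the width of the RTX 3080 12GB's; the big L2 cache is what keeps that narrower pipe from hurting in games.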
But Nvidia isn't structuring its GPUs by underlying core configurations; it's looking at the raw performance metrics and badging cards up according to frame rates, not specs. Which I guess is how you end up in the sort of mess where it proposes a pair of RTX 4080 cards with wildly different graphics chips inside them.
Nvidia RTX 4070 Ti performance
How does the RTX 4070 Ti perform?
Possibly the most impressive thing to say about the RTX 4070 Ti is that it's very regularly level with, or faster than, an RTX 3090. When you consider that was the $1,500 GPU of the last generation, that looks like a great gen-on-gen uptick in performance, especially at the top 4K resolution.
What's maybe less exciting is that, in straight rasterized gaming terms, it's not a whole lot faster than the old, cheaper RTX 3080 10GB at 4K. It is faster, most especially when you bring those third-gen RT Cores into the equation, but it's clear the higher clocks and heftier L2 cache are having to work hard to give it the lead in raw frame rate terms over the older Ampere card.
4K gaming performance
CPU: Intel Core i9 12900K
Motherboard: Asus ROG Z690 Maximus Hero
Cooler: Corsair H100i RGB
RAM: 32GB G.Skill Trident Z5 RGB DDR5-5600
Storage: 1TB WD Black SN850, 4TB Sabrent Rocket 4Q
PSU: Seasonic Prime TX 1600W
OS: Windows 11 22H2
Chassis: DimasTech Mini V2
Monitor: Dough Spectrum ES07D03
Just imagine how good this card would have been if it was using a cut-down AD103 GPU with the same core count as the RTX 3080, and with the same memory bus.
Where it looks far more positive is up against the new AMD RDNA 3 cards, the RX 7900 XTX and RX 7900 XT. It is generally slower than the top Radeon GPU, but against the still more expensive RX 7900 XT the RTX 4070 Ti regularly posts higher 4K performance.
That Nvidia lead is more consistent at the 1440p level, where the extra Radeon video memory can't give RDNA 3 GPUs the edge and even the AMD RX 7900 XTX sometimes finds itself behind the third-tier GeForce card. That's a real feather in the techie cap of the Nvidia architecture, and this is before we get into any of the upscaling shenanigans that either side of the GPU divide can offer over and above the base graphics performance.
1440p gaming performance
The AD104 GPU once more highlights just how efficient the nominally 4nm Ada Lovelace architecture is. In average gaming terms the RTX 4070 Ti runs well below the 300W mark and offers higher performance per watt metrics than all but the RTX 4080. That is one place that it is comfortably ahead of the RTX 3080 and the Ampere architecture as a whole.
In thermal terms, despite being a dual-slot rather than a triple-slot card, this Gigabyte triple-fan RTX 4070 Ti runs impressively cool, too. I didn't see it get above 60°C in any of my testing. It might be 'just' a dual-slot cooler, but that still means you're getting a real mass of heatsink attached to the tiny PCB the RTX 4070 Ti sits atop, and in real terms it's not that much more slight than the chonky RTX 4080 Founders Edition.
So yes, without any of the DLSS 3/Frame Gen stuff in attendance the RTX 4070 Ti is a very capable performer, but once again the power of Nvidia's upscaling tech is preposterously good. I keep trying to see where the Frame Generation technology fails but I can't do it. Every time I'm like 'aha, there it is, the tell-tale artefact of fake AI frames!' I then check out the native rendering and it looks exactly the same. If not worse.
And with the extra genuine performance of the upscaled frames and the interpolated smoothness of the AI-generated frames, the performance improvement is spectacular where DLSS 3 is available. Which should be more and more often, with Nvidia's Streamline SDK offering devs a one-stop option for enabling it and other vendors' upscaling tech too.
At its highest quality setting DLSS 3 with Frame Generation shows more than double the 4K frame rate of Cyberpunk 2077, Microsoft Flight Sim, and F1 22. For me, this is still the best feature of Nvidia's next-gen architecture, and I really hope Frame Gen gets opened up to work on Ampere at least down the line.
Nvidia RTX 4070 Ti analysis
What does the RTX 4070 Ti mean for gamers?
I still don't really know how to feel good about yet another ultra enthusiast-priced graphics card launching into the wan light of a new year where global economic concerns are front and centre in all our news media. Even on TikTok. Yeah, down with the kids, me.
We're now five GPUs into this new generation of PC graphics cards, and the least offensively priced one is still $799. And that's only because of a climbdown from Nvidia's previous position that this was an $899 GPU. That's all best case scenario, too, because we've seen some AIBs still slapping $900+ price tags on their overclocked versions of the RTX 4070 Ti.
It's clear, then, that we're still a long way from mainstream Ada Lovelace graphics cards, with Nvidia's Jeff Fisher fronting its detail-light CES 2023 event and explicitly stating "the RTX 30-series continues to be the best GPU for mainstream gamers." That's even if, in reality, AMD's RX 6000-series GPUs are the ones you should be dropping dollar on if you're looking for the best bang for your buck at the more reasonable end of the graphics card market.
But there's definitely an argument to be made that in its own price bracket the RTX 4070 Ti is a better option for anyone looking to spend the best part of a grand on their new GPU than either the AMD RDNA 3 cards or the prohibitively priced RTX 4080.
And, honestly, when I started testing the card in the office pre-Christmas, I was surprised at the level of performance it offers. I'd seen all the original benchmarks posted about it as the RTX 4080 12GB, and knew its inflated numbers were down to DLSS 3 and Frame Generation. But it's how well it does without all that which makes it such a worthwhile GPU investment for high-end gamers.
If, however, Nvidia had pushed it out at the same $699 price as the original RTX 3080, with the extra gaming performance that it offers I'd have been far more effusive in my praise of it. And it would have absolutely castrated the AMD RDNA 3 cards, too. At $799 it's only just far enough ahead of the last-gen GPU it's ostensibly replacing.
But it is ahead, and regularly enough performs at the same level as the $1,500 RTX 3090 from the Ampere lineup to still feel like a great upgrade for anyone on a high end card from either the 10- or 20-series GeForce generations.
The only concerns now are what the RTX 4070 Ti means for the lower spec GPUs which will hopefully follow soon. As I mentioned earlier, this AD104 chip has been pared back, most especially when it comes to the memory configurations, compared with the equivalent Ampere cards. Fingers crossed we'll see RTX 4070 and RTX 4060 cards using the same GPU, with a few SMs disabled, but using the same 192-bit bus. Cutting that any more aggressively could really hobble the more mainstream Ada cards.