In a recent blog post, Anthropic warned of the 'major threat' that is chip smuggling. It claims China has "established sophisticated smuggling operations, with documented cases involving hundreds of millions of dollars worth of chips". This leads Anthropic to argue that export enforcement needs increased funding, and that the tier system introduced in the 'AI Diffusion Rule' needs adjusting to give tier 2 countries better access to American chips under more lenient rules.
Effectively, the AI Diffusion Rule, which takes effect on May 15, would prioritise allies of America for the control of advanced AI chips, and any chips being smuggled into China would subvert the aims of that rule.
For some context on those specific smuggling methods: a woman was caught smuggling 200 CPUs in a prosthetic belly back in 2022. Then, in 2023, two men were found smuggling GPUs into China alongside live lobsters. These aren't just random claims on Anthropic's part; it's citing previously documented cases of smuggling.
Nvidia, however, told CNBC, "American firms should focus on innovation and rise to the challenge, rather than tell tall tales that large, heavy, and sensitive electronics are somehow smuggled in ‘baby bumps’ or ‘alongside live lobsters’".
Given the antagonism between America and China regarding advancement in AI (as shown by the likes of DeepSeek), Anthropic, as an American AI company, has a vested interest in America being the dominant leader in AI. As cited at the bottom of Anthropic's blog post, its leaders have previously argued America's "shared security, prosperity and freedoms hang in the balance" with regard to wider AI support and adoption.
On the way to a recent White House event, Jen-Hsun Huang spoke about the need to "accelerate the diffusion of American AI technology around the world." That's Nvidia's CEO arguing for getting more 'American' AI chips out into different territories, which could kinda put him on the same side of the argument as Anthropic. Especially as he later stated: "China is right behind us," seemingly leaning into that whole secure nationhood stuff, too.
This is one step in a large argument made by companies concerning their competition with China. Just a few months ago, OpenAI argued it should be allowed to scrape copyrighted content as it would lose out to China otherwise.
Anthropic, like many other major AI companies, is reliant on Nvidia hardware for its wider AI workloads. The examples it brings up are from chip smugglers in the last few years, but it doesn't argue that these specific methods are how smugglers are beating detection right now. In this sense, Anthropic is broadly gesturing at a perceived smuggling problem to justify tightening restrictions and enforcement in light of the Diffusion Rule.
Nvidia is hand-waving that concern away with its response, but the incredulity in the messaging does seem a tad strange, given these were previously successful smuggling techniques, albeit not specifically of Nvidia products. Anthropic, for its part, has not given any evidence of relevant, ongoing smuggling techniques as of the time of writing.
Not that this is a particularly big surprise, given the RX 7900 GRE was also a China-exclusive card to start with, although it did become available in limited quantities to the rest of the world eventually.
Anyway, the RX 9070 GRE has 48 Compute Units, a boost frequency of up to 2,790 MHz, 48 Ray Tracing Accelerators, 96 AI Accelerators, and 96 ROPs, with 3,072 Stream Processors in total (via Videocardz). Oh, and 12 GB of GDDR6 VRAM over a 192-bit bus, which is spookily close (by which I mean, bang on accurate) to the leaked specs we reported on last week.
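If you fancy sanity-checking those numbers, they all hang off the CU count: RDNA GPUs carry 64 Stream Processors, one Ray Tracing Accelerator, and two AI Accelerators per Compute Unit. A minimal sketch:

```python
# Per RDNA Compute Unit: 64 Stream Processors,
# 1 Ray Tracing Accelerator, 2 AI Accelerators.
compute_units = 48

print(compute_units * 64)  # 3072 Stream Processors
print(compute_units * 1)   # 48 Ray Tracing Accelerators
print(compute_units * 2)   # 96 AI Accelerators
```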
AMD also says that the RX 9070 GRE is 6% faster than the RX 7900 GRE on average across 30+ games at native 1440p Ultra, which is a bit underwhelming in my book. Not that I'd consider that performance to be slow, exactly, but I can't help but feel it's a roundabout way of saying "slower than the RX 9070 and RX 9070 XT." Which is to be expected, of course, but still.
The pre-order price is listed at 4,199 RMB. That equates to around $588 at today's exchange rates, although it's difficult to do a direct comparison with a Chinese pre-order price versus current Western market GPU prices, given that most models are still yo-yoing around in pricing and availability.
One thing's for sure, though: It's currently slotting in as the least-powerful card in AMD's latest generation lineup—although, should it become available in western markets for a sensible price tag, it might stand a chance of giving the RTX 5070 something to think about.
Still, we're getting towards the sort of time we expect to see an RX 9060 XT release, which will hopefully flesh out the RX 9000-series desktop range good and proper. Computex 2025, perhaps? It's not long now until I pack my bags for Taiwan once more, so I'm hoping AMD might have something a little more exciting (or at least, available) to show off there.
Once again, it's X user Haze2K1 and the shipping data website NBD that are furnishing us with an insight into Intel's graphics machinations. But this time, instead of the G31 GPU itself popping up in the data, it's supporting components for the chip, including socket stiffeners (ooo, er!) and cylindrical multicontact connectors, whatever they are.
The shipments were to Intel's Vietnam facility, where Haze2K1 claims the company's reference-design Limited Edition boards are produced. You could take from that, should you wish to be optimistic, something along the lines of Intel making preparations to start production on a line of G31-based gaming graphics cards.
But hang on, even if that's true, exactly what can you expect from G31? As I explained last time around, Intel's latest Arc B580 and B570 GPUs use the BMG-G21 chip and the bigger G31 GPU has long been rumoured as the next step up in the Battlemage hierarchy.
There's no official information, but the rumour mill has settled on 32 execution units (EUs) for G31, fully 60% more than the 20 EUs of the existing Intel Arc B580. Going by the raw performance of the B580, you might then expect performance up at Nvidia RTX 5070 levels or even beyond.
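For what it's worth, that expectation is just naive linear scaling from the EU count; real-world performance rarely scales so cleanly, since memory bandwidth and clocks don't grow with it. A rough sketch of the arithmetic being done here:

```python
# Naive linear scaling from execution unit count alone.
# Real GPUs scale sub-linearly: bandwidth, clocks and drivers all get a vote.
b580_eus = 20
g31_eus = 32

scaling = g31_eus / b580_eus
print(f"{scaling - 1:.0%} more EUs -> up to {scaling:.1f}x B580 performance")
# 60% more EUs -> up to 1.6x B580 performance
```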
In practice, that seems pretty ambitious. But if G31 can at least get near the 5070, well, that would be pretty exciting, especially if Intel retained its typically aggressive GPU pricing strategy. Hence, my speculation a few weeks ago that G31 could have the makings of a $400 RTX 5070 killer.
Of course, you have to wonder why, if Intel is indeed planning to release a G31-based graphics card, it's taking so long, what with the B580 launching last December. There's been some talk that G31 suffers even more acutely from the excessive CPU load that has compromised the appeal of the Arc B580 board. But ultimately, we just don't know.
That said, if Intel could get a G31-based graphics card out in 2025, it should have over a year to win customers. We wouldn't expect properly new mid-range GPUs from Nvidia until spring 2027. Likewise, AMD's RDNA 4-based GPUs, including the RX 9070 and RX 9070 XT are pretty new on the market, with the RX 9060 series not even on sale yet.
The bottom line is that there's still time for Intel to make an impact. So, get those fingers and toes crossed and keep watching this space.
Nope. Certainly far from in the clear.
If you can wait to buy a graphics card, my advice is that you absolutely should. It's the wild, wild west out there right now and, if you're not desperate, it's better to wait it out.
Can't wait any longer? Maybe your current card's given up the ghost or you fear you don't have the graphical grunt to run Elden Ring Nightreign at a decent lick. Fair enough, but just know that it won't be easy. If refreshing Amazon every five seconds or camping outside your local retailer to try and score a cheap GPU deal is not your thing, I've got you.
I've rounded up the five GPUs I'd buy right now if I had to buy one, so you can waste your money efficiently and overpay just a bit instead of a lot. Don't say I didn't warn you.
MSI RTX 4060 Ventus 2X Black OC | 8 GB GDDR6 | 128-bit | 3072 CUDA cores | 2.46 GHz boost clock | 115W | $380 at Newegg | £249 at Amazon UK
I know that last-gen cards aren't particularly exciting anymore, not when we're all drooling over the likes of the RTX 5090 (which, by the way, apparently can't run Oblivion Remastered at a steady 60 fps. Go figure). But I didn't promise you excitement; I promised you GPUs at prices you'd be able to stomach, and that's why the RTX 4060 is at the top of the list.
With DLSS 3 upscaling, you'll still run most games at high settings at 1080p. Can the RTX 4060 handle 1440p? Yeah, it can, but you'll need to be conservative with the graphics settings.
In our RTX 4060 review, you'll find that the card can comfortably maintain 60 fps in most games even if you drive the settings all the way up to max. Cyberpunk 2077 on ultra with ray tracing on is not a thing for this card, though, at least not without DLSS 3 and Frame Generation.
Going last-gen has one glaring downside: No DLSS 4. You can't quadruple your frame rates with Nvidia's Multi Frame Generation. But, seeing as you can still double them, I say you'll be fine for a few years.
Price-wise, I'm sorry to report that the RTX 4060 is, like every other card, selling above MSRP. Like I said, the GPU market is awful right now. But I found one for sale for $380 at Newegg (or £249 for us lucky Brits) and that's as good as it gets right now.
MSI RTX 5070 Ti Ventus 3X | 16 GB GDDR7 | 256-bit | 8960 CUDA cores | 2.45 GHz boost clock | 300W | $900 at Newegg | £808 at Amazon UK (Palit card)
For gamers whose pockets are deep, but don't quite stretch all the way to the RTX 5090, the RTX 5070 Ti is the only option right now. With the RTX 5080 selling for $1,280 and up on Amazon, you don't have much of a choice.
Back to the RTX 5070 Ti. Polarizing graphics card, that one.
PC Gamer's reviewed two different models of it, the Asus TUF Gaming RTX 5070 Ti and this MSI Ventus 3X RTX 5070 Ti, and while one scored a hefty 86 out of 100, the other fared much worse with a 68. It's not that one was great and the other was not; it's all about the price.
If we just close our eyes and pretend for a second the pricing issue isn't a thing (mmm, what a world...), the RTX 5070 Ti is a decent GPU. It can almost match the RTX 5080 with the right overclock.
The RTX 5070 Ti is roughly 20% faster than the RTX 4070 Ti Super, and around 15 to 20% slower than the RTX 5080. In our 1440p benchmarks, it scored anywhere from 61 fps (Cyberpunk 2077 with ray tracing on ultra settings) to well above 100 fps in games that don't make a mockery out of every GPU. It's brilliant at 1440p, good enough at 4K, and overkill at 1080p.
The RTX 5070 Ti is also overpriced, no surprise there. The cheapest one I spotted was $900 at Newegg, and it also gets you Doom: The Dark Ages for free, which normally costs $100. If you're over on this side of the pond, Amazon UK sells the RTX 5070 Ti for £807.
Sapphire Pulse RX 7800 XT | 16 GB GDDR6 | 256-bit | 60 compute units | 2.43 GHz boost clock | 263W | $580 at Newegg | £461 on Amazon UK (XFX card)
Listen, I'm a sucker for AMD, and I love the RX 9070 XT. I just don't think you should buy one right now, so I'm still recommending the RX 7800 XT instead.
The RX 9070 XT is one frustratingly pricey affordable GPU these days. Its big selling point was that it was massively undercutting the Nvidia competition while delivering on gaming performance. Scarcity and opportunism, however, have bumped the $599 MSRP to the moon. The absolute cheapest option on Newegg is $859, and just typing that is upsetting. At that price point, just get the RTX 5070 Ti, which wins in ray tracing and offers MFG.
The RX 7800 XT is an acceptable alternative if you're willing to forgo (some of) that fancy-schmancy ray tracing stuff. And if not, I've got two more Nvidia options for you below, so what are you still doing here?
The RX 7800 XT didn't break any records when it first launched, but it offered top-notch value for the money, and even now, when it costs $580 at Newegg and $590 on Amazon (or £461 on Amazon UK), that side of it is still true. When we look at pure rasterization, this is a GPU that rivals the RTX 3080 and the RTX 4070, and you'll generally find it for less than either.
A 1440p card through and through, the RX 7800 XT delivers some of its best work at that resolution—although it certainly wouldn't mind if you scaled down to 1080p. At 1080p, it easily hits 60 fps in every game bar Cyberpunk 2077; at 1440p, it gets absolutely obliterated by the dystopian blockbuster, huffing and puffing as it averages 27 fps with ray tracing on. Outside of Cyberpunk, it's an admirable GPU, and it hits anywhere from 60-ish to 120-ish fps in every other game in our test suite.
Gigabyte RTX 5060 Ti Gaming OC | 16/8 GB GDDR7 | 128-bit | 4608 CUDA cores | 2.57 GHz boost clock | 180W | $530 at Amazon | £450 at Amazon UK (MSI card)
I admit that I had low hopes for the RTX 5060 Ti. After many hours spent ranting about the stupidly narrow memory bus on the RTX 4060 Ti and how close those two GPUs were to the RTX 3060 Ti (don't even get me started), I had come to accept that Nvidia was doing the same thing yet again. That's what they want, right?
Imagine my disappointment then, when all my whining was for nothing, and the RTX 5060 Ti actually turned out to be a decent GPU? Preposterous.
We took the Palit Infinity 3 RTX 5060 Ti 16 GB and the MSI Gaming Trio OC RTX 5060 Ti 16 GB out for a spin, and found that the former was 20% faster than the RTX 4060 Ti. This is a 1080p card, but reach into Nvidia's bag of goodies (aka DLSS and Multi Frame Generation) and you can pull off 1440p. If you force it to run at 4K, you'll find that the extra bandwidth gained from using speedier GDDR7 RAM actually puts it 40% ahead of the last-gen GPU.
Performance-wise, I'm pleased; price-wise, this GPU is stuck in the same hellscape we're all living in right now. The cheapest one I've found is this Gigabyte Gaming OC 16 GB model, priced at $530 (or an MSI Inspire 2X model for £450 in the UK).
The 8 GB version of the RTX 5060 Ti finds itself adrift in the current Nvidia lineup. You can buy one for $420 (£360), but the question is, should you? At 1080p, absolutely go for it. Anything beyond and you're better off with the 16 GB version.
Gigabyte WindForce RTX 5070 | 12 GB GDDR7 | 192-bit | 6144 CUDA cores | 2.51 GHz boost clock | 250W | $604 at Newegg | £519 at Amazon UK (Palit card)
While the RTX 5060 Ti was a pleasant surprise, the RTX 5070 sort of disappointed me. But it's only a bit pricier than the RTX 5060 Ti right now, and that gives it some merit.
This GPU is considerably slower than the RX 9070 XT, so if both were sold at MSRP, I'd be sending you a whole bunch of links to buy the AMD flagship instead. It's only around 13% faster than the RTX 4070 Super, but the access to MFG elevates it to a whole new level, unless you're not a fan of them 'fake frames', that is.
Nvidia's CEO Jensen Huang promised us RTX 4090-level performance with this one. Well, at 4K we're only about 80% short of that figure, so he was almost right. Kind of.
But if you compare it to cards in a similar pricing bracket, the RTX 5070 is not too bad. It's just not outstanding, but at $604 (£519), I'll take it.
Given the lacklustre gen-on-gen improvement, you might as well get the RTX 4070 Super, but that old GPU is actually pricier right now. The RTX 5070 quietly slides onto this list with no applause, and no fanfare, but as a means to an end during a rough market.
If you're itching to get yourself a new GPU, I feel your pain. I was stuck in that same limbo back in 2022, with an aging PC that cried quietly when I tried to play Elden Ring. There were just no GPUs to be had, and my poor GTX 1060 had to keep on keepin' on until I finally sent it where GPUs go to retire in 2024.
I waited it out. So should you.
The problem, right now, is twofold. One: GPUs are almost never sold at MSRP, knocking down the value for money aspect. Two: Even if they're ever sold at a reasonable price, they're also quick to sell out. Okay, I lied, because here's… Three: There are no last-gen cards available at MSRP either.
If you've got an unlimited budget and a GPU that's currently on fire as we speak, go ahead and buy one of the five graphics cards I listed above. Otherwise, my recommendation is patience.
The prices are wild, but they're getting better. It's already possible to score a graphics card near MSRP in the UK, and hopefully, the US market will catch up eventually. Until then, keep pushing your old GPU to do its best. I hear words of encouragement don't work, but it never hurts to try.
That's coming from Benchlife, which says that, despite recent rumours to the contrary, "we have reliable sources telling us that there is currently no plan to stop supply or cancel it. As for the news from the market, it is just a rumour. The main reason is as mentioned earlier, it is entirely due to the reaction to the GeForce RTX 5060 Ti" (machine translated).
It's true, as Benchlife says, that Nvidia only sent out 16 GB versions of the card for review. Plus, there's the fact that some early RX 9060 XT listings that showed up were all 16 GB versions. However, it's unclear why these two facts would make people assume that AMD would cancel the 8 GB version entirely.
Whatever the reason, according to Benchlife's sources, we can still expect an 8 GB version in addition to a 16 GB version. We're hoping these will get announced at AMD's Computex press conference towards the end of this month. AMD specifically mentioned gaming in its announcement of this press conference, so fingers crossed.
For all that people like to decry 8 GB of VRAM as not enough for gaming in 2025, with prices being what they are, a budget 8 GB card from AMD might be just what's needed to spice the market up at the low end. Plus, we already know that even some big AAA games are fine with just 8 GB of VRAM—it's all game-dependent.
That being said, everything will, as always, depend on stocks. Without ample stocks, AIBs can charge what they like and people will still presumably lap it up.
As for what to expect in terms of performance, we have some indication that it'll be about half as powerful as the RX 9070 XT. Although this isn't official, recent leaks have it that the RX 9060 XT will have the same number of shader cores and whatnot as the RX 7600 XT, but with a higher boost clock.
Plus, of course, you'll be getting all that sweet RDNA 4 goodness, including FSR 4 upscaling and frame gen. Which, I might add, is a pretty impressive generation of that tech that all but closes the gap on Nvidia. For more speculation about the actual performance of the RX 9060 XT, though, you can check out our Jeremy's story linked above, as he gives a good picture of what might be in store for us.
With an RX 9060/9060 XT launch seemingly just around the corner, the RTX 5060 being "available in May", and even RTX 5060 laptop GPU benchmarks starting to surface, it's looking like the lower end of the GPU market might finally—finally—be kicking into gear. I'll take more options over less any day of the week, 8 GB or otherwise.
As reported by VideoCardz (and attributed to a post on Weibo), a Japanese shop has taken to informing potential customers it will not sell RTX 5090 and RTX 5080 GPUs to those looking to take them outside of Japan.
A memo was spotted underneath the sign for a Zotac RTX 5090 in an electronics store in Osaka, which, when roughly translated with Google, says "RTX 5090/RTX 5080 is only sold to customers for use in Japan. If the purchased product is to be taken out of Japan, it will not be sold."
The Zotac RTX 5090 in said electronics store sells for ¥452,800, which equates to roughly $3,170. This price is inclusive of sales tax, though those with foreign passports can apply for tax-free shopping at many retailers when paying over ¥5,000 ($35).
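For anyone checking the conversion, it's straightforward division, though note that the roughly 143-yen-to-the-dollar rate below is my own assumption for illustration, not a figure from the store:

```python
# Rough JPY -> USD conversion; the exchange rate is an assumed figure.
JPY_PER_USD = 143

print(f"${452_800 / JPY_PER_USD:,.0f}")  # the ¥452,800 card: ~$3,166
print(f"${5_000 / JPY_PER_USD:.0f}")     # the ¥5,000 tax-free threshold: ~$35
```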
Some tourists may opt to buy their GPUs this way, both for the lack of sales tax and to take advantage of dips in the value of the yen. Presumably, this store has seen quite a few tourists picking up cards to bring home or to sell, as it's cheaper (or more readily available) to pick up a card there than at home.
However, no information is given on how exactly this policy could be enforced. The memo doesn't specifically call out tourists, so a simple passport check wouldn't quite work, and the likelihood of a store asking for proof of residence before allowing someone to purchase an item is quite slim. That's before mentioning that a Japanese resident could feasibly purchase a card for a potential buyer and hand it over outside the store—you know, like a teenager chancing their arm at getting alcohol to impress their friends.
Back around the launch of the RTX 30-series cards, UK electronics seller Overclockers UK halted sales to the US due to high demand, and recent tariffs have also stopped RTX 50-series sales shipping to the US, but you can still buy any card in the UK and bring it across should you want to. This new policy in Japan is quite different, as it's specifically about stopping customers from buying in the physical shop.
According to VideoCardz, some Japanese stores opted to deny customers looking to buy GPUs without sales tax, but tourists still bought the cards at full price. Some tourists reportedly found it was cheaper to fly to Japan and buy a card than buy it in their home country, even at an inflated price with included sales tax.
Still, the store is likely drawing a metaphorical line in the sand here, even if it feels very hard to enforce in any real way.
Yup, Nvidia will sell you such a card, and it's not based on an irrelevant GPU built specifically for AI processing. In fact, the Nvidia RTX Pro 6000 Blackwell Workstation Edition uses the same GB202 chip as the mighty GeForce RTX 5090. Except the RTX Pro 6000 gets a very nearly fully enabled GB202 GPU with a ridiculous 24,064 CUDA cores, up from the feeble 21,760 cores of the RTX 5090.
Add in that 96 GB of VRAM, three times the 32 GB you get with the 5090, and you have an absolute beast of a video board. It's the ultimate in future-proofing, surely?
Well, possibly, but there are a couple of pifflingly minor caveats. First, the RTX Pro 6000 has an MSRP of $8,565. That's not great value considering the RTX 5090 with the same GPU, albeit slightly cut down, has an MSRP of $1,999. Even if you factor in the $3,000 that 5090s tend to go for in the real world, you're paying awfully heavily for that extra 64 GB of VRAM.
The other snag is that, despite the fact that the RTX Pro 6000 uses exactly the same gaming-optimised GB202 silicon as the RTX 5090, it doesn't have the same gaming-centric drivers. In reality, it could be a fair bit slower than an RTX 5090 in games.
That's despite a higher peak boost clock of 2,600 MHz versus the 5090's 2,407 MHz. The RTX Pro 6000's base clock is actually around 400 MHz lower than the 5090's, at 1,590 MHz. All with a 600 W TDP.
Moreover, if you could somehow wangle an RTX 5090 at MSRP, then do the same with its successors, presumably the RTX 6090 and RTX 7090, you could probably enjoy all three over a six year period for near enough what you're paying for the RTX Pro 6000.
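If you want the back-of-the-envelope version of that value argument, here it is. The $3,000 street price is the assumption quoted above, and all of these numbers will drift:

```python
# What you're effectively paying per extra gigabyte of VRAM, plus the
# three-generations comparison. Street price is an assumption from the text.
pro_6000_msrp = 8_565
rtx_5090_msrp = 1_999
rtx_5090_street = 3_000
extra_vram_gb = 96 - 32  # 64 GB more VRAM on the Pro card

print((pro_6000_msrp - rtx_5090_msrp) / extra_vram_gb)    # ~$103 per GB at MSRP
print((pro_6000_msrp - rtx_5090_street) / extra_vram_gb)  # ~$87 per GB at street price
print(3 * rtx_5090_msrp)  # $5,997 for three x90-class cards vs $8,565 for one Pro 6000
```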
As for that rumoured RTX 5070 Super with 18 GB, we're talking about a mere forum post referring to such a card. So, it's the most wafer-thin of rumours. But it does make sense.
There's a new variant of GDDR7 memory with 3 GB per chip, which would allow for 18 GB on the RTX 5070's 192-bit memory bus. Indeed, it's those 3 GB chips which allow Nvidia to create a 24 GB RTX 5090 laptop GPU with a 256-bit memory bus, where you'd normally expect 16 GB.
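The VRAM arithmetic is worth spelling out: each GDDR chip hangs off a 32-bit slice of the memory bus, so the bus width fixes the chip count, and per-chip capacity does the rest. A minimal sketch:

```python
def vram_gb(bus_width_bits: int, gb_per_chip: int) -> int:
    """Each GDDR chip occupies a 32-bit slice of the memory bus."""
    return (bus_width_bits // 32) * gb_per_chip

print(vram_gb(192, 2))  # 12 GB: today's RTX 5070
print(vram_gb(192, 3))  # 18 GB: the rumoured RTX 5070 Super
print(vram_gb(256, 3))  # 24 GB: the RTX 5090 laptop GPU
print(vram_gb(256, 2))  # 16 GB: what you'd normally expect on a 256-bit bus
```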
Likewise, an 18 GB RTX 5070 Super would sit more easily in the GeForce range now that Nvidia has released the RTX 5060 Ti with an optional 16 GB variant. It's a little odd that the $429 RTX 5060 Ti 16 GB comes with more VRAM than the $549 RTX 5070 and its mere 12 GB.
Whether Nvidia actually releases an RTX 5070 Super with 18 GB, we'll have to wait and see. At the right price, it could be a decent long-term investment. But expecting an Nvidia GPU at the right price is typically an exercise in frustration.
GeForce hotfix display driver version 576.26 has just launched, offering 10 additional fixes on top of version 576.15, which launched April 21. It solves problems found primarily with RTX 50-series cards in games such as Black Myth: Wukong, Red Dead Redemption 2, and Horizon Forbidden West. The hotfix also addresses issues with LG monitors and problems tracking GPU temperature after waking PCs from sleep mode.
Currently, in the Nvidia App, you can only download version 576.02 in the drivers tab, which was released alongside the launch of the RTX 5060 Ti on April 16. This is because hotfixes aren't published to the app and are something users have to specifically seek out if they want to install them.
Nvidia says in its latest driver support page (with direct links to download the hotfix): "These fixes (and many more) will be incorporated into the next official driver release, at which time the Hotfix driver will be taken down. To be sure, these Hotfix drivers are beta, optional and provided as-is."
One commenter says that the latest driver has fixed problems they saw with connecting to their LG monitor via DisplayPort 2.1 with HDR enabled. Another comment claims that the Horizon Forbidden West freeze fix did not actually solve the crashing issues they have been having.
Nvidia feedback threads are the dedicated spot for, as you might be able to guess, giving feedback, so it's certainly nothing new to see negative responses in them. However, with hundreds of comments, the early feedback on the latest hotfix is looking pretty mixed at this time.
Looking through the Nvidia Customer Care X account, plus the Nvidia feedback threads, hotfixes are a relatively rare update for Nvidia, historically. Despite this, both versions 576 and 572 in the last few months had two separate sets of hotfixes. Back in December, version 566 got a single hotfix, and so too did Nvidia Broadcast in June last year. The last hotfix publicised on the Nvidia Customer Care page before that was on February 28, 2024.
Nvidia does clarify that hotfixes are "run through a much abbreviated QA process" and that "the sole reason they exist is to get fixes out to you more quickly." For a cleaner, safer upgrade process, Nvidia recommends waiting for a WHQL-certified driver.
This is where drivers go through the appropriate channels and are tested by third parties. When a driver makes it to the Nvidia App, it's a guarantee that it has been properly tested. Though, evidently, not necessarily a guarantee that it will definitely work on your specific system.
We should expect to see the full driver update in the near future.
I'm not normally one for game-themed graphics cards, but in this case I'll make an exception. Asus has announced the ROG Astral GeForce RTX 5080 OC Doom Edition graphics card, just in time for the launch of Doom: The Dark Ages on May 13, and I reckon it's a bit of a looker.
Asus has partnered with Bethesda and id Software to celebrate not just the "legendary tenure of Doom" but also the 30-year anniversary of Asus designing and manufacturing graphics cards.
Oh, and apparently it represents Nvidia's longstanding reputation as the world's premier GPU designer, too. A trio of celebrations, then, although I'm less concerned about what it represents and more about how good an Asus ROG Astral card looks in some non-standard colours.
Asus says it'll be sold exclusively on the Bethesda Doom Gear store, and you'll also get a mouse mat, a t-shirt, a yellow keycard (!) and an exclusive Doom Slayer Legionary in-game skin. The listing's not up on the store page yet, but pre-orders are said to be starting shortly, although I'd imagine it won't be cheap.
Not that any RTX 5080 is at this point, but still. A Doom-themed RTX 50-series GPU and a cool yellow keycard? Yep, that'll be all the monies, I reckon. Still, at least you can actually buy one, unlike the Starfield-themed RX 7800 XT of yore, which was only available as a giveaway.
It's a ROG Astral edition card, which means you get the full benefit of all of Asus' cooling-related enhancements. This includes four Axial-tech fans, a patented vapour chamber design, a phase-change thermal pad, and what Asus says is a "vast fin array" that helps provide "effortless cooling in the heat of battle."
Not that the RTX 5080 is known for being a particularly hot-running card, even with a significant overclock on the OC-ed Founders Edition. Our hardware overlord, Dave James, managed to push the chip in our FE sample a further 500+ MHz with very little effort (or any scary temperature leaps), but the Asus ROG Astral cards seem better equipped than most to deal with any "hotter than hell" overclocking moments.
Speaking of Asus overbuilding GPUs, it's also been revealed the ROG Astral lineup has some hidden hardware. Users have noticed that the included GPU Tweak III software has an Equipment Installation Check feature that uses an onboard gyroscopic measurement sensor to tell if the card is sagging in its slot, and provide a warning if it's off-kilter by a certain amount.
Yep, I couldn't quite believe it either. The ROG Astral lineup comes with an anti-sag device by default (as does the Asus TUF Gaming RTX 5070 Ti OC Edition I reviewed earlier this month, which comes with a weeny little GPU post of its own), but I'm mightily impressed that there's an actual sensor on board to tell if your GPU is on the droop.
The Slayer himself would be proud. After all, if you're going into battle with the demons of hell, it's probably best to make sure all your gear is in tip-top shape, isn't it?
Or it could have 64 lollipops, 16 ice creams, and 64 jelly sweets (via Videocardz). The formatting here could be clearer, but those numbers would line up with the top GPU in the RX 9000-series desktop stack, the RX 9070 XT. That's quite the performant graphics cruncher, although what wattage this mobile variant might run at is still up for debate.
RX 9080M: 64 CUs, 16 GB VRAM, 64 MB L3 cache
RX 9070M XT: 48 CUs, 12 GB VRAM, 48 MB L3 cache
RX 9070M: 32 CUs, 8 GB VRAM, 32 MB L3 cache
RX 9070S: 32 CUs, 8 GB VRAM, 32 MB L3 cache
RX 9060M: 28 CUs, 8 GB VRAM, 32 MB L3 cache
RX 9060S: 28 CUs, 8 GB VRAM, 32 MB L3 cache
April 25, 2025
Underneath that is the rumoured RX 9070M XT, said to have 48 CUs, 12 GB of VRAM, and 48 MB of L3 cache, which would match the specs of the China-only (for now, at least) RX 9070 GRE. That card has now turned up for pre-orders on the AMD website after leaks of its existence came out last week.
After that come the RX 9070M and RX 9060M, lower-spec 8 GB GPUs expected to use the Navi 44 GPU, with reported "S" variants. Those are presumably indicative of lower-wattage models, similar to Nvidia's Max-Q mobile GPUs of old.
Should these rumoured mobile GPUs be on their way in the near future, it'd seem likely that AMD's recently-announced press conference at Computex 2025 on May 21 would be the time and place to unveil them, although that's all speculation for now. Still, the event promises the announcement of "key products and technology advancements across gaming, AI PC, and enterprise", so it seems a fair bet we'll get some more concrete details then.
Or not, if AMD's CES 2025 RX 9000-series announcement is any indicator. We were briefed on some of the specs of the new desktop cards beforehand, but the presentation itself amounted to a minimal amount of info.
I'll be at Computex in person again this year, so it'll be interesting to see if the new mobile GPUs make a full appearance among all the inevitable AI feature announcements I've come to love so dearly.
It's also interesting to see the RX 9080 nomenclature rolled out for a mobile GPU, of all things. I'm not expecting AMD to announce an RX 9080 desktop card (after AMD graphics chief Jack Huynh made it very clear that this generation was aiming for the mid-range), but stranger things have happened.
As for whether we'll see the new mobile chips in many gaming laptops? That's unclear, too. The RX 9000-series desktop cards have been popular, so it's reasonable to think that AMD will be hoping some of that shine will rub off on the mobile variants—although whether laptop vendors adopt AMD GPUs en masse remains to be seen.
And what of the RX 9060 desktop cards? AMD confirmed that multiple "RX 9060 products" would be arriving in the second quarter of this year, which would suggest we might see an entry-level desktop GPU at Computex, too. Something to give the RTX 5060 and RTX 5060 Ti some serious competition at the budget end of the market? I'm hoping so, at the very least.
Videocardz reckons the new GRE variant will only be available in China at first, but we'll have to wait for the launch to be sure where exactly it will go on sale.
In the meantime, the RX 9070 GRE is based on the same AMD Navi 48 chip as the RX 9070 and RX 9070 XT, but with some elements disabled. It's rumoured to run 3,072 Stream Processor cores. That compares with 3,584 for the vanilla 9070 and 4,096 for the 9070 XT.
The GRE is also expected to sport a 192-bit memory bus and 12 GB of GDDR6, both of which are a step down from the 256-bit bus and 16 GB of the other 9070 variants.
Consequently, the 9070 GRE ought to be significantly cheaper than the 9070 and 9070 XT, which have MSRPs of $549 and $599, respectively. Of course, you'd do well to buy either card at MSRP currently. It's tricky enough to grab one at all.
So, exactly where the RX 9070 GRE will land in terms of real-world pricing is just about anyone's guess. But it could be priced to take on the new Nvidia RTX 5060 Ti, which has MSRPs of $379 for the 8 GB version and $429 for the 16 GB option.
You'd expect the 9070 GRE to have the edge over those cards for pure raster performance, but perhaps not for ray tracing. Path tracing is arguably too big an ask for any card at this price point.
But Nvidia's typically strong feature set, including the ML upgrades that come with its latest DLSS 4 upscaler, not to mention multi-frame generation, means that the choice is rarely as simple as comparing basic raster performance.
Of course, the GRE may end up a China-only GPU, in which case the comparison will be moot in most markets. But we suspect it will make it to global markets at some point.
For now, there's no word on an official launch date. But with product images leaking, it probably won't be too far away.
What I will say is this is born of my simply booting into the game, slapping each and every slider, from view distance to hair quality, up to Ultra—including Hardware Lumen RT, of course—but to get a vaguely comfortable frame rate I've had to enable both DLSS and Frame Generation. Obviously because I'm an Nvidia apologist.
Still, I was surprised to only be knocking on the door of 60 fps once I'd escaped the stinky Picard-infested sewers and emerged into wider Cyrodiil. This is the most powerful graphics card of today, running only in a pseudo 4K mode because I'm upscaling and actually just running the game at a far lower res.
I honestly wasn't expecting that Oblivion Remastered would be a game that actually necessitated Frame Generation to get the sort of frame rates I've become accustomed to with the RTX 5090. But it is 2025 and it seems like every new game is coming with the stipulation that upscaling and now frame gen are almost requisite for the top settings.
Stick standard 2x Frame Generation on and I can relax, knowing that I'm getting triple digit frame rates. Although Nvidia's own FrameView and overlay monitoring software don't seem to read the generated frames, claiming that I was still languishing around the 60-70 fps mark. That had me initially thinking that FG was giving me practically nothing, and had me rather concerned for the thousands of dollars worth of GPU silicon struggling away under the Elder Scrolls load.
Thankfully, both RivaTuner and Oblivion Remastered's own fps monitors were showing the same higher frame rate, which seems more representative of the actual performance you would expect from adding in some extra frame smoothing goodness. Though it is worth noting that RivaTuner can be a bit funky when it comes to the 1% Low metrics.
I am able to push that overall frame rate up to around the 170-200 fps mark using the DLSS Override feature of the Nvidia App and enabling 4x Multi Frame Gen—and it so far looks okay—but absolutely having to use RTX Blackwell's one neat trick to get higher frame rates and consistently nail a triple digit frame rate is really something.
Without upscaling or Frame Generation, I'm back down to a native 4K experience of around the 50 fps mark, and even with DLSS enabled I'm often dropping down below 60 fps in the outdoors areas.
That obviously changes indoors, with a far more limited view distance and the game engine having to do far less graphically intensive work. With 4x MFG I'm suddenly hitting a heady 200+ fps in those dark corridors.
But still, it's interesting to me that when you go cavalier with the in-game graphics settings in Oblivion Remastered that even the RTX 5090 will struggle. Thankfully, it's still a PC game with the bones of OG Oblivion at its heart, so there are myriad ways to bump up the frame rate, whether that's purely by stepping back the overall settings a touch, or being more aggressive in the upscaling level you're aiming for. And if you've somehow managed to bag yourself an RTX 50-series GPU you get to use Frame Generation in all its forms, too.
You could even do what I did the first time I played the original Oblivion back in 2006 and just run it in a tiny window on your 14-inch CRT monitor, y'know, just to squeeze out a playable frame rate. My old GeForce 6600 LE really did not cope with that game…
Appearing alongside the usual suspects, such as Nvidia CEO Jensen Huang giving a keynote just before the main event on May 19, AMD refuses to be overshadowed: it has announced its own Computex press conference for May 21, at 11 am UTC+8, as both an in-person and streamed event.
Details about what will be discussed are expectedly thin. What we know so far is that Jack Huynh, head of AMD's Computing and Graphics Group, plus guests, will take to the stage to discuss "how AMD is expanding its leadership across gaming, workstations, and AI PCs." As vague as that is, it could be a pretty exciting stageside chat, as a little sleuthing from our Jacob suggests we'll also hear more about AMD's affordable RX 9060 cards.
Last we heard, the RX 9060 cards would arrive some time between April and June. With the slightly delayed launch of AMD's RX 9070 and RX 9070 XT cards, it makes sense that perhaps AMD's plans for the RX 9060 cards have also been pushed back. That means a more detailed announcement in May would still vaguely line up ahead of a potential, properly-summer release.
As a perpetually chilly Brit, that would mean I have to survive England's seemingly never-ending false-spring long enough to get into actual-spring, but let's set aside my temperature-based trials and tribulations. Besides unveiling a few more affordable graphics card options for the cost conscious, AMD could also turn heads with a deeper dive into its Ryzen Z2 APU pitched for future handheld gaming PCs.
At any rate, it's definitely not unexpected that AMD would mention gaming and AI in the same breath—for one, Nvidia made bank on AI in its last financial year, and for another, the company's DLSS 4 deployment has made an Nvidia turncoat of our Andy.
Perhaps AMD is looking to regain some ground in the not-too-distant future. How will they do that, exactly? Well, you'll have to watch a very particular space; be sure to tune into AMD's press conference livestream at 8 pm PT/11 pm ET on May 20.
These hotspots seem, most directly, to be caused by power delivery components being placed too close together on the PCB and connected via too few traces, although at least some of this density could be the result of an oversight in an Nvidia guide for board partners.
That's because, according to Igor, who references the RTX 40-series Thermal Design Guide due to an NDA looming over the RTX 50-series one, the guide is "far too imprecise and incomplete", especially concerning the specific area where the hotspots are occurring.
Although the two cards in focus are the Palit GeForce RTX 5080 Gaming Pro OC and PNY GeForce RTX 5070 OC, Igor claims the hotspot can be seen on many more cards than these. It can apparently be seen in the same spot on "cards from major board partners such as Palit, PNY and MSI as well as variants from other manufacturers."
This can even occur on "cards that at first glance appear to be uncritical in terms of performance, such as an RTX 5060 Ti" because "even cards with a peak power of just 180 watts can be affected if the topology is concentrated on just a few supply phases and the problem persists due to the concentration."
So what's the issue exactly? Essentially—and to simplify something that Igor explains in much more detail—these 50-series PCBs seem to be designed in such a way that the densely packed VRMs concentrate the heat generated by the high current flows into a small area. It's not just about having too few VRMs, either, as it's especially pronounced where the vias (vertical connections through the PCB) pass through the copper power planes. They're closely routed together, resulting in a lack of vertical heat dissipation.
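To build an intuition for why crowding the vias matters, consider simple resistive heating: the power burned in each via is I²R, so halving the via count doubles the current per via and quadruples the heat each one dissipates, in half the area. A toy model, where every number is an illustrative assumption rather than one of Igor's measurements:

```python
# Toy I^2*R model of via heating. Current and per-via resistance are
# illustrative assumptions, not measured values from Igor's Lab.
total_current_a = 100        # current crossing the power plane
via_resistance_ohm = 0.001   # assumed resistance per via

for via_count in (20, 10):   # well-spread vs densely concentrated layout
    per_via_a = total_current_a / via_count
    per_via_w = per_via_a ** 2 * via_resistance_ohm
    print(f"{via_count} vias: {per_via_w * 1000:.0f} mW each, "
          f"{per_via_w * via_count:.1f} W total")
# 20 vias: 25 mW each, 0.5 W total
# 10 vias: 100 mW each, 1.0 W total
```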
This leads to the hotspots that Igor identified. In the RTX 5080 he tested, the temperature in this spot reached over 80 °C and in the RTX 5070 he tested, it reached over 107 °C. While these temperatures might not cause immediate instability, the problem is that this might not be good for long-term stability. Igor says:
"Current-carrying paths are often routed up to the load limit of the thermal and electrical specification, with no additional leeway for ageing, manufacturing variation or load peaks. This results in structures that function reliably under ideal conditions but react sensitively to minor deviations in cooling, placement or load distribution."
The key phrase here is "under ideal conditions", and therein lies the possible criticism of Nvidia's Thermal Design Guide and one possible root cause of this hotspot problem.
The Thermal Design Guide is what Nvidia gives out to AIB partners, cooler developers, and so on, to set the standard(s) to compare different components' thermal planning against. According to Igor, using the RTX 4090's Thermal Design Guide as an example, there are some problems with Nvidia's guide or the AIB partners' use of it.
The overarching problem is that "many of the parameters are idealized assumptions that are relativized in practical implementation by various influencing factors." In other words, its standards don't seem to be applicable to real-world use, at least not without risking thermal issues. In fact, he presses this point home in regards to the hotspot issue he identified, saying: "It is remarkable that this particular area is not highlighted as a critical area in its own right in the official Thermal Design Guide."
Igor also goes on to demonstrate that it wouldn't take much to solve the issue. All one needs to do is create a "thermal relief" (a thermal pad) on the back of the graphics card where the hotspot is located, to help dissipate the heat.
This kind of solution is something that cooler manufacturers for these graphics cards could have presumably implemented had they known about the problem. Which is something that, also presumably, the Thermal Design Guide could and perhaps should have alerted them to.
As with most things in the industry, though, root causes are rarely simple to identify. Igor points out from the start that there are multiple kinds of actor involved in the production of a graphics card, and it might in fact be the poor communication and collaboration between different actors that can lead to such issues:
"Layout managers at board partners are often directly dependent on external PCB suppliers and have to adapt their designs to cooler solutions, which in turn are created under separate design and manufacturing premises. The lack of interoperation between these development lines then leads to design compromises which—as in the present case—can have unfavorable thermal effects without one of the parties involved being able to fully oversee this in isolation."
Too many cooks can spoil a broth, perhaps?
Whatever the case, I wouldn't worry too much about hotspots on your RTX 50-series GPUs just yet. These kinds of temperatures shouldn't cause short-term issues; the concern is more that GPU lifespans could be shortened over the course of a few years.
This isn't great, of course, but you shouldn't have to worry about your graphics card melting because of one of these hotspots any time soon. I'd probably be more concerned about those power cables…
X user Haze2K1 (via Videocardz) spotted a listing for an Intel BMG-31 GPU in a manifest on shipping data website NBD. Intel's latest Arc B580 and B570 GPUs are based on the BMG-G21 chip, while the G31 GPU has long been rumoured as the next step up in the Battlemage hierarchy.
Previous leaks have indicated the G31 GPU packs in 32 execution units or EUs. That compares with the 20 EUs of the existing Intel Arc B580. In really rough terms, then, G31 should be about 50% faster and roughly competitive with an Nvidia RTX 5070 or an AMD Radeon RX 9070.
The catch here is that the mere existence of an Intel graphics card with a G31 GPU being shipped across the world does not prove that Intel plans to launch a retail product. Another rumour from around three weeks ago suggested that the G31 chip as a retail product was cancelled late last year.
Likewise, the shipping manifest lists the card as an "R&D" or research and development item. Of course, that's exactly what a pre-launch card would be listed as even if Intel was indeed planning on launching a retail G31-based graphics card, perhaps branded as Intel Arc B770.
If anything, the main reason to doubt that Intel is actually planning on pushing through a G31 graphics card into the retail market is that the launch window for a competitive offering is narrowing. The longer Intel leaves it following the release of the Nvidia and AMD competition, the less of an impact G31 is likely to make.
Arguably, G31 would have to launch sometime in 2025 to make any sense at all, given you would expect Nvidia and AMD to release their own follow ups in the $500 GPU space towards the end of 2026 or early in 2027.
All that said, we'd welcome a G31-based gaming graphics card at almost any time, provided it's priced right. Intel's B580 board has several very promising attributes, including strong ray-tracing performance, and Intel's XeSS upscaling technology is pretty decent, too.
Given how aggressively Intel has priced the B580 and B570, you'd also expect a G31 board to undercut the likes of the RTX 5070 pretty comprehensively, raising the prospect of a card with performance that's roughly competitive with an RTX 5070 for around the $400 mark. Anywho, we'll have to wait, fingers and toes crossed, to see if Intel does ever wheel out a G31-based GPU and, if it does, just how well it performs.
Some care and attention have been paid to the design of this triple-fan graphics card. That's clear to me as soon as I remove it from its crinkly packaging and spot the dragon motif cut away from its metal backplate. That's not the sort of flair you'll find on either MSRP card from PNY or Palit, with the latter actually lacking a metal backplate altogether.
Metal, plastic, who cares? For this graphics card, it actually does matter. The backplate has a few benefits on the Gaming Trio beyond looking good. First off, it offers some structural rigidity to help prevent any sag on the card, but more importantly it offers a way to help cool the four GDDR7 memory chips loaded onto the rear of the PCB.
You only get 16 GB of VRAM on the RTX 5060 Ti by doubling the four RAM chips used on the 8 GB model. That's some simple maths I can get down with. In practical terms, however, that means using what's called 'clamshell' mode to stick another four 16 Gb (2 GB) chips on the opposite side of the PCB to the GPU. These memory chips are not covered by the GPU's heatsink and will run hotter without some other cooling solution in place.
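Put another way, the 128-bit bus fixes the card at four 32-bit memory channels, and clamshell mode simply hangs a second chip off each channel on the back of the board. The maths, as a quick sketch:

```python
bus_width_bits = 128              # RTX 5060 Ti memory bus
channels = bus_width_bits // 32   # four 32-bit channels
gb_per_chip = 2                   # 16 Gb GDDR7 chips

print(channels * gb_per_chip)      # 8 GB: one chip per channel
print(channels * 2 * gb_per_chip)  # 16 GB: clamshell, two chips per channel
# Note: bandwidth doesn't change, as the paired chips share a channel.
```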
On the Gaming Trio, some thermal pads are applied between them and the backplate, which then acts as a surrogate heatsink. Aww. Though you can get away with none, as proven by the Palit RTX 5060 Ti 16 GB Infinity 3 with a plastic backplate, no pads, and a maximum average temperature across all chips during a never-ending loop of 3DMark Steel Nomad only marginally higher than this Gaming Trio (68°C to 62°C).
Still, data be damned, I don't like the idea of leaving these memory chips without any cooling over years of use. So, that's one benefit to the MSI Gaming Trio. Another is its dashing good looks.
This is by far the best-looking of the three cards I've tested. Admittedly, that's only three cards, but the other two I'm not a fan of, and they're the affordable ones. I'm a sucker for translucent plastic on just about anything—I was a see-thru Game Boy Advance kid—and there's some of that effect to the shroud on this card. It has RGB lighting zones on either side of the central fan and around the MSI badge on the tip of the card, which is easy to see through the glass-fronted MSI Pano 100R PZ case it's currently mounted in.
Good-looking, runs cool, well-built… this card has a lot going for it. But then we do have to talk about the price at some point.
In my Palit RTX 5060 Ti 16 GB Infinity 3 review, you'll find this sentence: "You should walk away immediately at anything above $550. That's RTX 5070 money." Now, want to take a guess as to the price of the Gaming Trio?
It's $550.
That's 28% more than the reference MSRP for this card. It's not that unusual for a third-party graphics card to charge the same price as the 'next tier up'—ie an RTX 5060 Ti for the price of an RTX 5070—but it's never an easy pill to swallow. A sign of the times to be charged over $500 for the bottom-rung GPU of the entire RTX 50-series, too.
This card is overclocked out of the factory to the tune of 2647 MHz, which is 75 MHz above the reference clock. That's not a large overclock for this card, a mere 3%, and we'll push it further in just a moment, but here's how it fares out of the box.
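That percentage falls straight out of the clocks, if you fancy checking it; the 2,572 MHz reference boost is implied by the numbers above:

```python
factory_boost_mhz = 2_647
reference_boost_mhz = factory_boost_mhz - 75  # 2,572 MHz reference boost

print(f"{75 / reference_boost_mhz:.1%}")  # 2.9%, the 'mere 3%' factory bump
```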
The RTX 5060 Ti is able to deliver a healthy performance bump over the RTX 4060 Ti 8 GB. In my tests, the Gaming Trio is 20% ahead at 1080p, 23% ahead at 1440p, and 41% ahead at 4K. I should qualify that we're talking pretty slim 4K frame rates most of the time, often sub-30 fps, and you'll need to tap DLSS 4 and Multi Frame Generation to access more playable frame rates—you can, of course, as this card supports all that.
PC Gamer test rig
CPU: AMD Ryzen 7 9800X3D | Motherboard: Gigabyte X870E Aorus Master | RAM: G.Skill 32 GB DDR5-6000 CAS 30 | Cooler: Corsair H170i Elite Capellix | SSD: 2 TB Crucial T700 | PSU: Seasonic Prime TX 1600W | Case: DimasTech Mini V2
Compared to the reference clocked Palit RTX 5060 Ti Infinity 3—the one with a plastic backplate—the extra money doesn't get you any more frames at 1080p or 1440p. They equal out for gains/losses across our benchmarking suite.
The Gaming Trio does run cooler than its counterparts—it's a couple of degrees lower than the Palit at stock clocks and way better than the PNY Dual Fan OC. The Gaming Trio is, as the name suggests, a triple-fan unit, but it's also a little larger than the Palit at 300 x 125 x 44 mm. That is MSI's measurement, inclusive of the PCIe slot; the shroud itself is about 110 mm wide. The other measurements seem bang-on.
The Gaming Trio also maintains its cool composure when overclocked, which is the one place where it really shines.
Many of the RTX 50-series cards I've tested have kept power limits under lock and key. Not the Gaming Trio. You're free to tinker with the power limit, voltage, and, of course, clock speeds to your liking. I managed to push this card with a 410 MHz offset, which is pretty high considering the factory OC of 75 MHz already applied, and altogether pushes the card in excess of 3100 MHz while gaming.
I also pushed the memory a little but ran into some instability, which forced me to dial back a touch. I'm sure with a little more time, someone could cook up a mean overclock on this card and drop back the power demands, which would be the ideal scenario for long-term use.
This untuned overclock netted me a 10% increase in average frame rates at 1080p. 10%. That's excellent, and exactly what I'm after on an expensive third-party card such as this. At 1440p, that shrinks a little to just under 9%. I also managed to push the core clock on the Palit Infinity 3 with a 400 MHz offset, which handed me almost as much of a leap, but it is still slower than the MSI. The Gaming Trio leads it by 2.4% at 1080p and 1.5% at 1440p.
✅ You want to overclock the RTX 5060 Ti: Even with a fairly lazy overclock, you can pull 10% higher frame rates from this graphics card. That's immense.
❌ You could throw more money at your GPU: You can find an RTX 5070 for near to the price of this RTX 5060 Ti. Albeit likely not as high-quality a card, but I'd rather have the bigger GPU in the long run.
You don't often see these sorts of gains in either clock speed or resulting frame rates on modern processors. Take 'em while you can. The big companies don't like to leave performance on the table like they used to, which has made us wonder why Nvidia's left so much headroom on the RTX 50-series. Their loss, our gain.
Even with a lick more megahertz, the RTX 5060 Ti is still nowhere close to the RTX 5070 in terms of raw performance. That's an awkward admission to make with the MSI stalking the better card's price tag.
But would you get an RTX 5070, especially one in a card of this quality, for anything close to $550? That's a good question. Right now, as of April 17, 2025, you can buy an Asus Prime RTX 5070 for $600. That should be an MSRP card, and it's unlikely to have all the bells and whistles of this MSI, but it houses a much larger chip and makes better use of its 12 GB of memory for it. In the UK, you can buy an RTX 5070 for £500—or £30 less than MSRP—which is the same price as MSI will ask for this card in dear ol' Albion. That's a very basic model, not that dissimilar to the Palit I've reviewed, but it's darn affordable.
By far my favourite RTX 5060 Ti of the lot so far, the MSI has been built to a much higher standard than its cheaper counterparts. That does count for something, but not quite the $121 premium it's asking over the MSRP models. If you can splurge on this MSI, you can probably also save another $50 for an RTX 5070, or just wait for that card to drop to $550. It will happen, eventually, and I'd much rather have the RTX 5070 for the same money in the long run.
I've found myself especially tempted by the Zotac RTX 5060 Ti Solo, a dinky thing with a single fan. It's just been announced by Zotac, alongside Amp and Twin Edge (OC) designs for RTX 5060-series cards.
Single-fan cards are nothing new, of course—especially not for Zotac—but something about the RTX 5060 Ti Solo tickles my fancy in just the right way.
I'm not sure whether it's the diagonal lining, the subtle brown colouring, or just the fact that it's a relatively low-power GPU (compared to the rest of the RTX 50 series) that presumably won't be too hampered by the stubby single-fan design—whatever the reason, something about it is calling out to me in the current GPU market, which is a hellscape of 'out of stock' signs in the US and apparently completely fine in the UK.
Nvidia's RTX '60 Ti cards are often good choices for SFF builds because they tend to offer decent entry-level current-gen performance without all the heat production and power guzzling of RTX '70 and '80 cards. In our Jacob R's RTX 5060 Ti review, he found the latest Nvidia GPU to perform about 20% better than the RTX 4060 Ti and only consume a little more power and produce just a little more heat (though much depends on the specific AIB design, of course).
We tested two triple-fan cards for that review, and both largely stuck around the mid-60°C mark. The twin-fan PNY RTX 5060 Ti Dual Fan OC, however, hit the mid-70s at times. This single-fan design? Well, let's just say a 180 W TGP is quite a lot for a compact card to dissipate, but it's not impossible.
We're not given any actual pricing for the RTX 5060 Ti Solo yet, though, so I might have to eat my words. But let's be optimistic, eh?
About this particular card, Zotac says: "For true SFF-PC enthusiasts, the single-fan ZOTAC GAMING GeForce RTX 5060 SOLO remains a top choice for its maximum compactness, and the ability to sit comfortably in 99% of PC cases on the market. Despite being the smallest GPU of ZOTAC GAMING’s 50 Series line-up, the compact SOLO offers the same great performance."
That 99% figure comes from the fact that it's two slots wide, and of course a little horizontally challenged—which is a good thing. Zotac's well into the SFF market, as we've seen with the Zotac ZBox Magnus One SFF gaming PC, and I reckon a single-fan RTX 5060 Ti makes a lot of sense as another step in this direction.
That, or I'm just looking for ways to justify finally upgrading my GPU… don't judge, okay.
Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.
Yes, an MSRP-priced RTX 5060 Ti 16 GB is, in fact, in stock in the UK right now. Actually—hold up—two MSRP-priced RTX 5060 Tis are in stock: the Gainward GeForce RTX 5060 Ti Python III for £399.95 at Overclockers and the Palit GeForce RTX 5060 Ti Infinity 3, also for £399.95 at Overclockers.
It should be noted that these are essentially the same card (thus, probably, why these two are the MSRP-priced ones in stock). Palit owns Gainward, and the two cards share the same specs and even look very similar.
The Palit card also happens to be the exact one that our Jacob R tested for his RTX 5060 Ti review. He found it to perform about 20% faster than the RTX 4060 Ti, although it is, of course, a little more power-hungry. And don't forget that it has all the benefits of the 50-series architecture, including the ability to generate fake fra- sorry, I mean, to generate multiple frames for each traditionally rendered one.
The big thing here, though, is simply that these are £399 MSRP cards (okay, technically £0.95 over MSRP at £399.95, if we're being pedantic) that are actually in stock two hours after launch. And that's not me being facetious—I don't think we've seen an MSRP card in stock for so long since before Covid (back in the glory days).
Your guess as to why that might be is as good as mine. The listings do say they're available to "UK and IE only due to high demand", and if the little red Overclockers pop-up is to be believed, a fair few have been sold already. Perhaps Palit/Gainward just hasn't realised how hard the cost of living crisis is biting over here, and therefore how low demand is—who knows?
Whatever the case, they're here, and they seem to be staying here, at least for the moment. How long that moment will last, I'm not sure. I know it's certainly tickling my RTX 3060 Ti-owning 'should I?' bone. Maybe I should hit that buy button… for king and country?
Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.
The release notes (PDF) for the new Nvidia driver (version 576.02) list 25 "Fixed General Bugs" and 15 "Fixed Gaming Bugs" (and yes, I did count those line by line). Crucially, these general fixes include ones for various black and blank screen issues.
We've seen fixes for the infamous black screen issues pop up in GeForce driver notes before, but nothing on this scale. I counted 12 (twelve!) general fixes that mention "black" or "blank" screen in them, including one aptly titled "Random Black Screen issues." There are others listed that are probably to do with black/blank screens, too.
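If you fancy recounting them yourself, a few lines of Python over the extracted text will do it. This assumes you've copied the fixed-bugs lists out of the PDF into a plain text file, one fix per line (notes.txt is just a placeholder name):

```python
# Tally release-note fixes that mention black or blank screens.
# Assumes the "Fixed General Bugs" list has been copied out of the PDF
# into a plain text file, one fix per line ("notes.txt" is a placeholder).

def count_screen_fixes(path: str) -> int:
    hits = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            text = line.lower()
            if "black" in text or "blank" in text:
                print(line.strip())  # show each matching fix
                hits += 1
    return hits

if __name__ == "__main__":
    total = count_screen_fixes("notes.txt")
    print(f"{total} fixes mention a black or blank screen")
```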
Another fix caught my eye, too, this being… ah, yes, 5117518—we all know 5117518, right? This is the issue identifier for a problem with Varjo Aero VR headsets seemingly not working with RTX 5090 graphics cards, which we reported on back in March.
Just a few days later, Nvidia acknowledged the problem as an "open issue." Now, one month later, it seems the problem is solved, alongside about a billion others.
Of course, apart from these extensive apparent bug fixes, the driver also has something positive on offer. Primarily that it's required to get your new RTX 5060 Ti up and running.
Nvidia says, "This driver is required for users adding the new GeForce RTX 5060 Ti to their systems. It also includes support for games adding or launching with DLSS 4 with Multi Frame Generation, including Black Myth: Wukong and No More Room in Hell 2."
In addition to this, there's "support for 19 new G-SYNC Compatible displays" and "6 new Optimal Playable Settings profiles" for Deadlock, GTA V Enhanced, Half-Life 2 RTX, inzoi, Monster Hunter Wilds, and The Last of Us Part II Remastered.
Presumably, we'll now be able to play these games with little risk of a random black screen. Fingers crossed, anyway.
Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.
With prices up in the air and what some might call 'macroeconomic headwinds' affecting global moods, I'm not sure how well this 'good deal' statement will age. In fact, I've just had a look around at retailers in the hours preceding publishing this review: the 8 GB card costs as much as the 16 GB card, and the 16 GB card is just under $500. That's not good.
Yet in my pre-release isolation, I've been reasonably impressed with the RTX 5060 Ti 16 GB.
It offers a decent upgrade on the RTX 4060 Ti from a relatively small improvement to its silicon, spurred on by much speedier memory. It has little 'wow' factor, slipping in right where you'd expect it to in the existing stack, but for a more competitive price tag than last-gen models. It's faster than an RTX 4060 Ti but not close enough to the RTX 5070 that a judicious overclock will cannibalise that card—though it does take to overclocking with aplomb.
I'll be focussing on the 16 GB version of the RTX 5060 Ti for this review. Specifically, the Palit RTX 5060 Ti Infinity 3 16 GB, which I'm told will be available at MSRP in the UK. We've not been provided an 8 GB model for launch, though I have included figures for another two 16 GB cards: the PNY RTX 5060 Ti OC, which will be available at MSRP in the US; and the MSI RTX 5060 Ti Gaming Trio OC Edition.
✅ You want an efficient graphics card for 1080p or 1440p: A 20% performance uplift over the previous generation for just 20 W more. That's impressive scaling.
✅ You are on an older graphics card, perhaps a GTX: You can probably live without the extra 20% and Multi Frame Generation if you're already rocking an RTX 40-series graphics card. For someone on an older card, however, the RTX 5060 Ti offers a great upgrade path, providing it's something close to MSRP.
❌ You want more VRAM and the means to use it: The 16 GB version of this card is more appealing for all that VRAM, but this GPU isn't totally equipped to put it to great use at higher resolutions.
❌ Prices end up way over MSRP: Perhaps I don't need to say it, but if you're spending over $500 on this card, you're no longer getting a good deal. Over $550 and you're paying RTX 5070 money…
The RTX 5060 Ti 16 GB is a good entry-level graphics card and a worthy upgrade for some PC gamers.
It benefits from a power-efficient architecture, offers a real-terms performance uplift over the previous generation, and wields a higher VRAM configuration that, at MSRP, is without a controversial price premium. It's a good proposition, even if you prefer to wave away nascent AI-powered features like Multi Frame Generation.
It's an especially good proposition if you're looking to upgrade from an older graphics card, especially one beginning with 'GTX'. Nvidia does recommend using a 600 W power supply with this card, which might nix an easy upgrade for some, but even Intel's Arc B580 requires a 600 W PSU, so you might just have to suck it up and buy a new one if you intend to upgrade.
The RTX 5060 Ti naturally excels at 1080p, though 1440p is easily within its reach. If you're playing a game with support for Multi Frame Generation, you can dual-wield it and DLSS for genuinely impressive frame rates even at the higher resolution. Just be cautious of overstretching the card, both with and without MFG, as it only features slightly improved specs compared to its predecessor, the RTX 4060 Ti. Yet that and its 16 GB of speedy GDDR7 is enough to maintain around a 20% lead at 1080p and 1440p in our testing.
The card is available in two memory configurations, 16 GB and 8 GB, and the latter feels like a small amount of memory in 2025, especially as there have been many cards over the years with more memory for a similar price, including the cheaper Arc B580. But I've yet to see much evidence that the larger VRAM buffer will be a huge boon on what is ostensibly a small GPU compared to others in the RTX 50-series, one intended for sub-4K resolutions. 16 GB is nice, but it's nicer when the GPU and memory bus can make the most of it.
There are big question marks over price and availability with this card, due to prior and ongoing issues affecting the existing RTX 50-series. And then there's AMD's teased but not yet officially announced RX 9060-series, which should arrive within a couple of months. If all that sounds like reasons to hold off purchasing a new graphics card until later in the year, you might be saving yourself some hassle. But if you can't wait any longer and your graphics card is on its last legs, the RTX 5060 Ti at, or near, MSRP is a smart buy.
Let's start with the top line, the all-important specifications. The RTX 5060 Ti is a pretty good deal on paper. It features more cores, more RT chops, higher clocks, and more AI TOPS than its predecessor, the RTX 4060 Ti, and for nominally less cash.
There's slightly less to get excited about when you dive into the GPU silicon, though. An increase in Streaming Multiprocessors (SMs) of just 5.88% cascades through the core counts, RT cores, and Tensor cores. That's hardly a big number to wave in your friends' faces once you get hold of this card. However, the move to much faster GDDR7 memory with the RTX 50-series offers a far more bragworthy figure: a 55.55% increase in memory bandwidth, to 448 GB/s, matching the jump in memory speed from 18 Gbps to 28 Gbps.
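That bandwidth figure is easy to sanity-check: multiply the per-pin speed by the bus width and divide by eight. Here's a quick back-of-the-envelope sketch in Python, using the spec figures quoted above:

```python
# Peak memory bandwidth = per-pin speed (Gbps) x bus width (bits) / 8.
# Both cards use a 128-bit bus; the speeds are the quoted spec figures.

def bandwidth_gb_s(speed_gbps: float, bus_bits: int) -> float:
    return speed_gbps * bus_bits / 8

rtx_4060_ti = bandwidth_gb_s(18, 128)  # GDDR6 -> 288 GB/s
rtx_5060_ti = bandwidth_gb_s(28, 128)  # GDDR7 -> 448 GB/s

print(f"RTX 4060 Ti: {rtx_4060_ti:.0f} GB/s")
print(f"RTX 5060 Ti: {rtx_5060_ti:.0f} GB/s")
print(f"Uplift: {(rtx_5060_ti / rtx_4060_ti - 1) * 100:.2f}%")  # 55.56%
```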
That memory speed and bandwidth improvement is even more pronounced compared to last generation on the lower-end of the RTX 50-series, as the cheaper RTX 40-series cards used slower GDDR6 memory instead of the GDDR6X chips found on enthusiast models.
That's not the only reason to focus on memory with the RTX 5060 Ti: it's available in 16 GB or 8 GB variants.
All of my testing in this review is for the 16 GB variant, with three models of the 16 GB card being the first to arrive on my desk. This card's predecessor, the RTX 4060 Ti, also came in both 8 GB and 16 GB variants, but the larger capacity model arrived later and to little fanfare.
To help put the inevitable memory debate into perspective, the RTX 4060 Ti 16 GB was a tough sell at launch. Launching later than its 8 GB variant for $100 more, it only really benefited a few niche cases, and often not by much. It felt like a cynical launch to us; a reaction to the discontent brewing for a near-$400 card with 8 GB of memory and a 128-bit memory bus. Even Nvidia seemed reluctant to talk much about the card, and that's why we never saw one for review. As such, all of my results for the RTX 4060 Ti are for the 8 GB variant.
Has much changed with the RTX 5060 Ti 16 GB?
Exactly like its predecessor, the RTX 5060 Ti opts for the same, rather paltry, 128-bit memory bus across both variants. For the 16 GB model, it uses a 'clamshell' memory configuration. Essentially, this means that memory has been attached on both sides of the PCB—the memory acting as the bread of the sandwich, so to speak—requiring cooling pads affixed to the graphics card's backplate. Increasing the memory capacity in this way doesn't increase throughput, as memory bus width and speed dictate that, but there's less chance of a hoggish game running out of room and hampering performance.
Yet, unlike the RTX 4060 Ti 16 GB, which cost $100 more than the 8 GB version, the RTX 5060 Ti 16 GB costs just $50 more. That puts it much more in the mix, providing real-world prices actually reflect this.
Memory matters most when gaming at higher resolutions, ie 4K, but we're mostly talking about 1080p/1440p performance in regards to the RTX 5060 Ti. That bears out in my game testing in the performance section below, where the RTX 5060 Ti 16 GB is able to extend its lead over the RTX 4060 Ti 8 GB to a much greater extent at 4K compared to 1440p and 1080p.
But I want to be clear about my thoughts on future-proofing and VRAM: opting for the 16 GB variant is not a panacea for performance in four, five, or six years' time. It might help with the odd case of a game running like pants due to poor optimisation, though these issues can be, and often are, fixed with a little dev work. Ultimately, the fundamentals remain the same either way: the GB206 GPU and 128-bit bus will end up the limiting factors for future performance.
The RT cores, Tensor cores, and CUDA Core counts are still likely to wane in efficacy over time, as new games make better use of more modern features. You're still better off saving up for the xx70 card or a Radeon option if memory performance and longevity matter most to you.
And there have been some edge cases for high VRAM memory usage, such as The Last of Us Part 1, but we've since seen memory usage improvements with its sequel. The Last of Us Part 2 uses a smart asset management system that improves VRAM usage and doesn't trip over its own shoelaces when presented with an 8 GB card. That's not to say all games will follow in its footsteps, but it's a promising sign.
The ideal solution is for Nvidia to adopt 12 GB as standard on its low-end cards and ditch 8 GB altogether, while matching existing prices. That is, rather than this 16 GB stop-gap on a GPU not designed to make the most of it. Nvidia argues that 8 GB helps keep costs low globally, in markets beyond Europe and the US, which is fair, and I don't have the bill of materials in front of me to belabour the point, but perhaps bringing back a desktop xx50 card would help solve that one?
So, would I buy the 16 GB card if presented with it for $50 more than the 8 GB option? Sure. Therein lies the duality of PC gamers and our allergy to reason when building PCs. Altogether, we can chalk up the 16 GB card as a 'nice to have', but I wouldn't pay over the odds for it.
Moving on, the RTX 5060 Ti runs faster than most RTX 50-series cards. That's to be expected for a smaller die, and the GB206 is the smallest of the lot at 181 mm². With a reference boost clock of 2572 MHz, it is only a small amount slower than the RTX 5080 FE at 2620 MHz, but its base clock is the highest yet at 2400 MHz.
That's for a 180 W TGP (total graphics power)—20 W higher than the RTX 4060 Ti and 70 W lower than the RTX 5070.
What's pretty impressive is how each one of those extra watts over the RTX 4060 Ti converts neatly into a single percentage point gained in our testing at 1080p and 1440p—the RTX 5090 at nearly 600 W could only dream of that sort of power scaling.
There is no Founders Edition for the RTX 5060 Ti in either 16 or 8 GB variants. That's a shame, as we've come to appreciate the Founders Edition model for its low temps and noise with other 50-series GPUs, but most of all for its price. MSRP cards are not often found in today's world, and the lack of Founders Edition from Nvidia only serves to remove another opportunity to buy an RTX 5060 Ti at the asking price.
With no Founders Edition, I'll be focusing on the Palit RTX 5060 Ti Infinity 3 for this review. I've been assured this will launch at MSRP in the UK. That's £399 for the 16 GB model, or £349 for 8 GB. Whether this specific card will be widely available in the US or close to MSRP is still a mystery, however.
We've also tested the PNY RTX 5060 Ti Dual Fan OC, which we've been promised is an MSRP model in the US, and the MSI RTX 5060 Ti Gaming Trio OC Edition, which is going to be priced higher. So, all bases are covered in the performance section below.
What you get with the Palit Infinity 3 is a triple-fan shroud with a slim heatsink, measuring 290 x 102.8 x 38.75 mm. It's nowhere near as thick as the MSI RTX 5080 Ventus 3X OC I have close to hand, which I thought to be a fairly slim GPU, but the Infinity 3 is only aiming to dissipate a reasonable 180 W and sticks to reference clock speeds.
I was slightly surprised to see the single 8-pin power connector on this card, as opposed to the 12V-2x6 connector on the MSI card. Though the PNY showed up shortly after the other two and also uses a single 8-pin power connector, so that might be a bit of a theme.
Opening up the rear of the Palit Infinity 3 to gaze upon its clamshelled memory, I was surprised to find there are no extra thermal pads to cover these rearward chips. In fact, the entire backplate is made of plastic, so it wouldn't be a wise idea anyway. That makes for a stark comparison to the MSI Gaming Trio OC and PNY OC, which both feature a metal backplate and a couple of thermal pads for those rear memory chips.
Due to the lack of thermal pads, I've not pushed a memory overclock on this card, as I usually would with the RTX 50-series. I kept an eye on memory junction temperatures while benchmarking Metro Exodus, and they stayed below 70°C. This suggests there's no major issue at stock speeds, though these are average temperatures across all memory chips, which might hide the worst fluctuations on those specific rear-facing chips.
A 20% improvement versus the last generation, that's what Nvidia was touting for this card before we got our hands on it, and it's a fair assessment. My own testing bears that out, with the RTX 5060 Ti 16 GB 20% faster than the RTX 4060 Ti 8 GB at 1080p, and 23% at 1440p—that's raw raster, too, upscaling and frame generation notwithstanding.
It's a good showing from the RTX 5060 Ti, considering it only asks for 20 W more from the wall. In my testing during Metro Exodus runs, it averaged a power draw roughly 26 W higher than its predecessor's, too, with a peak of 207 W. All only a hair's breadth from the RTX 4060 Ti it replaces. That shows off the benefits of Blackwell, notably in its power usage through features like improved power gating, which we've seen make a big impact on gaming laptop battery life.
PC Gamer test rig
CPU: AMD Ryzen 7 9800X3D | Motherboard: Gigabyte X870E Aorus Master | RAM: G.Skill 32 GB DDR5-6000 CAS 30 | Cooler: Corsair H170i Elite Capellix | SSD: 2 TB Crucial T700 | PSU: Seasonic Prime TX 1600W | Case: DimasTech Mini V2
I don't suspect we'd have seen sweeping changes to those percentages had we tested an RTX 4060 Ti with 16 GB of VRAM. Maybe some improvements here or there. The larger memory buffer makes little difference to performance unless memory is the bottleneck, and in most cases, at least in our benchmarking suite, it's not. It will make a difference in some titles, but, for reference, our sister site Tom's Hardware reviewed the RTX 4060 Ti 16 GB and reported it largely tied for performance with the 8 GB variant at 1080p and 1440p.
At 4K, the 5060 Ti manages to outperform the 4060 Ti 8 GB by a massive 40% on average in my testing. This is heavily skewed by a near-doubling of frames from 'bad' to 'improving but still bad' in Cyberpunk 2077—8 to 15 fps. Most of the time, a 30% increase is to be expected, especially in games with a passable frame rate. That does make the RTX 5060 Ti a near-enough 4K-capable card in less demanding games, such as Homeworld 3 and The Talos Principle. Throw in upscaling and frame generation and it's absolutely ready to go.
But there are other comparisons we can make to show the RTX 5060 Ti's potential. Take one of AMD's best price/performance cards from the last generation, the RX 7800 XT. This card launched at $499, but has been available for as low as $430, matching the 5060 Ti 16 GB. It's a shame it's not still available at that sort of price, as it would've made for a great analogue to Nvidia's latest.
The RX 7800 XT is around 7–8% quicker than the RTX 5060 Ti, with 16 GB of memory to match. Right now, an RX 7800 XT will cost you around $650, in which case, you might as well look for a newer RX 9070, which runs circles around both cards. The 9070-series isn't the intended competition to the RTX 5060 Ti, however, that would be the RX 9060-series. These cards have been confirmed and are headed our way by June, so says AMD, though we know zilch (officially) about them yet.
Weighing up only Nvidia's options, you have the RTX 5070 sitting around 29% faster than the 5060 Ti at 1080p, 33% faster at 1440p, and, despite having only 12 GB of VRAM, 36% faster at 4K. That's a great example of why memory capacity isn't everything—you need the supporting silicon too. With a supposed price tag of $549, this card will cost you $949 if you want to buy one in the US right now. In the UK, however, it's been spotted below MSRP. Make sense of that one… I'll have a go myself shortly.
With an easy overclock, I am able to get the RTX 5060 Ti to within 20% of the RTX 5070 at 1080p, and 23% at 1440p.
The RTX 5060 Ti is a solid overclocker, as the rest of the RTX 50-series lot has been. There's more to gain on an entry-level graphics card, though, where boosting performance by a frame or two—or, in some games, five or six—counts for more.
With a 400 MHz offset on this reference clocked card, I saw an average increase in frame rates of 7% at 1080p and 8% at 1440p.
With the Palit lacking any thermal pads on the memory chips attached to the rear of the PCB, I took a memory overclock out of the equation. No offset for this card, which is a shame, as Samsung's GDDR7 chips will easily run well above 28 Gbps on most others I've tested.
For the core clock, that was easy. I set a 425 MHz GPU core offset with minimal fuss, looping 3DMark's Steel Nomad test in the background to show any immediate issues. None came up. I then took to Metro Exodus Enhanced Edition, which is a great measure of overclock stability and usually highlights any gremlins in the machine. That it did, too, and it was back to the drawing board after a subsequent crash.
A couple of tweaks later, and I'd found a stable +400 MHz offset, which lands me with an average clock speed of 3040 MHz in Metro Exodus Enhanced Edition at 4K. Anything over 3000 MHz fulfils my need to see a big number on-screen and, as I've since found with an MSI model, going much over 3000 MHz doesn't eke much more out of this chip. The benefit there is that, if your GPU allows it, you can work backwards from there by using power limits to reduce your overall power consumption.
I kept it simple with limited time available to me. Even so, this set-and-forget overclock offered a very respectable performance uplift with no impact on power draw or temperatures, as I never touched the power limits.
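For anyone wanting to watch their own card while a stress test loops in another window, a minimal telemetry script does the job. This is just a sketch using Nvidia's NVML Python bindings (installed here via the nvidia-ml-py package), not a stand-in for our benchmarking setup:

```python
# Minimal GPU telemetry loop: watch core clock, board power, and core
# temperature while a benchmark loops in another window. Requires the
# NVML Python bindings (pip install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        power = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # mW -> W
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"core: {clock} MHz | power: {power:.1f} W | temp: {temp} C")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```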
It wouldn't be an RTX 50-series card without support for DLSS 4 and Multi Frame Generation. These two features are arguably the more practical ways to tap into Nvidia's self-proclaimed "Age of Neural Rendering" and utilise AI to improve frame rates and visual fidelity.
The RTX 5060 Ti and its fellow 50-series cohorts can all enable Multi Frame Generation—this essentially allows your graphics card to generate two or three 'fake' frames between 'real' ones, in 3X or 4X modes, respectively. RTX 40-series graphics cards can also make use of Frame Generation, however, only in 2X mode—that is, they generate one 'fake' frame between each 'real' one.
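The arithmetic behind those modes is simple enough to sketch out. Assuming the card hits a given rate of 'real' frames, and ignoring the overhead of generating the rest (so treat this as a best case), each rendered frame becomes N displayed frames in NX mode:

```python
# Frame Generation arithmetic: in 'NX' mode, each rendered frame yields
# N displayed frames, N-1 of them generated. This ignores the small cost
# of generating frames, so treat the output as an upper bound.

def displayed_fps(rendered_fps: float, mode: int) -> float:
    return rendered_fps * mode

def generated_share(mode: int) -> float:
    return (mode - 1) / mode

for mode in (2, 3, 4):
    fps = displayed_fps(40, mode)  # e.g. 40 fps of 'real' frames at 1440p
    share = generated_share(mode) * 100
    print(f"{mode}X: {fps:.0f} fps displayed, {share:.0f}% of frames generated")
```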
The good news is, even on a small GPU such as this, these features can make a big difference. You still need to play at a resolution and graphics quality that makes sense for an RTX 5060 Ti, hence why I dropped our usual 4K tests down to 1440p. I ran tests across Alan Wake 2, Cyberpunk 2077, and Dragon Age: The Veilguard, and I was able to crank up frame rates to as much as five times what the card could muster at native 1440p. That's pretty spectacular, and is thanks to both Multi Frame Generation and DLSS in Quality mode.
I would argue for moderation in some games, however. Alan Wake 2 looks superb and runs heaps faster with DLSS and Frame Generation up to 2X. After that, in the 3X and 4X Multi Frame Generation modes, which tip the odds in favour of 'fake' frames, I found the picture quality lacking in areas. This happens most notably in 4X mode, which causes some excessive ghosting on objects and introduces over 130 ms of latency at times, leading to a noticeably sluggish response with a keyboard and mouse. It feels slightly like using a controller, but don't let your console friend read this.
It's a funny thing, increasing your frame rate for a more sluggish-feeling game, but it's not always the case with MFG. In Dragon Age: The Veilguard, the higher base frame rate without MFG enabled makes for a much smoother, low-latency experience with it on. Importantly, for this single-player game that isn't massively snappy at the best of times, it feels absolutely worth the extra fluidity overall. It feels much more like native frame rate with Frame Generation (2X) enabled, and 3X is a good trade-off too, though similarly to Alan Wake 2, it does have some more noticeable artefacts at 4X.
In Cyberpunk 2077, too, MFG 4X works extremely well. I mooched around a bazaar in the game and ran the benchmark through many times, and many of what I assumed to be MFG 4X artefacts turned out to just be the game's native bugs. I was sure one sign flickering in and out of existence was because the AI had got it wrong, but it wasn't. You can still move around at speed and get a little queasy at the blurriness, which does slightly counteract the unnaturally high frame rate, but overall, I was thoroughly impressed with this implementation.
If you enable MFG on one game and not on another, you'd be doing it right. It's not an 'always on' feature, and as such, I'd not lean on it entirely for performance analysis of the RTX 5060 Ti—the rasterised performance is still crucial. Yet it's a very good feature when used sparingly and one that does strengthen the RTX 5060 Ti proposition, though so too does standard Frame Generation, which is also on the 40-series.
It is impressive how Nvidia can land a graphics card right where it wants it in the market. I joked with colleagues when the RTX 5060 Ti showed up that we could take the median frames per second of an RTX 4060 Ti and RTX 5070 and end up with the RTX 5060 Ti result. Lo and behold, that's pretty much where it is in testing, if a little slower at times.
To see a roughly 5% increase on all the key specs—CUDA cores, RT cores, Tensor cores—turn into 20% is impressive, though it does make you wonder if the Blackwell architecture had a lot more gains to offer throughout the stack, had Nvidia needed to do more. The truth is, it probably didn't need to, as it turns out a 20% uplift in performance or thereabouts is a decent deal considering the price reduction, and AMD's currently MIA at this price point.
AMD plans to announce/launch the RX 9060 series by the end of June. That leaves only a few months to get things finalised and out the door. I won't spend this review pondering the rumoured specs to see where AMD's entry-level card lands, as that'll be out of date in a week's time, but we can hope for something competitive after the RX 9070-series, which massively benefited from clever pricing on AMD's part.
I would think that Nvidia's close pricing of the 8 GB and 16 GB RTX 5060 Ti this time around—just $50 separates the two, compared to $100 with the last gen—is intended to act as a preemptive blocker on AMD. AMD isn't able to parachute a card into No Man's Land between Nvidia's cards, as there's simply no room. Instead, it'll have to compete with Nvidia's offerings head-on, or massively undercut both. Otherwise, there are no nasty surprises for AMD in terms of performance here, so it's unlikely to be in panic mode.
That only works if Nvidia's, or AMD's, prices remain roughly analogous to their MSRPs, which also depends on whether those MSRPs are in any way realistic or attainable. On that point, there's obviously a lot to talk about.
The RTX 5060 Ti's price is a sticking point. It's been a sticking point for Nvidia's entire RTX 50-series, and most of all an issue for cards lacking a Founders Edition, such as the RTX 5070 Ti and, yes, this one. The huge premium placed on top of Nvidia's MSRP has been in large part to blame for the rocky reception these cards have received, and I've looked around very quickly this morning, April 16, and seen mostly evidence that prices will be high for the RTX 5060 Ti.
Both the RTX 5060 Ti 8 GB and RTX 5060 Ti 16 GB are on Newegg already, listed at $420 and $480, respectively.
The question is whether these inflated prices stick around or not.
US readers are likely thinking "you're having a laugh, mate", but in American. Prices for the existing RTX 50-series graphics cards have shown little sign of coming back down to Earth. Similarly, you can argue that it's Nvidia's MSRP that's the product of wishful thinking, as AIBs seem to be universally bumping prices up beyond it and getting the backlash as a result.
Nvidia told me in a pre-briefing that it "can work with our partners to get these out at reasonable prices, which we are doing," and the whisperings we've heard from retailers are positive about the stock situation. But Nvidia has confirmed that these prices are "not going to be inclusive of tariffs," and when pressed on that hot topic said: "There's not much we can do about that."
The Trump administration's tariffs currently exclude computers and semiconductors coming into the USA, which wasn't the case when I asked Nvidia about them. They might be back on, even increased, by the time you read this. This isn't Nvidia's problem to deal with alone, and all computer component manufacturers will be worrying about their bottom lines, but it does have the potential to make mincemeat of the value proposition of this card, as it does all others.
We'll have to see how that one plays out, but it's not all bad news.
In my native land of the UK, those sticky price points aren't sticking around. As I type this, there is ample stock of near-MSRP, MSRP, or even sub-MSRP RTX 5070 and RTX 5070 Ti cards. Normality has returned, and there's reason to believe the same will be true of the RTX 5060 Ti and forthcoming RTX 5060. These cards will only reach the top of the Steam Hardware Survey by being mostly affordable to most people.
So, I'm not all doom and gloom, though I do find the ramp-up of prices and lack of MSRP models by manufacturers to be a huge concern. The RTX 5060 Ti 16 GB absolutely loses its shine above $500, and you should walk away immediately at anything above $550. That's RTX 5070 money. Or it should be.
This info was spotted in an SEC filing, in which Nvidia says that "first quarter results are expected to include up to approximately $5.5 billion of charges associated with H20 products for inventory, purchase commitments, and related reserves."
According to Nvidia, this is off the back of the US government informing the company on April 9 that a license is required for the export to China of "H20 integrated circuits and any other circuits achieving the H20’s memory bandwidth, interconnect bandwidth, or combination thereof."
Following this, again according to Nvidia, "On April 14, 2025, the USG informed the Company that the license requirement will be in effect for the indefinite future."
The H20 is essentially a modified version of the H100 GPU, a powerful 'Hopper' architecture chip that sits on the datacentre side of the aisle, just across from the 'Ada Lovelace' RTX 40-series processors. Both architectures have been succeeded by 'Blackwell' chips, but Hopper chips are still incredibly powerful and populate many of the biggest tech companies' server racks.
The H20 was made to comply with China export restrictions that started to come into effect in 2022 and later restricted export of powerful chips such as the H100 and even less powerful ones such as the H800 and A800. Thus the scaled-back H20 was born, and since then it's been the most powerful AI chip that China's been able to get its hands on.
Now, according to Nvidia's SEC filing, it looks like even this chip will no longer be allowed to be exported to China without a license from the US government. And clearly the China H20 market must have been a big one if Nvidia is claiming $5.5 billion in charges associated with the new regulations.
According to Reuters, "two sources familiar with the matter" claim that Nvidia didn't warn some Chinese customers about these new export rules. This apparently meant that some companies were expecting H20 deliveries by the end of the year.
Regulations such as these are no joke, either, as we've already seen breaking them can risk some serious repercussions. TSMC, for instance, might be fined over $1 billion for allegedly breaking export rules after one of its chips was found in a Huawei processor.
Still, while the $5.5 billion charges surely must sting, that's nothing compared to the amount that Nvidia is planning on investing in US-based chip production. Just a few days ago, the company announced plans to invest $500 billion in "AI infrastructure" in the US.
With these new reported export rules and the looming threat of semiconductor tariffs, the chip industry writ large—not to mention, of course, the burgeoning and booming AI industry—is in uncertain waters.
And while PC gaming tech is a little downstream from all this, it's most definitely the same stream. Here's hoping that after this $5.5 billion hit, Nvidia will still have the money to pump into more RTX 50-series stock.
Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.
With 4,608 shaders (along with 144 TMUs and 48 ROPs), the RTX 5060 Ti is only a fractionally larger GPU than its predecessor, the RTX 4060 Ti. Even the boost clock is just a few MHz higher, but in its favour, the RTX 5060 Ti sports GDDR7 running at 28 Gbps—that gives it a total VRAM bandwidth of 448 GB/s, significantly more than the 4060 Ti's 288 GB/s.
As with all RTX 50-series graphics cards, you're getting the full DLSS 4 suite of upscaling, frame generation, and other AI goodies.
When Nvidia announced the prices for the RTX 5060 Ti, we were all pleasantly surprised by the fact that Team Green hadn't bumped them right up. There's no Founders Edition from Nvidia for the RTX 5060 Ti but there's a decent range of base models from every AIB partner to choose from.
However, we have no idea what stocks are going to be like, nor what the eventual non-MSRP models will sell for. Given the current uncertainties about US tariffs on computer components, it's probably going to be quite messy.
Below you'll find all the models that are already listed at MSRP, or that we've been told will be at MSRP at launch. Note that for the US market, there's not a single 8 GB RTX 5060 Ti listed for $379 or a 16 GB one for $429, whereas the UK does have a few models at the suggested retail price.
It's worth noting that more 5060 Ti cards might appear at MSRP after launch, but given the current tariff situation in the US, that might not happen at all. And for those in the UK that are listed at MSRP, there's always the chance that the price rapidly increases as stocks dwindle.
Here's the list of non-MSRP models, and you can see that for the US market, it's slim pickings. It's not a case of retailers displaying forthcoming models without prices; this is literally all that's on show right now.
Over in the UK, there's a much wider range of 8 GB and 16 GB RTX 5060 Ti cards at non-MSRP, but many of them have placeholder values (e.g. £7,000!) or nothing whatsoever, so I've just listed the retailer's entry as is.
This was the case even before the Trump tariffs, but the uncertainty surrounding the administration's tariffs, and whether they apply to computer parts now or in the future, certainly doesn't help.
Nvidia has said that the RTX 5060 and RTX 5060 Ti GPU MSRPs are "not inclusive of regional VAT or any tariff." This means should a tariff be put on PC parts, Nvidia is at least for now washing its hands of the whole pricing debacle that would surely follow.
In fact, Nvidia's GeForce product manager Justin Walker says there's "not a whole lot we can do about tariffs." There's at least an attempt at reassurance: "We can work with our partners to get these out at reasonable prices, which we are doing."
Credit where credit's due, I'd rather this honesty than hands-over-ears *la la la tariffs what tariffs?* But it still stings, as reality often does.
The RTX 5060 and RTX 5060 Ti were announced today and will start at $299 for the RTX 5060, $379 for the 8 GB RTX 5060 Ti, and $429 for the 16 GB RTX 5060 Ti. The sweet spot will likely be that $379 model.
But it'll only be sweet if that's actually the price we find the 8 GB version of the RTX 5060 Ti graphics cards retailing for. Currently, while the UK is faring a little better, in the US it's difficult to find RTX 50-series cards for anything close to their MSRPs, and that's been the tale ever since their respective launches.
Throw the new US tariffs into the mix, too, and I'm not filled with confidence. It's not as if we're getting a clear picture of what these tariffs will be, either, since there's a lot of seeming flip-flopping going on. The latest is that computer parts and semiconductors have been made exempt from tariffs, according to a White House press release, which had already been watered down, though there is still a threat of further tariffs to come, according to Donald Trump on Truth Social.
Likely in response to growing pressure on US businesses to build things in the US, Nvidia has announced plans to build AI chips and supercomputers in the US for the first time. Some chips are already in production at TSMC's Arizona plant, but there are plans for new plants in Texas in collaboration with Foxconn and Wistron.
Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.
Whether Nvidia feels the pressure to change its MSRPs due to tariffs isn't the main threat to PC gamers' purse strings, though. The card partners building and shipping these cards across the globe might be sweating over their bottom lines and considering price increases.
For now, any immediate price rises might be eased by the ongoing exemption, but if that changes or prices rise, the customer will bear the brunt of it.
Let's not end on that note. Maybe the RTX 5060 and RTX 5060 Ti will be the return of reasonable pricing in 2025—it's not impossible, I suppose, and they are expected to be available in even higher volume than the RTX 5070 thanks to that ickle GB206 GPU.
And there's always the RX 9060 and RX 9060 XT to consider, with the latest rumours suggesting the latter is essentially an RX 9070 XT cut in two. Judging by AMD's initial announcement, these cheaper RDNA 4 cards should be launching relatively soon, and if the RX 9070 XT giving it to the RTX 5070 Ti is anything to go by, they might offer some decent competition against the RTX 5060 and RTX 5060 Ti.
Then again, it's hard to find RX 9070 and RX 9070 XT cards in stock at MSRPs, too… Damn, and there was me not wanting to finish on a sour note.
Emphasis on "starting" there, for a few reasons. The RTX 5060 will launch with an MSRP of $299. Then there's the RTX 5060 Ti with 8 GB of VRAM at $379 (£349), and above that the RTX 5060 Ti with 16 GB of VRAM at $429 (£399).
If you're not au fait with the original RTX 40-series pricing, the RTX 4060 was $299 at launch, the RTX 4060 Ti 8 GB was $399, and the RTX 4060 Ti 16 GB was pointless. I mean, $499. So, generally, the new RTX 50-series looks a better deal than the last lot.
Though there is another dynamic at play, and that's whether we'll see any of the above pricing actually come to fruition on launch day. Or thereafter. The positive spin is that, in the UK anyway, there are some RTX 50-series cards already available at MSRP, or in one rare case, less than MSRP. The negative spin is that this is not the case in the US right now.
Right now, US tariffs exclude computer parts and semiconductors, but on that question specifically, Nvidia said: "These [prices] are not going to be inclusive of tariffs, there's not much we can do about that."
But what it can do is talk to its graphics card partners about the prices they're charging. "We can work with our partners to get these out at reasonable prices, which we are doing," says Nvidia. Though how much impact that will have is going to be something that will only come to light when the cards are launched.
The RTX 5060 Ti, in both 16 GB and 8 GB, is set to launch April 16 (yep, tomorrow). The RTX 5060 won't show up until next month. There's no exact date for that launch yet, which I presume is intended to offer Nvidia some leeway in case AMD tries to spoil the party with its entry-level RX 9060-series cards, which are on their way sometime before June is up.
There's also no Founders Edition for either card, which leaves it up to board partners to decide the prices.
So, here are the specifications. Or should I say specification? Nvidia has only released info on the RTX 5060 Ti, which lands with us first, but the whole entry-level line-up uses the same GPU, codename GB206, powered by the Blackwell RTX architecture.
Compared to the RTX 4060 Ti, there's been a small increase across the GPU on the RTX 5060 Ti. Two more Streaming Multiprocessors (SMs) amount to a few more CUDA cores at 4608, two more RT cores, and eight more Tensor cores. Otherwise, the largest gains come from the improvements baked into the Blackwell architecture and the faster GDDR7 memory on-board.
That memory will make a difference. At 28 Gbps, this GDDR7 is a lot faster than the GDDR6 found on entry-level RTX 40-series cards. As such, the memory bandwidth on the RTX 5060 Ti is 55% higher than the RTX 4060 Ti at 448 GB/s.
There's also your choice of 16 GB or 8 GB of it, as I previously mentioned. Nvidia says it made the decision to stick with 8 GB on two of the three cards to keep them "optimised for price and performance".
"Could you put more memory on this? Sure, but what does it get you?" says Nvidia GeForce product manager Justin Walker of the decision.
Since we've got the rough TFLOPs for the RTX 5060 and the rest of the info for the RTX 5060 Ti, it's not too difficult to work out where the RTX 5060 lies either. I've thrown those figures in, too, though note they're not officially official. We do have some performance figures to go off for both cards at least.
As you might expect, Nvidia rolled out Multi Frame Generation, its latest frame increasing feature, for most of its own figures. We'll have more to share of our own soon, but in the meantime, here's what Nvidia says.
I was told we can expect around a 20% improvement in regular rasterised gameplay from the RTX 5060 Ti and around 20 to 25% with the RTX 5060. Check the slides for game-by-game figures with Multi Frame Generation enabled, largely in 4X mode (which is an RTX 50-series/DLSS 4 exclusive feature), which obviously bumps those numbers up a lot compared to older cards.
That's what Nvidia says it's targeting with this launch, too: older cards, maybe even of a 'GTX' variety. It's hoping that it might sway some GTX 10-series or 16-series users to buying these cards, which is certainly possible, providing the price is right and you can actually buy one.
If past RTX 50-series launches are anything to go on, it might take a while for these cards to really hit the market at their intended prices. Who can say when that'll happen, but these cards are intended to sell at volume to a large number of people, which presumably means Nvidia will be shipping absolutely tons of them. Stay tuned for our review.
Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.
"Nvidia Blackwell chips have started production at TSMC’s chip plants in Phoenix, Arizona," the company said in today's announcement. "Nvidia is building supercomputer manufacturing plants in Texas, with Foxconn in Houston and with Wistron in Dallas. Mass production at both plants is expected to ramp up in the next 12-15 months.
"The AI chip and supercomputer supply chain is complex and demands the most advanced manufacturing, packaging, assembly and test technologies. Nvidia is partnering with Amkor and SPIL for packaging and testing operations in Arizona."
Nvidia said its AI supercomputers "are the engines of a new type of data center created for the sole purpose of processing artificial intelligence," and regardless of what AI ultimately delivers there's no question it's the thing of the moment and Nvidia clearly has high hopes for its made-in-America plan: The company said manufacturing all these chips and supercomputers "is expected to create hundreds of thousands of jobs and drive trillions of dollars in economic security over the coming decades."
“The engines of the world’s AI infrastructure are being built in the United States for the first time,” Huang said in the statement. "Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency."
The Trump administration was quick to take credit for Nvidia's announcement, calling it "the Trump Effect in action." But others have said the move is not the result of current policies but rather the CHIPS Act, signed into law in 2022 during the Biden administration: TSMC, one of the manufacturing partners named in the Nvidia announcement, received nearly $12 billion in direct funding and loans as a result of the CHIPS Act. Ironically, Trump has previously expressed a desire to kill the act, calling it a "horrible, horrible thing" in March and saying the same results could be achieved through the imposition of tariffs.
Nvidia's announcement comes amidst the chaos of US president Donald Trump's on-again, off-again tariffs against the rest of the world, primarily China: On Friday, the Trump administration exempted phones and computer hardware from the massive 145% tariffs being applied to goods made in China, but by Sunday Trump said there was no exemption, and that those products are simply "moving to a different tariff 'bucket'," which may or may not be announced this week.
The announcement of new US-based manufacturing comes just days after Nvidia reportedly managed to avoid the imposition of export controls on its H20 chip, the most powerful chip it produces that can be legally exported to China: Two sources told NPR that the H20 walkback followed Huang's attendance at a $1 million-per-person dinner at Trump's Mar-a-Lago resort.
2025 games: This year's upcoming releases
Best PC games: Our all-time favorites
Free PC games: Freebie fest
Best FPS games: Finest gunplay
Best RPGs: Grand adventures
Best co-op games: Better together
That's according to Videocardz, which cites no sources for the information other than "recent information from AMD board partners", but I don't think the claims are too outlandish. After all, the full Navi 48 GPU in the Radeon RX 9070 XT sports 4,096 shaders, so the smaller chip in the 9060 XT is unlikely to have anywhere near as many.
If what is being claimed is correct, then the 9060 XT will essentially be a 9070 XT hacked in two—you're getting half the number of shaders and a memory bus that's half as wide. Or to put it into numbers: 2,048 Stream Processors (with 128 TMUs and 64 ROPs) and a 128-bit memory bus. That's exactly the same as a Radeon RX 7600 XT but the 9060 XT gains ground by virtue of its 3.2 GHz boost clock, 16% higher than the 7600 XT's.
Regardless of what the final specs are like, AMD is going to be pitching the RX 9060 XT against Nvidia's RTX 5060 Ti, which is looking increasingly likely to come in two VRAM configurations: 8 GB and 16 GB. We've already seen claims that the RX 9060 XT will also have the same option, but the one thing it doesn't have is super-fast VRAM. As AMD still uses GDDR6 for price reasons, you'll only be getting a total of 320 GB/s or so of memory bandwidth.
While we don't know what RAM speed the RTX 5060 Ti will use, one can hazard a guess at the minimum, given that the slowest GDDR7 used so far in the RTX 50-series has been 28 Gbps (RTX 5070 and RTX 5090). If the 5060 Ti uses that, it'll boast 448 GB/s of bandwidth—a hefty 40% more than the RX 9060 XT.
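If you want to check those numbers, it's the same bandwidth arithmetic as always: per-pin speed times bus width, divided by eight. The 20 Gbps GDDR6 speed below is my assumption, inferred from that 320 GB/s ballpark, rather than a confirmed spec:

```python
# Same formula as ever: per-pin speed (Gbps) x bus width (bits) / 8.
# The 20 Gbps GDDR6 figure is an assumption, inferred from the ~320 GB/s
# claim; the RX 9060 XT's real memory speed is unconfirmed.

def bandwidth_gb_s(speed_gbps: float, bus_bits: int) -> float:
    return speed_gbps * bus_bits / 8

rx_9060_xt = bandwidth_gb_s(20, 128)   # rumoured GDDR6 -> 320 GB/s
rtx_5060_ti = bandwidth_gb_s(28, 128)  # 28 Gbps GDDR7 -> 448 GB/s

print(f"Nvidia's bandwidth lead: {(rtx_5060_ti / rx_9060_xt - 1) * 100:.0f}%")  # 40%
```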
To counter this, AMD uses a complex but powerful cache system in its RDNA graphics chips, and like the 7600 XT, I expect the RX 9060 XT will have 32 MB of L3 Infinity Cache to make up for the relatively narrow memory bandwidth.
Of course, what PC gamers are going to care about is the price, availability, and for some, how much power it'll use. If I pop on my wizard hat and stare into my crystal ball, I can take a wild stab in the dark at all of this. Let's say Nvidia launches the RTX 5060 Ti at $375 for an 8 GB version: AMD will almost certainly pitch the RX 9060 XT at less than this, perhaps by as much as $50, but whatever it does, it'll probably be quite close to the 7600 XT's launch price of $329.
Back then MSRPs made sense, but these days I wouldn't be surprised if very few board partners offered anything at sub-$350 and I should imagine there will be a few 9060 XT models reaching close to $499.
Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.
Hopefully, there will be a decent supply of all these cards, especially the more affordable ones. But even if there is, the demand for GPUs is sky-high at the moment so stocks will probably disappear very rapidly for the first few weeks or even months.
As to the power consumption, I reckon it will be north of 230 W. The 7600 XT is a 190 W graphics card, but the 9060 XT is clocked much higher and the GPU houses dedicated matrix units for accelerating the AI-powered FSR 4 upscaling system. The 4,096-shader 9070 XT uses up to 304 W, so it's clear that RDNA 4 loves a decent amount of power.
Until AMD officially launches the Radeon RX 9060 XT, all of this is guesswork and rumour. Whether it has the measure of the RTX 5060 Ti won't be certain until we've run both cards through our full benchmark suite, but I suspect the Nvidia card will be the faster of the two, albeit with a higher price tag. Throw in DLSS 4 and it becomes trickier still, as AMD doesn't have anything yet to counter Nvidia's Multi Frame Generation.
But the way things are at the moment, any semi-decent GPU with a price tag that doesn't require the selling of an organ or three on the black market is going to sell well.
For the record, the card in question was a Zotac RTX 5070 model paired with a Seasonic Focus GX-750 power supply. It's worth noting that it was the cable that suffered unambiguous damage here, not the graphics card or even, seemingly, the power connector or socket.
Translated from the Japanese: "Smoke started coming out of my PC—a first in my life... Honestly, I've seen this sort of thing in online news, but I thought the odds were tiny. Just two seconds after switching the power on, there was a shocking amount of smoke! I've pulled the plug and got the extractor fan running. However many times I recheck the connector, it's secure and pushed all the way in (follow-up video available)." pic.twitter.com/EX1pP5yKFF — April 10, 2025
That's eerily reminiscent of investigations made earlier this year by YouTube creator Der8auer, who found that the power across the multiple wires in both 12V-2×6 and 12VHPWR connectors attached to Nvidia RTX 50-series cards was highly imbalanced.
Der8auer tested the six live wires on these connectors hooked up to an RTX 5090 and found that the power isn't evenly balanced across them. In fact, just one wire was loaded with 250 W, roughly half the total load generated by the 5090.
Consequently, that heavily loaded wire was getting dangerously hot. An RTX 5070 is clearly a much lower-power GPU, rated at 250 W TDP versus the 5090's 575 W. But if a 5070 is pulling all of that 250 W or something close to that over a single wire, well, you get the idea.
As for why this happens at all, in other words why the power isn't more evenly distributed over the cables, I explained back in February, "some AIB RTX 5090 designs include per-pin power sensing, which would presumably stop this kind of power imbalance from happening. But, surprisingly, Nvidia's own FE design apparently does not. Instead, it essentially amalgamates all six live pins into a single power source as soon as it arrives on the GPU's PCB."
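To put rough numbers on the worry, a little Ohm's law goes a long way. The 9.5 A per-pin figure below is the commonly cited rating for these connectors, an assumption on my part rather than anything from this specific incident:

```python
# Why one heavily loaded wire matters: current = power / voltage.
# The ~9.5 A per-pin figure is the commonly cited 12V-2x6/12VHPWR
# rating, assumed here rather than taken from this incident.

RAIL_VOLTS = 12.0
PIN_RATING_AMPS = 9.5  # assumed per-pin rating

def wire_current(watts: float) -> float:
    return watts / RAIL_VOLTS

for watts in (250 / 6, 250):  # evenly shared vs all down one wire
    amps = wire_current(watts)
    print(f"{watts:.0f} W on one wire -> {amps:.1f} A "
          f"({amps / PIN_RATING_AMPS:.0%} of the per-pin rating)")
```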
If RTX 5070 cards typically take the same engineering approach, it's possible some could also be pushing excessive power through a single wire, leading that cable to overheat.
Frankly, none of this seems to make much sense. The whole point of having multiple wires, surely, is to spread the load. To then engineer the connection in a manner that undermines that built-in safety factor seems pretty odd, to say the least.
It's also a bit of a worry that it's a 12V-2×6 power cable implicated here, given its updates were designed to address all those stories of melting 12VHPWR connectors on Nvidia graphics cards dating back to 2022. This is the latest power supply hardware applied to the latest GPU technology.
Of course, this could well be a very isolated incident that doesn't imply a broader problem. But where there's smoke in a high-performance gaming PC, the immediate culprit isn't necessarily the Nvidia GPU itself; those power cables and sockets are certainly suspect.
Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.
The catch is that the card is being offered in the UK, so this is not a US or worldwide opportunity. But for the record, the Gainward GeForce RTX 5070 Python III can be had right now for £518.99 from Overclockers.
The official UK MSRP is currently £529, slightly lower than the original £539 launch MSRP. It's also worth noting that Overclockers is one of the UK's most prominent retailers of PC hardware. So, this is very much a legitimate offering from a well-established vendor.
Gainward RTX 5070 | 12 GB GDDR7 | 6144 shaders | 2512 MHz boost | ?518.99 at Overclockers
The RTX 5070 isn't a huge leap over the RTX 4070, but it is that little bit faster. And it comes with Nvidia's fabulous feature set. This Gainward card slips a little under the UK's ?529 MSRP, which is certainly novel in current graphics card environment of ever-escalating prices.
Now, if this is an appealing deal for an RTX 5070, the question remains whether the RTX 5070 itself is a strong buy. On the downside, it represents relatively little advance over the old RTX 4070. That's why it scored a lowly 61% in our review.
That low score wasn't down to some hideous flaw or lack of features. Viewed outside the prism of pricing, the RTX 5070's silicon is seriously slick and the RTX 50-series as a whole clearly offers the best GPU feature set.
But it also represents so little advance over the last-gen RTX 4070 in both performance and features that it's very hard not to be disappointed. All that said, we are where we are with current GPU technology and pricing and, in the current environment, a 5070 for below MSRP is a strong buy.
That's partly because AMD's Radeon RX 9070 and 9070 XT are selling above MSRP in the case of the former, while the latter is very hard to find at all. So, you can have an RX 9070 non-XT for about £580 in the UK. But it doesn't have as strong a feature set as the 5070.
Anyhow, it's very hard to say where GPU prices go from here. Will non-US pricing continue to slide with more and more options at or below MSRP? Could the US see a huge spike thanks to those supposed 145% tariffs? It's all terribly hard to predict.
"Nvidia recently started to use SK Hynix GDDR7 for the RTX 50 graphics card. Started with RTX 5070 first." — April 8, 2025
Benchlife reports that SK Hynix memory modules began shipping to AIB partners for production at the end of March. Nvidia currently uses Samsung GDDR7 modules in the new cards, so it's unclear if this switch would mean using both suppliers at once for consumer GPUs, or moving over wholesale to SK Hynix memory for the foreseeable future.
Should these reports turn out to be correct, however, that would mean that new graphics card stock (of the RTX 5070, at least) may be on its way at some point in the very near future.
Ifs, buts, and maybes. Still, Nvidia already has agreements with both memory suppliers in place across its portfolio, including Samsung's eight-layer HBM3E memory used for low-tier AI processors sold to the Chinese market and SK Hynix 12-layer HBM3E in its all-singing, all-dancing Blackwell AI products.
If a hold-up in Nvidia's consumer GPU supply chain was down to Samsung memory production, then clearing that bottleneck with an SK Hynix solution would help supply.
Samsung has previously admitted struggling to meet Nvidia's standards in memory production, although this was more about chip performance than satisfying overall supply. One may well affect the other, of course, but that admission related to HBM3E rather than the GDDR7 used in gaming cards.
And then there are tariffs to throw into the equation. Given that the Trump administration seems to change its mind on US tariffs on a daily basis at the moment, it's reasonable to think that a supplier switch might help Nvidia navigate some troubled waters, although both manufacturers primarily operate out of South Korea for GDDR7 production.
Supply chains are complicated things, though, and the great tariff wheel-of-fortune we all appear to be spinning on is more complex still.
Tech manufacturers the world over look to be considering their options at the moment, and the question of who makes what and ships it from where is an ever-present one when producing any product in these troubled times.
Anyway, more GPU supply sounds like a good thing to me. At the very least, an influx of RTX 50-series cards might get the market moving back in the right direction, by which I mean, gamers being able to purchase a new GPU for a reasonable amount of money. Wouldn't that be nice?
Again, on paper. The RTX 5070 Ti was announced at a $750 MSRP, but there's no reference card, meaning that a combination of reasonable demand, relatively low supply, and dubious AIB pricing strategies has resulted in a plethora of GPU options listed at well above that slightly-teeth-clenching figure. The Asus TUF Gaming RTX 5070 Ti OC Edition is no exception—it's currently listed at $1,000/£970 on the Asus website.
So, before we even get started, we have a pricing problem. The RTX 5080 FE has an MSRP of $999, and it's a significantly beefier card. Again, though, you simply can't get hold of one for anywhere near that kind of cash right now. Reviewing a graphics card in this kind of market becomes an exercise in hypotheticals as soon as you start factoring price and availability into the equation. It's tough out there, folks.
Still, what you get for your money is the same GB203 Nvidia chip, the same 8960 CUDA cores, the same 16 GB GDDR7 as the other, similarly-ludicrously-priced RTX 5070 Ti cards on the market. The difference here lies in two things: The boost clocks, and the ancillary components added to the board.
In the case of the former, the Asus GPU has a max rated boost clock of 2,610 MHz compared to the 2,452 MHz max speeds of the standard, non-overclocked RTX 5070 Ti—although, as you'll see when we get to the benchmarks, this particular card boosts way above that.
And in the latter, it's been fitted with a host of what Asus describes as "military-grade components" including upgraded chokes, MOSFETs and capacitors, a phase-change thermal pad, dual-ball fan bearings for its tri-fan cooler, and even a protective PCB coating to resist excess moisture and dust.
It seems Asus takes this whole "TUF" thing rather literally. Good. I like an overbuilt piece of hardware, and this card feels like one the moment you get it in your hands. It's a weighty, satisfying object to hold, feeling closer to a technologically-advanced cinderblock than a graphics accelerator.
Still, you're not going to be holding it for long. If you're anything like me, you're going to be desperate to wedge it in your machine. And wedge, in my case, is the right choice of words. It's a three slot design with 329 x 140 x 62.5 mm dimensions, and in the case of my otherwise reasonably spacious Micro-ATX chassis, requires some Tetris-like trial and error to squeeze in. Measure, then measure again if you plan on picking up one of these.
Anyway, enough hemming and hawing about price, component choices, and size. This is a factory overclocked version of a very speedy card, so let's see how it does when pushed to the limits in our benchmark rig.
The first point of comparison is the MSI RTX 5070 Ti Ventus 3X OC, a card that also comes with a factory overclock—according to MSI, a whole 45 MHz over the standard spec in Performance mode. However, over the course of three Metro Exodus Enhanced runs at 4K Ultra settings, it becomes clear that all of our RTX 5070 Ti samples are happy to range well above that, with the Asus TUF topping the chart with an average clock speed of 2,716 MHz.
Still, it's not much of a clock speed gain compared to the other cards. Given the 24 MHz practical difference between the Asus and the MSI (and the 57 MHz difference between the Asus and the Gigabyte RTX 5070 Ti Eagle OC Ice SFF, also a factory overclocked GPU), you're probably expecting the gaming performance of these three cards to be very similar under real-world conditions. And, well, you'd be right.
At 4K, the Asus is within two to three fps of the MSI and Gigabyte cards in either direction in the average fps results. When it comes to minimums, however, there are some more significant differences to ponder. The TUF GPU manages a seven fps lead over the MSI in Black Myth: Wukong, although it's two fps off the pace when compared to the Gigabyte. In Cyberpunk 2077 at Ultra RT settings, it's neck and neck with the other RTX 5070 Ti models, although a single fps behind the MSI.
It's a similar story at 1440p. Really, given that gaming benchmarks can be somewhat twitchy (although we perform multiple runs of each and record our own results with Nvidia's Frameview tool to get the most accurate numbers), it's remarkable how closely each of these cards perform to each other. Minimum fps figures are always going to show the odd quirk, but averages remain within a six fps range throughout our test suite.
But hey, there's always upscaling and Frame Generation to consider. All of our RTX 5070 Ti samples are on even footing here, and that translates directly to the similarity of results between the three. The major anomaly comes in the form of the RX 9070 XT with FSR enabled in F1 24 and The Talos Principle 2 at Ultra settings, which somehow manages to give even the RTX 5080 a sound thrashing in the averages.
That's a GPU with a $599 MSRP, and it's impressive to see just how closely it matches (and sometimes outright beats) each of our RTX 5070 Ti samples. Are my price comparison woes over? Nope. We haven't seen one at that price for a while, and a quick scan of the listings reveals a smattering of cards for $1,000+. I told you this was going to be tough, didn't I?
But what about power and thermals? All of those high spec capacitors, MOSFETs and more must move the needle somewhat, right? Especially given the size of that massive tri-fan cooler.
Sort of. The Asus card manages a lower average power figure than the MSI—but slightly higher than the Gigabyte, by a whole three watts. It's significantly less power-hungry than the RTX 5080 FE, of course, but then it's significantly slower in the gaming benchmarks, too, like all of our RTX 5070 Ti cards.
Who would have thought a 10752 CUDA core card would use significantly more power and deliver significantly more performance than the card below it in the stack? Me, for one, and I would imagine your good self, too.
Thermal benches tell a slightly happier story for the beefy Asus. Thanks to that thermal pad and a sizable cooler, it runs four degrees cooler than the MSI on average, and manages a six degree cooler peak temperature result. It's the RX 9070 XT that wins the day once more, however, as the Asus Prime model stays remarkably chilled with a 59 °C peak and a 56 °C average.
And then there's productivity, where… are you starting to spot a theme here? Still, one thing to note is that the overbuilt Asus is actually a little off the pace on average across our productivity tests compared to its RTX 5070 Ti brethren. It's by a relatively insignificant amount, though, and I'd say the results are still well within margins of error.
PC Gamer test rig
CPU: AMD Ryzen 7 9800X3D | Motherboard: Gigabyte X870E Aorus Master | RAM: G.Skill 32 GB DDR5-6000 CAS 30 | Cooler: Corsair H170i Elite Capellix | SSD: 2 TB Crucial T700 | PSU: Seasonic Prime TX 1600W | Case: DimasTech Mini V2
The last thing to talk about is overclocking. Given that the Asus TUF has good thermal performance and some high-spec ancillary components, on paper it looks like it might stand a better chance of hitting higher overclocked speeds than its competitors.
My benevolent hardware overlord Dave James managed to achieve a stable +450 MHz chip boost in tandem with a +1000 MHz memory overclock on the MSI card, and I'm happy to report that the Asus manages the same. In fact, it's utterly rock solid at this particular magic combination. I've been running the Asus in my personal rig for the past several weeks at these speeds, and it's not flinched once.
Push that chip to +475 MHz, though, and like the MSI, it begins to twitch and crash.
A +450 MHz boost is where it likes to sit, meaning you're not really gaining any meaningful performance over our other overclocked RTX 5070 Ti cards, either. It'll manage the same equivalent boost as the others, but nothing tangibly more in terms of real-world gains that could edge it ahead of the pack.
One thing I can say from personal experience is that it's much more stable than the Colorful RTX 5070 Ti Vulcan OC with its one-click OC VBIOS enabled. Small wins, and all that.
Here's the thing: I like the Asus TUF Gaming RTX 5070 Ti OC Edition, silly name and all. I'm a fan of overbuilt hardware, and it's certainly that. It's also chilled out under load, imperceptible over my CPU cooler when it comes to noise, and is more than capable of delivering a tasty slice of 1440p and 4K performance that makes gaming on it a genuine pleasure.
It's also a relatively stable overclocker, and a card that engenders confidence to do so. I'm not sure whether "military-grade" components really do last all that much longer when a card is pushed to its limits, but there's a sense that Asus has really thought about making this card live up to its TUF moniker, and that's pleasing to see. It may well end up being a little more resilient than other cards in terms of wear and tear, at least.
✅ You don't want to worry about thermals: The Asus TUF Gaming card runs cool and serene, even when pushed hard.
✅ You want a stable overclocker: The Asus is rock solid at the same sort of clock speeds that can trouble lesser-built GPUs.
❌ You don't want to overpay: All RTX 5070 Tis are too expensive right now, and this one is no exception.
❌ You've not got a lot of case room to spare: I've seen bigger, but not by much. That substantial cooler really is a squeeze to fit into many cases.
Is it worth paying $250 more than MSRP for that intangible (and difficult to test) sense, though? No. But then I see RTX 5070 Ti variants listed for $1,000+ on the daily. Even the Gigabyte RTX 5070 Ti is now well above that price, and that's a card that, at $900, our Jacob already thought was far too expensive. He's right, naturally, but it's a poor state of affairs out there for you, the potential buyer.
So really the advice is, if you absolutely must have an RTX 5070 Ti right now, the Asus is potentially the one to go for—as long as you can't find an equivalent GPU for cheaper. It only really makes sense while both the RTX 5070 Ti and RX 9070 XT cards are hovering around the $1,000 mark, and for how much longer that remains true is anyone's guess.
With my own personal funds? I'd wait instead. The GPU market is about as volatile as it's ever been right now, not to mention the continuing economic uncertainty that surrounds it, and who knows how that might change in future. We're expecting more stock at some point, which may push prices down to more palatable levels, but even that seems uncertain.
It's just a terrible time to recommend almost any new GPU given current pricing, and I wouldn't be doing my job properly if I didn't point that out.
Ultimately, it comes down to this: The Asus TUF Gaming RTX 5070 Ti OC Edition is a very good graphics card in the same way a titanium-framed fountain pen is a very good writing device. It's undeniably true, but the real question is, would you pay for any fountain pen, overbuilt or not, at vastly over-inflated prices? Me, right now, with my own cash? I think not.
As spotted by popular hardware leaker harukaze5719, the South Korean National Radio Research Agency (RRA) certification body has listed both 16 GB and 8 GB variants of Gigabyte RX 9060 XT graphics cards (via VideoCardz). These were apparently certified just under a week ago on April 4, 2025.
GV-R9060XTGAMING OC-16GD, GV-R9060XTGAMING OC-8GD, GV-N506TEAGLE OC-16GD, GV-N506TEAGLEOC ICE-16GD, GV-N506TWF2OC-16GD, GV-N506TWF2-16GD, GV-N506TGAMING OC-16GD, GV-N506TAERO OC-16GD, GV-N506TAORUS E-16GD https://t.co/bRIEURH9hm https://t.co/GeRhCQQTj4 https://t.co/XNuWCc8PhS… pic.twitter.com/ezqyZkO90n — April 9, 2025
In addition to this, hawk-eyed harukaze5719 also spotted a bunch of RTX 5060 Ti listings, these being Eagle, Windforce, Aero, Gaming OC, and Aorus models (some OC and non-OC). These are all 16 GB cards, although there are rumoured to be 8 GB versions in the works, as well. A bunch of MSI RTX 5060 Ti listings have been spotted, and these too are all 16 GB cards.
We'd seen lots of talk surrounding the RTX 5060 Ti's specs previously, though, so what's really new here is seeming confirmation of both 8 GB and 16 GB variants of the RX 9060 XT in the wild.
AMD told everyone back in February that more affordable RX 9060 graphics cards will arrive in the second quarter of 2025, though no further details were given at the time (AMD seems to have a habit of keeping its cards close to its chest until right before launch).
The hope, of course, is that the RX 9060 XT can bring something actually affordable to the GPU market, which is currently in—and I'm not exaggerating—an absolute shambles. A circa $300-$350 card to challenge the Intel Arc B580 and some last-gen Nvidia GPUs would be ideal.
And while people like to decry 8 GB of VRAM as not enough these days, if AMD can deliver an 8 GB competitor to the B580, it might—might—be able to do so on the cheap. Then we'd have all that RDNA 4 goodness, plus presumably more consistent drivers and gaming performance than the Intel card, all for a nice price.
This is assuming that AMD prices the RX 9060 XT cards to undercut Nvidia's RTX 5060 Ti, of course, as the RX 9070 XT undercut the RTX 5070 Ti. And it's assuming there's stock there to boot, which currently doesn't seem massively promising given only two RX 9060 XT cards have been spotted but a bunch of RTX 5060 Ti cards have.
There's also a very large and smelly elephant in the room: it's currently very difficult to find even AMD cards at close to MSRP, making the whole 'AMD cards are a better value proposition' argument something of a moot point.
Our Dave goes over all this in his AMD RX 9070 XT vs Nvidia RTX 5070 Ti comparison, and the essential point is: there's little in it once the market becomes such a mess.
Here's hoping some entry-level cards can straighten things out. We don't ask for much, really, do we? Just something moderately affordable and in stock. Fingers crossed, but my breath certainly is not held.
It all started with this post about one lucky poster stumbling upon an RTX 5080 card discounted down to $896 from $1,280. The Reddit user found the card with the fabled yellow price sticker in the PC section of their local Walmart. Store employees explained to the poster that it was a return from the day before and that they'd been "placing bets [on] how fast it would sell" since putting it back on the shelf. But rather than being a one-off lucky find, this might actually be a 'one weird trick.'
A few days later, a different poster shared their spoils—plus a little bonus insight suggesting this might be somewhat replicable. Inspired by the aforementioned RTX 5080 post earlier that week, this Redditor checked out the PC components cabinet at their nearest Walmart and picked up an RTX 5070 card for the discounted price of $515. Not only that, but the box was still completely sealed.
So, what's going on? The short answer is 'returns.' Basically, due to the outsized demand for the RTX 50-series cards making things like in-person lottery systems a bad idea, Walmart is using an online-only system. However, when these cards are then returned in-store, some store employees have been putting them back out on the shelf at a discounted price whether the card has been opened or not.
Now, before you rush out to the big Mart of Wal', it's worth noting that there's no guarantee you'll be so lucky. For one, the comments are full of folks trying to do just that and returning to the post empty-handed.
For another, some schemers in the comments are floating the idea of buying a card at full price only to return it, wait for it to be put back on the shelf, and revel in the discount. This presents a few risks, including someone else pipping you at the post on buying back 'your' card and, if enough folks jump on the bandwagon, Walmart potentially wising up—because the entire scheme could be considered fraud.
All in all, there's gotta be an easier way to get a decent price on one of these RTX 50-series cards. Speaking of, may I suggest our very own Where to Buy lists for the RTX 5090 as well as the RTX 5080 card as a starting point? For a perhaps slightly more reasonable price point, you could always get a brand-new laptop with a RTX 50-series card instead. Happy hunting!
This is the Zephyr GeForce RTX 4070 Sakura Snow X, as shown off on Bilibili (via VideoCardz). And yes, that's a Chinese video site because, unfortunately, it looks like this might be a China-only card, for now at least.
It's also not the first we've seen of the single-fan RTX 4070 Sakura, as Zephyr launched one last year, and before that there were single-fan concoctions for other cards such as the Zephyr Sakura & Snow GeForce RTX 3060.
This new one is slightly bigger than last year's, has a bigger fan, and ditches the pink and purple colour scheme for a white and silver one, which will arguably have broader aesthetic appeal.
It comes in at just 178 mm x 128 mm, and what's especially nice about it—to my eyes, at least—is that the I/O bracket forms a cohesive part of the shroud, which is another difference between this version and the first. It looks incredibly smart.
If you're like me, however, you've probably already noticed that the fan is off-centre, which doesn't bother me with larger cards, but with one so small my mind is screaming at me: Why are you not central? I know, I know, it's probably placed right above the GPU, but that does little to hush my perfectionist inner demon.
The card is, of course, designed to fit in Mini-ITX builds, even though it's actually a dual-slot card. And I suppose you wouldn't have to shrink a card down to a single-fan form factor for that to be the case, but every little helps when you have little space to work with in your build.
Judging from the screenshots shown in the company's Bilibili video, the slight increase in shroud and fan size over last year's version translates to a slight reduction in temperature during a FurMark stress test.
Every little helps with a card so… little, I suppose. If this card makes its way to the western market, it looks like a solid choice for SFF system builders, and the RTX 4070 is still a pretty decent performer in today's games. There certainly aren't many RTX 50-series alternatives for SFF builds outside of the impossible to find FE cards right now, and with prices as they are, opting for a previous-gen card might not be an awful idea anyway.
Most of those issues were eventually resolved via a series of patches, but like so many big-budget, mega-graphics games, if you fire it up at 4K on Ultra settings, the game will happily let you use more VRAM than you actually have. The TLOU1 screenshot below is from a test rig using an RTX 3060 Ti, with 8 GB of memory, showing the built-in performance HUD; I've confirmed that RAM usage figure with other tools and the game is indeed trying to use around 10 GB of VRAM.
So when I began testing The Last of Us Part 2 Remastered a couple of weeks ago, the first thing I monitored after progressing through enough of the game was the amount of graphics memory it was trying to allocate and actually use. To do this, I used Microsoft's PIX on Windows, a tool for developers that lets them analyse in huge detail exactly what's going on underneath their game's hood, in terms of threads, resources, and performance.
To my surprise, I discovered two things: (1) TLOU2 doesn't over-eat VRAM like Part 1 did and (2) the game almost always uses 80% to 90% of the GPU's memory, irrespective of what resolution and graphics settings are being used. You might find that a little hard to believe but here's some evidence for you:
The screenshots below of PIX show the amount of GPU local and non-local memory being used in TLOU2, in a CyberPowerPC Ryzen 7 9800X3D rig, using an RTX 5080 and RTX 3060 Ti graphics card. The former has 16 GB of VRAM, whereas the latter has 8 GB of VRAM. In both cases, I ran the game at 4K using maximum quality settings (i.e. the Very High preset, along with 16x anisotropic filtering and the highest field of view), along with DLAA and frame generation enabled (DLSS for the 5080, FSR for the 3060 Ti).
Note that in both cases, the amount of local memory being used doesn't exceed the actual amount of RAM on each card—even though they're both running with the same graphics settings applied. Of course, that's how any game should handle memory but after the TLOU1 debacle, it was good to see it all resolved for Part 2.
If you look carefully at the PIX screenshots, you'll notice that the RTX 3060 Ti uses more non-local memory than the RTX 5080, specifically 4.25 GB versus 1.59 GB. Non-local, in this instance, refers to system memory, and the thing using that chunk of RAM on the GPU's behalf is the game's asset streaming system. Since the 3060 Ti only has 8 GB of VRAM, its streaming buffer needs to be larger than the RTX 5080's.
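That local/non-local split isn't PIX magic, by the way: any application can query the same figures through DXGI. Here's a minimal sketch of my own (not anything from the game), assuming Windows 10 or later and the first enumerated adapter:

```cpp
// Minimal sketch: query the same local (VRAM) and non-local (system RAM)
// GPU memory figures that PIX reports, via DXGI. Link against dxgi.lib.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter1;
    if (factory->EnumAdapters1(0, &adapter1) == DXGI_ERROR_NOT_FOUND) return 1;

    ComPtr<IDXGIAdapter3> adapter;  // IDXGIAdapter3 adds the memory queries
    if (FAILED(adapter1.As(&adapter))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO local{}, nonLocal{};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

    printf("Local (VRAM):       %.2f GB used of %.2f GB budget\n",
           local.CurrentUsage / 1e9, local.Budget / 1e9);
    printf("Non-local (system): %.2f GB used of %.2f GB budget\n",
           nonLocal.CurrentUsage / 1e9, nonLocal.Budget / 1e9);
    return 0;
}
```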
During the gameplay loop I carried out to collate this information, the RTX 5080 averaged 9.77 GB of local memory usage and 1.59 GB of non-local usage, for a total of 11.36 GB. In the case of the RTX 3060 Ti, the figures were 6.81 and 4.25 GB respectively, with that totalling 11.06 GB.
Why aren't they exactly the same? Well, the 3060 Ti was using FSR frame generation, whereas the 5080 was running DLSS frame gen, so the few hundred MB of memory usage between the two cards can be partially explained by this. The other possible reason for the difference is that the gameplay loops weren't identical, so for the recording, the two setups weren't pooling exactly the same assets.
Not that it really matters, as the point I'm making is that TLOU2 is an example of a game that's correctly handling VRAM by not trying to load up the GPU's memory with more assets than it can possibly handle. It's what all big AAA mega-graphics games should be doing, and the obvious question to ask here is: why aren't they?
Well, another aspect of TLOU2 I monitored might explain why: the scale of the CPU workload. One of the test rigs I used in my performance analysis of The Last of Us Part 2 Remastered was an old Core i7 9700K with a Radeon RX 5700 XT. Intel's old Coffee Lake Refresh CPU is an eight-core, eight-thread design, and no matter the settings I used, TLOU2 had the CPU core utilization pinned at 100% across all cores, all the time.
Even the Ryzen 7 9800X3D in the CyberPowerPC test rig was heavily loaded up, with its sixteen logical cores (i.e. eight physical cores handling two threads each) being utilized heavily—not to the same extent as the 9700K, but far more than in any game I've tested of late.
TLOU2 generates a lot of threads to manage various tasks in parallel, such as issuing graphics commands and compiling shaders, but there are at least eight threads that are dedicated to DirectStorage tasks. At this point, I hasten to add that all modern games generate way more threads than you ever normally notice, so there's nothing especially noteworthy about the number that TLOU2 is using for DirectStorage.
The above PIX screenshot shows these particular threads across 80 milliseconds' worth of rendering (basically a handful of frames) and, while many of the threads are idle in this period, two DirectStorage queues and the DirectStorage submit threads are relatively busy pulling up assets (or possibly sending them back, so to speak).
Given that it's not possible to disable the use of DirectStorage and the background shader compilation in TLOU2, it's hard to tell just how much these workloads contribute to the heavy demand on the CPU's time, but I suspect that none of it is trivial.
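For the curious, this is roughly the shape of those DirectStorage queues in code, using Microsoft's public DirectStorage API. To be clear, this is a minimal sketch of the standard API pattern, not code from TLOU2; the StreamAsset helper and its parameters are hypothetical, and error handling is stripped out for brevity:

```cpp
// A minimal sketch of the public DirectStorage API pattern: create a
// queue, enqueue a file-to-GPU-buffer request, then Submit(). Link
// against dstorage.lib from the DirectStorage SDK.
#include <d3d12.h>
#include <dstorage.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void StreamAsset(ID3D12Device* device, IDStorageFile* file,
                 ID3D12Resource* gpuBuffer, UINT32 sizeBytes) {
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;  // reading from disk
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    DSTORAGE_REQUEST request{};
    request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source      = file;       // the open asset file
    request.Source.File.Offset      = 0;
    request.Source.File.Size        = sizeBytes;
    request.Destination.Buffer.Resource = gpuBuffer;  // lands straight in VRAM
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = sizeBytes;

    queue->EnqueueRequest(&request);
    queue->Submit();  // a fence or event would normally signal completion
}
```

A real engine would keep long-lived queues and batch many requests per Submit() rather than building a queue per asset, which is presumably where those dedicated threads come in.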
However, I recognise the biggest programming challenge is just making all of this work smoothly and correctly synchronise with the primary threads, and that's possibly why most big game developers leave it to the end user to worry about VRAM usage rather than creating an asset management system like TLOU2's.
The Last of Us Part 1, like so many other games, includes a VRAM usage indicator in its graphics menu. This is relatively easy to implement, although making it 100% accurate is harder than you might think.
At the risk of this coming across as flamebait, let us consider for a moment whether TLOU2's asset management system is a definitive answer to the '8 GB of VRAM isn't enough' argument. In some ways, 8 GB is enough memory, because I didn't run into it being a limit in TLOU2 (and I've tested a lot of different areas, settings, and PC configurations to confirm this).
Just as in The Last of Us Part 2, any game doing the same thing will need to stream more assets across the PCIe bus on an 8 GB graphics card than on a 12 or 16 GB one, but if that's handled properly, it shouldn't affect performance to any noticeable degree. The relatively low performance of the RTX 3060 Ti at 4K Very High has nothing to do with the amount of VRAM and everything to do with the number of shaders, TMUs, and ROPs, and the memory bandwidth.
If you've read this far, you may be heading towards the comments section to fling various YouTube links my way showing TLOU2 stuttering or running into other performance problems on graphics cards with 8 GB or less of VRAM. I'm certainly not going to say that those analysis pieces are all wrong and I'm the only one who's right.
Anyone who's been in PC gaming long enough will know that PCs vary so much—in terms of hardware and software configurations and environments—that one can always end up having very different experiences.
Justin, who reviewed The Last of Us Part 2, used a Core i7 12700F with an RTX 3060 Ti, and ran into performance issues at 1080p with the Medium preset. I used a Ryzen 5 5600X with the same GPU and didn't run into any problems at all. Two fairly similar PCs, two very different outcomes.
Dealing with that kind of variance is one of the biggest headaches PC game developers face, and it's probably why we don't see TLOU2's cool asset management system in heavy use—figuring out how to make it work flawlessly on every possible PC configuration that can run the game is going to be very time-consuming. That's a shame because, while it's not a flawless technique (it occasionally fails to pull in assets at the right time, for example, leading to some near-textureless objects), it does a pretty damn good job of getting around any VRAM limitations.
I'm not suggesting that 8 GB is enough, period, because it's not. As ray tracing and neural rendering become more prevalent, and with the potential for running AI NPCs on the GPU, the amount of RAM a graphics card has will only become a more important commodity. Asset streaming is also effectively useless if your entire view in a 3D world is packed full of objects and hundreds of ultra-complex materials, because those resources need to be in VRAM right then and there.
But I do hope that some game developers take note of The Last of Us Part 2 and try to implement something similar. With both AMD and Nvidia still producing 8 GB graphics cards and mobile chips (albeit in the entry-level sector), VRAM is going to be a limiting factor for many more years to come, and the argument over whether 8 GB is enough will continue to run for just as long.
Under the inauspicious name of 'United States Patent Application 20250005842', AMD submitted a patent application for neural network-based ray tracing in June 2023, with the rubber stamp of approval hitting in January of this year. The document was unearthed by an Anandtech forum user (via DSOGaming and Reddit) along with a trove of other patents, covering procedures such as BVH (bounding volume hierarchy) traversal based on work items and BVH lossy geometry compression.
The neural network patent caught my attention the most, though, partly because of when it was submitted for approval and partly because the procedure undoubtedly requires the use of cooperative vectors—a recently announced extension to Direct3D and Vulkan that lets shader units directly access matrix or tensor units to process little neural networks.
What the process actually does is determine if a ray traced from the camera in a 3D scene is occluded by an object that it intersects. It starts off as a BVH traversal, working through the geometry of the scene, checking to see if there's any interaction between the ray and a box. Where there's a positive result, the process then "perform(s) a feature vector lookup using modified polar coordinates." Feature vectors are a machine-learning thing; a numerical list of the properties of objects or activities being examined.
The shader units then run a small neural network, with the feature vectors as the input, and a yes/no decision on the ray being occluded as the output.
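To make that last step concrete, here's a minimal sketch of the general technique (emphasis on sketch): the patent doesn't publish layer sizes, weights, or feature definitions, so every number below is a placeholder of mine, and this is not AMD's implementation:

```cpp
// A tiny MLP takes the feature vector from the BVH hit lookup and returns
// a yes/no occlusion decision. All sizes and weights are placeholders.
#include <array>

constexpr int kFeatures = 8;   // hypothetical feature-vector width
constexpr int kHidden   = 16;  // hypothetical hidden-layer width

float hiddenWeights[kHidden][kFeatures];  // would be trained offline
float hiddenBias[kHidden];
float outWeights[kHidden];
float outBias;

bool IsRayOccluded(const std::array<float, kFeatures>& features) {
    float hidden[kHidden];
    for (int i = 0; i < kHidden; ++i) {
        float sum = hiddenBias[i];
        for (int j = 0; j < kFeatures; ++j)
            sum += hiddenWeights[i][j] * features[j];
        hidden[i] = sum > 0.0f ? sum : 0.0f;  // ReLU activation
    }
    float logit = outBias;
    for (int i = 0; i < kHidden; ++i)
        logit += outWeights[i] * hidden[i];
    // A sigmoid thresholded at 0.5 is equivalent to checking logit > 0.
    return logit > 0.0f;
}
```

The appeal of cooperative vectors is that a network this small can run inside the shader itself, fed straight from the traversal, rather than as a separate pass.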
All of this might not sound like very much and, truth be told, it might not be, but the point is that AMD has clearly been invested in researching 'neural rendering' long before Nvidia made a big fuss about it with the launch of its RTX 50-series GPUs. Of course, this is normal in GPU research—it takes years to go from an initial chip design to having the finished product on a shelf, and if AMD only started doing such research now, it'd be ridiculously far behind Nvidia.
And don't be fooled by the submission date of the patent, either. June 2023 is just when the US Patent Office received the application, and there's simply no way AMD cobbled it all together over a weekend and sent it off the following week. In other words, Team Red has been studying this for many years.
Back in 2019, an AMD patent for a hybrid ray tracing procedure surfaced, which was submitted in 2017. While it's not easy to determine whether it was ever utilized as described, the RDNA 2 architecture used a very similar setup and it launched in late 2020.
Patent documents don't ever concern themselves with actual performance, so there's no suggestion that you're going to see a Radeon RX 9070 XT running little neural networks to improve ray tracing any time soon (mostly because it's down to game developers to implement this, even if it is faster). But all of this shows that AI is very much going to be part and parcel of 3D rendering from now on, even if this and other AI-based rendering patents never get implemented in hardware or games.
Making chips with billions more transistors and hundreds more shader units is getting disproportionately more expensive compared to the actual gains in real-time performance. AI offers a potential way around the problem, which is why the RDNA 4 GPU sports dedicated matrix units to handle such things.
At the end of the day, it's all just a bunch of numbers being turned into pretty colours on a screen, so if AI can make games run faster or look better (or, better still, do both), then it's not hard to see why the likes of AMD are spending so much time, effort, and money on researching AI in rendering.
The green team has had it largely its own way for the past few generations, with the red side of the GPU divide only really able to compete if you take ray tracing and upscaling features out of the equation.
Look back to the RTX 40-series vs the RX 7000-series, and AMD itself will tell you that the only cards that were particularly well received were its more affordable RX 7800 XT and RX 7700 XT. And they were largely fighting on price, and maybe a little on performance and some VRAM numbers.
This time around, while AMD's latest GPU is theoretically considerably cheaper than its Nvidia rival, it's not just about the price. That's become evident as scarcity has pushed the two far closer together in price, and yet there is still a lot of positive feeling towards the Radeon card. The price/performance numbers are seriously impressive right now, and not just with the usual caveats.
With the RX 9070 XT AMD has closed the gap on ray tracing performance, has upped its upscaling game, and is giving Nvidia a tough time. But the RTX 5070 Ti definitely has some aces up its sleeve and there are some very key points where the GeForce crew can point to huge frame rate leads, so as the price gap tightens, who should come out on top?
As far as a head-to-head goes, in specs terms things are pretty darned close between the two cards. They both sport their respective manufacturers' latest GPU architecture—on the AMD side that's RDNA 4 and on the Nvidia it's the RTX Blackwell arch—and both contain very much mid-range graphics silicon.
They're also relatively close in terms of the raw GPU specs. The chips themselves are very similar in size—365.5 mm² against 378 mm²—and contain a whole lot of nominally 4 nm transistors, too. There does seem to be a large disparity between their relative shader counts, but it's this spec in particular where it's impossible to truthfully do an apples-to-apples comparison. They are very different fruits, even if they deliver roughly the same end result.
There's also a clear divide between their rated boost clock speeds. Though, again, this is more of an on-paper distinction because, in practice, I've seen the RX 9070 XT and RTX 5070 Ti cards I've been testing both hovering around the 2,600 - 2,700 MHz mark, so essentially it's a level playing field on that front.
Memory is probably the part with the biggest disparity, but only in terms of raw memory bandwidth. They're both running on a 256-bit memory bus, and they're both sporting 16 GB of video memory, but the RTX 5070 Ti comes with GDDR7 memory running at 28 Gbps, while the RX 9070 XT is using older 20 Gbps GDDR6 VRAM. That means there's 40% more memory bandwidth available to the Nvidia chip. In general, this makes the GeForce GPU better suited to high resolution gaming.
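That 40% figure falls straight out of the bus width and data rate, if you want to check the maths:

```cpp
// Memory bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
#include <cstdio>

int main() {
    const double busBits   = 256.0;  // both cards use a 256-bit bus
    const double gddr7Gbps = 28.0;   // RTX 5070 Ti
    const double gddr6Gbps = 20.0;   // RX 9070 XT

    double nvidiaGBs = busBits / 8.0 * gddr7Gbps;  // 896 GB/s
    double amdGBs    = busBits / 8.0 * gddr6Gbps;  // 640 GB/s

    printf("RTX 5070 Ti: %.0f GB/s | RX 9070 XT: %.0f GB/s | delta: +%.0f%%\n",
           nvidiaGBs, amdGBs, (nvidiaGBs / amdGBs - 1.0) * 100.0);
    return 0;
}
```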
Then there's the power draw. Both cards sit around the 300 W mark, with the RTX 5070 Ti coming in at 285 W and the RX 9070 XT with a TBP of 304 W. Though, again, the reality is a little different in practice, with the average power draw of the two cards actually coming out at 303 W and 352 W respectively.
The relative pricing of the two cards should be the key point in this whole head-to-head, but such is the state of PC gaming hardware in 2025 that the actual on-shelf pricing of the RX 9070 XT and RTX 5070 Ti right now are disturbingly similar.
Disturbing, because they should be miles apart. The MSRP for the RTX 5070 Ti is $750, which is already a lot for a mid-range GPU, while the RX 9070 XT launched with a $599 MSRP that should have taken the Nvidia competitor out at the knees. With a $150 price delta, and nothing in it in terms of specs, it would be tough indeed to come down on the side of the Nvidia GPU.
And that might indeed be the situation in the future when stock will hopefully normalise and pricing trends down towards MSRP for both cards.
That is a heavy might, however, because manufacturers I spoke to admitted AMD's MSRP for the RX 9070 XT was only ever meant to be a launch price and that the card partners had always planned to hike prices the instant release day was done. So, whether we'll actually see any $599 RX 9070 XT cards in the future is looking pretty unlikely.
To be fair, the same can be said regarding the RTX 5070 Ti, as the scarcity of RTX 50-series GPUs in the retail market means that prices remain ludicrously high, and it's going to be a long time before prices come down. Again, I would be surprised to see that $750 MSRP being achieved at any point this year.
Right now, in the spring of 2025, the best pricing in the US is over at Amazon, where cards are locked behind Prime member deals. Even then, the best prices have the RX 9070 XT sitting above the RTX 5070 Ti's original $750 MSRP, with the RTX 5070 Ti itself at $900. For Prime members, then, the $150 price delta remains, but for everyone else we're looking at $900+ price tags for both cards.
For the UK, things are definitely a bit better, though the RX 9070 XT is still well above MSRP, while the RTX 5070 Ti can be had for very close to, or at its own MSRP. Either way, things are essentially almost level pegging when it comes to real-world pricing right now.
This is the really important part of the conversation, because the relative pricing of the two cards means nothing if the gaming performance isn't there. But, thankfully for AMD, it absolutely is. The new RDNA 4 GPUs have been purpose built for a price-to-performance ratio that really takes the fight to Nvidia, and that's what they do.
AMD has also filled in the gaps where the Radeon GPUs of architectures past have fallen down compared with the GeForce alternative. And that means the ray tracing performance of the RX 9070 XT has been hugely improved and we're now seeing the first shoots of machine learning being brought into AMD FidelityFX Super Resolution (FSR) upscaler with the new 4.0 version.
In pure, native performance terms it's only really the super heavy ray tracing of Cyberpunk 2077 that causes any big delta in gaming performance at 4K or 1440p. Otherwise there are barely a handful of frames per second in it. And even that big difference starts to fade once we start talking about the real-world performance which brings in upscaling and frame generation.
If we're just mixing in standard 2x frame generation for both cards, and a DLSS or FSR Quality setting, then even Cyberpunk 2077 is a level race. In fact, the AMD card comes out ahead in that and F1 24, which both use a bunch of ray tracing effects.
That is just on the raw number side of things, however, and it is important to note that while AMD does have FSR 4 now running on machine learning algorithms, there are still precious few games which support it. That means, with Cyberpunk 2077 and F1 24, those numbers are all using the original FSR 3 and frame generation features which definitely don't have the visual quality of Nvidia's competing DLSS and more advanced Frame Generation tech.
In terms of image quality alone, Nvidia's upscaling and interpolating technology is still ahead of AMD's. And that's before we get into the real magic trick of the RTX Blackwell generation GPUs: Multi Frame Generation.
Not just a clever name, MFG is essentially the same feature, but instead of adding one extra AI-generated frame in between each actually rendered one, the RTX 50-series cards can now generate up to three additional frames. This inevitably brings up the debate about fake frames and whether this is actually just frame smoothing, but the end result is a game that, however you feel about what those resulting fps metrics say, feels smoother and faster than without it.
At the extremes, when the input frame rate is very low to start with, pushing to 4x MFG does add a ton of extra latency into games and that can make it feel sluggish even if the frame rate is presented as high. But, if the DLSS performance is delivering near 60 fps, then Multi Frame Generation can make a huge difference to the feel of a game.
The issue is that, like FSR 4, it is not in every game on the market. Unlike FSR 4, however, it is easily back-ported into games with existing Frame Gen support—though notably hasn't been ported into all of them—and there's a good chance that every big game release of the next year that comes with Nvidia's Frame Gen tech will come with MFG as standard.
When it comes to general GPU performance, the RTX 5070 Ti is the more efficient graphics card, offering lower power usage in general and therefore a much higher performance-per-watt metric. The RX 9070 XT, however, at least in the SKUs we're testing against each other here (the two Asus designs), runs a little cooler in practice.
Arguably the biggest delta between the performance of these two graphics cards is when we start to talk about the creator impact. The Nvidia GPU, largely by virtue of the support in the creative industry the green team engenders, has an impressive lead when it comes to both the AI image generation benchmark and the more traditional Blender 3D rendering tests.
This is not something I'm used to talking about with modern GPUs. Overclocking over the past few GPU generations has been more about hitting incrementally higher, but utterly arbitrary frequency numbers. The end result has often been marginally higher clock speeds but very little actual performance gain for your extra heat and power expended.
That's largely been down to the dynamic clock speeds of modern GPUs, where they are designed to run as fast as either their thermal or power budgets will allow. This is why rated clock speeds rarely bear any relation to what a graphics card's IRL frequency will actually be.
Those older GPUs are already running effectively as fast as their silicon allows, with only very marginal wiggle room.
This generation is different, across both Nvidia and AMD's latest architectures. I've had great success overclocking both the RX 9070 XT and the RTX 5070 Ti, though both benefit from different methods. With AMD's chips it's more about undervolting and giving the GPU more thermal headroom to boost its own clocks, while with Nvidia it's just about upping the clock speed offset as far as you can go.
I've found both of these GPUs will take a whole lot of overclocking, and the difference can mean an extra 10% higher frame rates with very little impact on either temperature or power draw. That makes it absolutely worth doing, but in this area there isn't much difference between just how far you can push the two GPUs.
I have seen a greater delta in the extra performance you can squeeze out of the Nvidia GPU compared with the AMD, but it's only by a percentage point here and there. Though it is interesting that you can push an overclocked RTX 5070 Ti to the point where it can almost match an RTX 5080's native frame rates.
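For what it's worth, that clock-offset approach on the Nvidia side doesn't even need a GUI tool. The sketch below uses NVML's VF-offset calls, which I believe recent drivers expose; treat the exact entry points, units, and privilege requirements as my assumptions, and note that most people will sensibly just use MSI Afterburner:

```cpp
// Minimal sketch: apply a +450 MHz core / +1000 MHz memory offset via
// NVML. Assumes a recent driver exposing the VF-offset entry points and
// sufficient privileges; how the memory figure maps to effective data
// rate may differ from what GUI tools display. Link against nvml.lib.
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit_v2() != NVML_SUCCESS) return 1;

    nvmlDevice_t device;
    if (nvmlDeviceGetHandleByIndex_v2(0, &device) != NVML_SUCCESS) return 1;

    // Offsets are in MHz, relative to the card's stock VF curve.
    nvmlReturn_t core = nvmlDeviceSetGpcClkVfOffset(device, 450);
    nvmlReturn_t mem  = nvmlDeviceSetMemClkVfOffset(device, 1000);

    printf("core offset: %s\nmemory offset: %s\n",
           nvmlErrorString(core), nvmlErrorString(mem));

    nvmlShutdown();
    return 0;
}
```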
This is a surprisingly tricky final verdict to make. It certainly wasn't going to be when I first reviewed the AMD RX 9070 XT. The simple calculation was that you were essentially getting the same sort of gaming performance as the RTX 5070 Ti but for at least $150 less. But scarcity has created unforgivably high pricing for both these impressive mid-range GPUs.
So my verdict kind of comes in two parts. The first is about which card I'd recommend right now. At the moment, outside of Prime member deals, there is actually precious little difference in price between the RX 9070 XT and the RTX 5070 Ti—both are frankly offensively priced at between $900 and $1,000.
But if you had to buy one, then if prices are equal I'd pick the Nvidia card. Now, that might not be a popular opinion, and certainly not a fashionable one, but the GeForce GPU is more efficient, will still perform better in heavily ray traced games, has the edge at 4K resolutions, and DLSS still retains better visual fidelity especially when FSR 4.0 is in so few games.
And it comes with Multi Frame Gen. I understand folk are still not sold on 'fake frames' but even if you just take them as smoothing out the experience, unless you're pushing 4x frame gen on a game that's scraping 20 fps natively, then you are getting a far better experience in compatible games than with the Radeon card.
This is not going to be the situation forever, though. Some day there will be cards available at closer to MSRP, and when stock normalises it will be hard to look past a considerably cheaper RX 9070 XT. Away from the current offensive GPU pricing, the RDNA 4 GPU is so competitive with the RTX 5070 Ti, and its price/performance numbers so good, that the smart money will pick the AMD card all day long.
It's a great gaming GPU, it can overclock/undervolt like a champ, and you don't need a fancy factory-overclocked card to do it, either. AMD has made a fantastic card that deserves to do well, if only its price would properly start to normalise.
AMD's recently launched Radeon RX 9000-series comprises just two models at the moment: the RX 9070 and the RX 9070 XT. The latter is the 'full die' version, sporting 64 CUs (Compute Units), a boost clock of 2.97 GHz, a power limit of 304 W, and 16 GB of VRAM. The RX 9070 is very similar: eight fewer CUs and the same amount of VRAM, but with a much lower 220 W power limit and a boost clock of 2.52 GHz.
Since there's only a $50 difference in the cards' MSRPs, PC gamers have been snapping up the XT version en masse, leaving them in short supply and forcing their prices sky-high. But one curious modder has discovered that it's possible to flash the Asus Prime RX 9070 OC BIOS with the one from the Prime RX 9070 XT OC (via 3D Center), giving the weaker model the same clock and power limits as its bigger brother.
The BIOS in question can be found in TechPowerUp's BIOS database, along with the software tool required to do the job. Now, I hasten to add that any RX 9070 owner reading this who might be tempted to try the same thing should note that this BIOS will almost certainly not work on any other model, and even if you do happen to have an Asus Prime RX 9070, you need to be very careful about doing the whole procedure.
As to why you should even think about it in the first place, the Asus RX 9070 OC has an outright maximum clock speed of 2.61 GHz, whereas the XT version can go all the way up to 3.03 GHz—that's a 16% increase, substantially more than what you'd normally get with a manufacturer's 'overclocked' model. And to help the card sustain that higher speed, the power limit is also increased to 304 W (38% more).
The modder reckons that his flashed RX 9070 is faster than a stock 9070 XT, but I'd want to see a lot more tests done to be certain of that because while the latter has a marginally slower boost clock, it does sport 512 more shader units.
Even though the Asus Prime RX 9070 only has two 8-pin power connectors to the XT variant's three, that's enough to handle the 304 W peak power consumption, so at least that's one thing we don't need to be concerned about. However, the flashed RX 9070 is apparently a little unstable on the idle desktop due to the higher clock speeds, and that is something to think twice about.
This is all very much classic AMD, and down to its rule-of-two release cadence where it generally offers two cards based on the same GPU at launch. The whole exercise reminds me very much of AMD's Vega 56 days. Everyone wanted the full-fat Vega 64 card, but just as with the 9070 XT, it was in high demand, resulting in low stock and high prices. However, the Vega 56 could be flashed with a Vega 64 BIOS, increasing its clock and power limits to the point where you could get a 64 card for 56 money.
It was the same around the very first RDNA cards, too. The RX 5700 XT was the top card, but the RX 5700 used the same chip and could be flashed with the top-spec card's BIOS to unlock its power and frequency limits—again, giving you essentially high-end GPU performance from one tier down.
Ultimately, though, you have to ask yourself whether it's worth doing for the sake of a 16% clock increase. That might sound like a lot, but a game running at, say, 50 fps would only go up to 58 fps, and that's only if the game's performance were 100% limited by the GPU's clock speed. If you don't touch the VRAM clocks, for example, then you're unlikely to see as big an increase.
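That caveat is doing a lot of work, so here's a quick way to see how the gain shrinks when only part of the frame time actually scales with core clock (the fractions below are picked purely for illustration):

```cpp
// Amdahl-style estimate: fps gain from a 16% core-clock uplift when only
// a fraction f of each frame's time actually scales with the core clock.
#include <cstdio>
#include <initializer_list>

int main() {
    const double baseFps = 50.0;
    const double clockGain = 1.16;  // the 16% BIOS-flash clock increase

    for (double f : { 1.0, 0.75, 0.5 }) {  // illustrative fractions only
        double frameTime = 1.0 / baseFps;
        double newFrameTime = frameTime * ((1.0 - f) + f / clockGain);
        printf("f = %.2f -> %.1f fps\n", f, 1.0 / newFrameTime);
    }
    return 0;
}
// f = 1.00 -> 58.0 fps, f = 0.75 -> 55.8 fps, f = 0.50 -> 53.7 fps
```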
That all said, we've undervolted AMD's second-string RDNA 4 GPUs ourselves and with only minimal effort—and importantly very little impact on either thermals or power draw—have got an RX 9070 running, on average, 12 to 14% faster at 1440p and 4K. That puts it only around 2% behind the RX 9070 XT without the need to flash or tweak the BIOS and risk bricking your precious GPU hardware.
I used to spend hours experimenting with graphics cards, messing about with voltage mods and the like, but I don't want to deal with the increased heat output (and thus increased fan noise) any more. And while tweaking the voltage curves can counter some of this, it's all a bit too much work in my eyes for not enough gain—especially when I can just change a graphics option or apply upscaling to get an even bigger boost.
Still, I reckon this news will be lapped up by many an RX 9070 owner, and somebody, somewhere, will figure out how to make it all work properly, with no instability issues. After all, this is essentially a free, albeit risky, performance boost, and in today's GPU market, free is a magical word.
AMD's Radeon RX 9070 XT is slimmer pickings, as is the plain vanilla RX 9070. But, again, as I write these words, you can buy the latter, albeit for a bit of a bump over the MSRP.
Palit RTX 5070 | 12 GB GDDR7 | 6144 shaders | 2512 MHz boost | £509.99 at Overclockers
The RTX 5070 isn't a huge step over the RTX 4070, but it is a bit faster. This Palit board is currently hitting Nvidia's MSRP and we can't see prices getting much better than this any time soon, so now could be the time to pull the trigger.
RTX 5070 price check: Scan £539.99
Where to start? That'll be the Nvidia RTX 5070. It's a slightly disappointing GPU versus the last-gen RTX 4070, but at least you can have it right now at its UK MSRP of £510. If that Palit card disappears, there are several others at a whisker over MSRP.
Indeed, the Palit board on offer from Overclockers.co.uk is actually being sold at a tiny discount from Palit's £539 MSRP, though it's not an overclocked card, so it's unclear why it might be worth any more than a reference RTX 5070.
Zotac RTX 5070 Ti | 16 GB GDDR7 | 8960 shaders | 2482 MHz boost | £728.99 at Overclockers
An RTX 5070 Ti at MSRP? Yep! This is a superb graphics card. Think of it as an RTX 4080 with DLSS 4 support, and the price feels particularly decent when you consider the RTX 4080 originally launched at £1,249 in the UK.
Stepping up a tier to the RTX 5070 Ti, it too can be had at the UK's £729 MSRP in the form of the Zotac RTX 5070 Ti Solid. If that one evaporates, the next one up is a fair chunk more at £799.
Go a further tier up and it's perhaps not a huge shock to find that there are no RTX 5080s at UK MSRP. That includes out-of-stock cards, the cheapest of which seems to have sold out earlier today at £979, a touch above the £949 official UK MSRP for the 5080. Anyway, right now £1,028.99 precisely is as good as it gets, for a Zotac Solid RTX 5080.
Finally, what of the mighty RTX 5090? Inevitably, you can't get anywhere near the UK's official £1,889 MSRP. The cheapest right now is a hefty £710 premium at £2,599 for a Gainward card.
As painful as that sounds, good luck buying an RTX 5090 in the US right now at any price. It's totally sold out. So, the UK is getting a better deal on the RTX 5090 for now, even at that elevated price.
But what of AMD GPUs, you cry? As I write, the Radeon RX 9070 XT seems to be sold out, though there are indications that there was some availability earlier today, albeit for £720, and thus well above the UK MSRP of £569.
ASRock RX 9070 | 16 GB GDDR6 | 3584 shaders | 2700 MHz boost | £599.99 at Overclockers
The RX 9070 is a really solid mid-range graphics card, provided you can get one at a sensible price. While this deal is a little on the pricey side, it's still better value for money than Nvidia's RTX 50-series, and FSR 4 upscaling and frame generation are the best AMD tech in years. The only problem is that the RX 9070 XT is available for only a little extra cash.
RX 9070 price check: Ebuyer £609.99 | Scan £628.99
As for the plain vanilla RX 9070, currently there's an ASRock board for £599. That's a fair step up over the £524 MSRP of the non-XT in the UK. But, hey, at least you can buy one.
Generally, this all bodes pretty well. Availability isn't fantastic in the UK, and yet many prices are at, or at least reasonably near, MSRP. So it wouldn't be hard to imagine prices returning fully to normal with a little more availability. And for "normal", read the actual MSRP.
In the meantime, we'd be pretty happy to pull the trigger on the few MSRP or very-near MSRP cards mentioned here. What with the tariff thing, we doubt prices are going to get better, that's for sure.
There's no official word yet as to, well, anything about the RTX 5060 Ti, but according to a post spotted on Board Channels by Videocardz, the pricing is rumoured to be the same as the previous generation RTX 4060 Ti: $399 for an 8 GB version and $499 for a 16 GB card.
Given all the VRAM-related furore created by the 8 GB version of the RTX 4060 Ti, the 16 GB variant seems likely to be the one many gamers will be eyeing up for their next graphics card purchase. However, if this rumour turns out to be accurate, the 16 GB card would face some serious competition from the AMD Radeon RX 9070, and maybe even the RX 9060 expected to launch soon, too.
The former is a 16 GB card we've tested very thoroughly, and found it can deliver RTX 5070-beating performance in many benchmarks. It's also technically available for an MSRP of $549 (if you can find one at that price, although stocks are said to be improving this month), which makes sense when you put it next to Nvidia's mid-range card, but looks like a much better buy when compared to a theoretical $499 RTX 5060 Ti.
If the leaked specs turn out to be correct, the RTX 5060 Ti will have 4,608 CUDA cores, a mere 6% jump over the RTX 4060 Ti. It'll also have a 128-bit memory bus (just like the previous card) and a 180 W TGP, 20 watts more than the GPU it replaces.
Compare that to the $549 RTX 5070's 6,144 CUDA cores and 192-bit memory bus and its performance relative to the RX 9070, and you start to see where the problem may lie.
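If you want to sanity-check those percentages, the sums are simple enough. A quick sketch in Python, using the RTX 4060 Ti's known core count alongside the leaked figures, so treat the outputs as provisional:

```python
# Comparing the leaked RTX 5060 Ti core count against known cards.
rtx_4060_ti_cores = 4352   # known
rtx_5060_ti_cores = 4608   # leaked, unconfirmed
rtx_5070_cores = 6144      # known

gen_jump = (rtx_5060_ti_cores / rtx_4060_ti_cores - 1) * 100
print(f"RTX 5060 Ti vs RTX 4060 Ti: +{gen_jump:.1f}% cores")   # +5.9%, the 'mere 6% jump'

step_up = (rtx_5070_cores / rtx_5060_ti_cores - 1) * 100
print(f"RTX 5070 vs RTX 5060 Ti: +{step_up:.1f}% cores")       # +33.3% for $50 more, on paper
```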
When two mid-range cards are a mere $50 more (at least, on paper) than Nvidia's Ti variant of a budget GPU, and said GPU might not deliver much of a jump in performance over the previous model, then the RTX 5060 Ti looks like a card that may end up being a limited value proposition, to say the least.
Of course, this is all speculation. No pricing, launch dates, or specs have been confirmed, so all we've got to go on is dubious sourcing for now. But as that rumoured launch date looms ever closer, there's a fair bet that retailers and AIBs at the very least might have some idea of what the final MSRP may be.
Last-minute price changes are far from unheard of, though, so we'll just have to wait and see what the pricing ends up being for the RTX 5060 Ti—if it even launches this month at all.
Should the 16 GB variant end up being a $499 GPU, however, and should those CUDA counts prove correct, I reckon Nvidia will have to lean pretty hard on the Multi Frame Generation benefits of the RTX 50-series to shift them in the face of such stiff AMD competition.
But what transpired over the following months not only put me off Radeon cards for a good while, until the 9700 appeared in 2002, but it also set the tone as to how ATI and eventually AMD would fare against Nvidia in the graphics card market.
But let's start with why I was so looking forward to the first Radeon. At that time, I had around 12 graphics cards in a number of PCs—most of which were used for testing. I can't remember everything I had but I do recall an Nvidia Riva TNT2 Ultra and a GeForce 256, a Matrox Millennium G200, and a 3dfx Voodoo 3 3000.
The Radeon DDR made them all look rubbish, though. Its GPU had two pixel pipelines, each sporting three texture units, and with a clock speed of 166 MHz, it could churn out over 300 million pixels and almost 1,000 million texels per second. The GeForce 256 couldn't get anywhere near those figures.
Admittedly, neither could the Radeon itself, as it just didn't have enough memory bandwidth to support those throughputs in actual games, but then no graphics card did at the time. ATI was determined to make its graphics processor more futureproof than Nvidia's and while both companies offered hardware acceleration for vertex transform and lighting (aka hardware TnL), ATI also added vertex skinning and keyframe interpolation into the pipeline.
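For the curious, those throughput figures fall straight out of the pipeline arithmetic. A quick back-of-the-envelope sketch:

```python
# Theoretical fill rates for the original Radeon DDR, from the specs above.
clock_hz = 166e6                 # 166 MHz core clock
pixel_pipelines = 2
tmus_per_pipeline = 3

pixel_fill = clock_hz * pixel_pipelines       # pixels per second
texel_fill = pixel_fill * tmus_per_pipeline   # texels per second

print(f"{pixel_fill / 1e6:.0f} Mpixels/s")    # 332 Mpixels/s, i.e. 'over 300 million'
print(f"{texel_fill / 1e6:.0f} Mtexels/s")    # 996 Mtexels/s, just shy of 1,000 million
```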
Just a few weeks later, Nvidia launched its NV15 processor and within a few days, a GeForce2 GTS arrived on my doorstep. Essentially a pumped-up GeForce 256, the new Nvidia chip sported a much higher clock speed than its predecessor, along with twin texture units in each pixel pipeline. But that was it and the Radeon DDR still looked to be the better product—more features, better tech—even though it offered less peak performance.
Coding legend John Carmack agreed at the time: "The Radeon has the best feature set available, with several advantages over GeForce…On paper, it is better than GeForce in almost every way." When it came to actually using the two cards in games, though, the raw grunt of the GeForce2 GTS put it well ahead of the Radeon DDR, except in high-resolution 32-bit colour gaming.
That's partly because no game was using the extra features sported by the Radeon but it was mostly because the GeForce2's four pixel pipelines with dual texture units (aka TMUs) were better suited for the type of rendering used back then. ATI's triple TMU approach was ideal for any game that did lots of rendering passes to generate the graphics but none were doing this, and it would be years before games did. By then, of course, there would be a newer and faster GPU on the market.
Something else that Nvidia did better than ATI was driver support, specifically stable drivers. I spent more time dealing with crashes, rendering bugs, twitchy behaviour when playing in 16-bit mode, and so on than I ever did with any of my Nvidia cards. The Radeon DDR was also my first introduction to the world of 'coil whine' as that particular card squealed and squeaked constantly. I spent many weeks conducting back-and-forth tests with ATI to try and narrow down the source of the issue but to no avail.
Drivers would continue to be ATI's bugbear for years to come, even after AMD acquired the company in 2006. By then, the brief glory of the Radeon 9700 Pro and 9800 Pro (and the abject misery of the GeForce FX) was gone, despite the likes of the Radeon X1900 and X1800. Nvidia's graphics cards weren't necessarily any faster, nor did they sport any better features, but the drivers were considerably better, and that made the difference in games.
Over the years, AMD would try again with all kinds of 'futureproof' GPU designs, such as the compute-focused design of the Radeon R9 Fury and its on-package High Bandwidth Memory (HBM) chips. Where Team Red always seemed to be looking ahead, Nvidia was focused on the here and now, and it wasn't until 2019 that AMD finally stopped trying to over-engineer its GPU designs and stuck to making them good at gaming, in the form of RDNA and the Radeon RX 5000-series.
But even now, AMD just can't seem to help itself. Take the chiplet approach of the last-gen RX 7900 XTX—it didn't make a jot of difference to AMD's profit margins or its discrete GPU market share, or even make the graphics cards any faster or cheaper. With the recent RX 9070 lineup, though, Team Red has finally produced a graphics card that PC gamers need now. Even its driver team is on the ball and I can't recall when I last said that about AMD.
Arguably, Nvidia seems to have gone down an old Team Red road of late, what with iffy GeForce drivers and graphics cards that are barely any faster than the previous generation but are replete with features for the future (e.g. neural rendering). But when you effectively control the GPU market, you can afford to have a few hiccups; AMD, on the other hand, has to play it safe.
I just wish it had done so years ago when it was battling toe-to-toe with Nvidia at retailers. The Radeon RX 9070 XT is the best card that it's released for a long time and although its price is all over the place at the moment, the fundamental product is really good (and far better than I expected it to be). Just imagine how different things would be today if AMD had used the same mindset behind the design of the 9070 for everything between the X1800-series and the first RDNA card.
Will Radeon still be a household name in another 25 years? I might not be around to see if it comes to pass but I wouldn't bet against it, as despite all its trials and tribulations, it's still here and still going strong.
It was, of course, the RTX 5090 and RTX 5080 boards that were the first of the RTX 50-series to launch, allowing them time to amass numbers for this March instalment of the Steam survey. Not only would you expect the cheaper RTX 5080, with its smaller GPU die, to be made and sold in bigger numbers, it's also a bit more likely to be bought for gaming than the monstrous RTX 5090, which is the kind of mega-GPU that people snap up for all kinds of weird and wonderful workflows.
Anyway, the RTX 5080 enters the list of most popular GPUs among Steam users with 0.2% market share. Not exactly a huge chunk, but then the single most popular GPU only has about a 5% share. That's still the RTX 3060, FYI, with the RTX 4060 in second place, just a few tenths of a percentage point behind.
In case you're interested in this kind of thing, the first '70-class GPU on the list is the RTX 3070 in 8th place with 2.87% of Steam gamers, while the 4070 is in 11th spot with 2.49%.
As for AMD, a non-specific "Radeon Graphics" entry notches up 13th place and 2.17%, while the first AMD GPU called out by name is the RX 6600 with 0.89% share.
AMD remains utterly swamped by Nvidia in the list, of course. The new Radeon RX 9070 boards don't appear yet, which is to be expected. But even the RX 7800 XT doesn't make the cut, though the RX 7700 XT is in there at 0.22%, just a few spots ahead of the RTX 5080.
Other notable stats include AMD CPUs gaining 6.55% share, while Intel lost 6.59%. As for the aforementioned anomaly, well, the proportion of users with the Chinese version of Windows installed literally fell by half, from around 50% share last month to 25% for March.
Superficially, that seems hugely improbable. Very large installed bases don't change that fast. However, some observers point out that it's actually the February data that's anomalous, showing a huge spike in Chinese language Windows installs, with March a return to the norm.
However, even that February data may not be as weird as it seems. It coincided with Chinese New Year, a period when many Chinese citizens have time off work, which can lead to a sharp seasonal spike in activities like online gaming.
Whatever, we'll be keeping our scanners peeled in the coming months for mention of the AMD Radeon RX 9070 and 9070 XT GPUs. AMD reckons they are the fastest-selling Radeons ever, so surely they'll pop up in the survey soon, right...?
As reported by TechSpot, the developers of the popular, recently released games The First Berserker: Khazan and Inzoi have added notes to their FAQs and patch notes, respectively. These notes recommend those on RTX 40-series cards skip driver version 572.83 in favour of 566.36, which launched last December.
Those on RTX 30-series cards are advised to get the latest driver, but the Inzoi patch notes state, "If issues persist, we recommend installing version 566.36".
Both the Inzoi patch notes and The First Berserker: Khazan FAQ recommend the same set of drivers for the same cards. As of right now, these are the only major development teams making such recommendations.
Neither set of notes states the specific problems that those on RTX 40-series cards will run into with the latest drivers. The First Berserker notes state that "using version 572 may cause stability issues" but don't specify what those stability issues are. Those on RTX 50-series GPUs are advised to install the latest drivers as normal.
Graphics driver 572.83 launched on March 18, 2025, and was subsequently followed by a feedback thread in the official Nvidia forums. This thread is filled with users reporting their rigs are stuttering or generally acting worse than before getting the update.
One user says "I have a 4090 mobile GPU, and even though I’m using the latest driver version, I’m also experiencing these issues." Another says "I believe this update completely bricked my HP Omen RTX 4080 laptop." Some users, however, report that their rigs are otherwise unaffected, with one user claiming their desktop RTX 4080 is "not really having any issues" while their mobile RTX 4080 is struggling to run games well.
If you are having the same issues, or just feel a bit cautious of potential problems, you can go back to previous graphics drivers by finding your product on the Nvidia drivers site and downloading the version recommended for your card.
My RTX 4070 Super rig has not seen any stuttering since installing the latest graphics drivers but, given a few major developers are recommending rolling back drivers, I'll be doing the same until some sort of fix is found.
We have reached out to Nvidia for comment on this story.
Alright, with that out of the way, let's turn our attention to the card at hand, the Gigabyte GeForce RTX 5070 Ti Eagle OC Ice SFF.
This is a triple-fan card with three 'Hawk' fans. The middle of the three spins in reverse, which is not an uncommon approach these days to help ease temperatures. The backplate comes in a clean-cut silver finish, though the card is predominantly white, hence the 'Ice' nomenclature. There's a cut-out on the far side to allow a little room for the heatsink to breathe and it does expel some rather toasty air in operation. You'll want some case fans blowing its way to shift any residual hot air away from your CPU and out of the case.
The card measures 304 x 126 x 50 mm, which earns it the title of 'Nvidia SFF-ready' and means it's good for a small form factor case, though the definition of small is being tested to its absolute limit with most 'SFF-ready' cards these days. This unit still demands more than two PCIe slots and is a touch wider than the MSI GeForce RTX 5080 Ventus 3X OC White I've been testing around the same time. It also weighs a good chunk at 1223 g (2.69 lbs), heavier than the Ventus 3X, though there's a support bracket included in the box to help prevent droop.
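For reference, Nvidia's SFF-Ready guideline for enthusiast GeForce cards allows up to 304 x 151 x 50 mm, as I understand the spec (so double-check the official figures), which shows just how close this card sails to the line:

```python
# Checking the Eagle OC Ice against Nvidia's SFF-Ready size guideline.
# Limit figures are my understanding of the spec, not Gigabyte's claim.
SFF_LIMIT = (304, 151, 50)        # max length, height, thickness in mm
eagle_oc_ice = (304, 126, 50)     # Gigabyte's quoted dimensions

fits = all(dim <= cap for dim, cap in zip(eagle_oc_ice, SFF_LIMIT))
print("SFF-Ready:", fits)         # True, but right at the limit on length and thickness
```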
The Eagle OC is an otherwise sleek-looking graphics card. It has a surprisingly tasteful RGB LED strip along the bottom length of the card, which can be controlled via Gigabyte's Control Center software. I also tried to adjust the light bar through OpenRGB; however, this specific Gigabyte card is not yet supported. The Gigabyte Control Center package is quite a large download at just over 800 MB and I was hoping not to have to keep it around. Though, in its defence, it also offers fan profiles for the Eagle OC, and lets you set or find a suitable overclock. I ended up sticking with MSI Afterburner—if it ain't broke—but you could save yourself another download here.
Lastly, on the card there's a small switch to change between the Performance and Silent vBIOS profiles. All my testing below is done on the Performance profile as this isn't a loud graphics card to begin with. However, if you must drop it to Silent, I ran a few tests and found it generally ran much the same as Performance mode in terms of clock speed and power draw. It was consistently slower, however, and since it's not actually silent (I didn't really notice much of a dip in noise levels whatsoever), I'm not sure it's worth the hassle. It's good to have a backup vBIOS in the unlikely event anything goes wrong, at least.
The only way to unearth the innards of this card is to entirely remove the heatsink from the GPU and memory, so that's what I've done. Gigabyte has actually used a thermal putty on the memory and VRM here and has been quite generous with the dollop for each component. It all appears to make good coverage and there's not a spot missing, though it doesn't appear there's much pressure on the inductors to spread the putty out completely.
I've found this card listed over at Best Buy for $900. Not available, of course. To put that into perspective, the MSRP of the RTX 5070 Ti is $749, meaning this card is $151 over the usual asking price. There is no Founders Edition RTX 5070 Ti, and my attempts to find an MSRP RTX 5070 Ti at launch are well documented, so make of that MSRP what you will. To put it in a slightly different perspective again, this Eagle OC Ice is one of the more affordable third-party RTX 5070 Ti cards going.
You see the problem here? The RTX 5070 Ti, generally, is anything but affordable and made worse by extreme third-party prices. The Eagle OC Ice isn't the worst of the lot for that, but it's far from well-priced. To distill that down into one cohesive thought, however you look at it, the Eagle OC Ice is a tough sell.
But I'm yet to share the performance figures, which might help you make up your own mind. The 'OC' in this card's title means it's a little juiced up versus the stated reference figure, running at a 2542 MHz boost clock against the nominal 2452 MHz—a 90 MHz increase.
We've so far tested five overclocked RTX 5070 Ti cards, and I can say that across those cards any and all variation in boost clock has resulted in the square root of nothing. A frame or two separate the lot in most games, if they perform any differently whatsoever. Only our Homeworld 3 results are out of whack with that analysis, but this game is quite CPU-bound and the slight increase in performance seems consistent with a tiny and mysterious improvement in CPU performance, rather than GPU.
Most of the cards we've tested are more expensive than the Eagle OC Ice, including the MSI Gaming Trio and TUF Gaming OC. However, we have tested one nominally MSRP model, the Ventus 3X, which comes with a moderate factory overclock of 2,482 MHz as standard—just 60 MHz slower than the Eagle OC.
Excluding our results for Homeworld 3, the Eagle OC Ice is 0.43% faster than the Ventus 3X at 1080p, or 0.37% at 4K. Less than a percentage point of difference for… $151 more.
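To spell out just how lopsided that trade is, here's the value maths using the prices and averages quoted above:

```python
# Performance-per-dollar check: Eagle OC Ice vs the nominally-MSRP Ventus 3X.
eagle_price = 900       # Best Buy listing for the Eagle OC Ice
ventus_price = 749      # RTX 5070 Ti MSRP
perf_gain_4k = 0.37     # % faster at 4K in our benchmarks (excluding Homeworld 3)

price_premium = (eagle_price / ventus_price - 1) * 100
print(f"Price premium: +{price_premium:.0f}%")    # roughly 20% more money...
print(f"Performance gain: +{perf_gain_4k}%")      # ...for well under 1% more frames
```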
Well, you clearly get more overclocking potential for that extra money, right? Unfortunately, no, not really. While it's true that the RTX 50-series does overclock extremely well, I couldn't get the Eagle OC Ice to match our +450 MHz offset overclock on the Ventus 3X. I had to settle for +440 MHz, which has remained stable throughout further testing. The difference between these overclocks is nothing in the grand scheme of things. Both cards perform near enough the same, and these two samples are not indicative of the overclocks that might be possible on other cards, even of the same design. Yet it's a bit frustrating considering the fairly inexplicable price difference between them.
PC Gamer test platform
Supplied by Cyberpower | MSI
CPU: AMD Ryzen 7 9800X3D | Motherboard: MSI MPG X870E Edge Ti WiFi | RAM: Kingston Fury Beast RGB 32 GB (2 x 16 GB) @ 6,000 MT/s | Cooler: MAG CoreLiquid i360 White | SSD: Spatium M480 Pro 2 TB | PSU: MPG A1000GS PCIe 5 | Case: MAG Pano 100R White
One likely reason for the Gigabyte's slimmer overclocking headroom is its locked power limit, capped at 100%, which means you're working within the card's existing power budget to find a suitable overclock. As such, if you're a judicious overclocker looking to eke more out of the surprisingly fruitful RTX 50-series, look elsewhere.
There's also the argument for AMD's RX 9070 XT. I've included two for comparison here, the MSRP Asus Prime OC and the $650 XFX Swift OC. The former is much more appealing than the latter, though demand has seen prices soar for AMD's cards too. Nonetheless, as shown in the benchmark results, these Radeon cards are both competitive and draw attention away from Nvidia's RTX 5070 Ti.
✅ You have cash to burn and will pay over the odds for an RTX 5070 Ti: Okay, it's not that convincing, but if you are willing to spend $900+ on an RTX 5070 Ti, this one is at least bang on that amount, rather than creeping up into four-digit figures.
❌ You want the best value for money: There's nothing good about the performance-per-dollar ratio on this card. If you can find an MSRP or AMD alternative at a good price, that's the one to go for.
❌ You want overclocking controls: There's little tweaking to be had on the Eagle OC Ice.
In the Eagle OC Ice's defence, it runs fairly cool. Not more so than most similar designs, but at 64°C on average during Metro Exodus, it's hardly toasty. Power draw tells a similar story, sitting somewhere between average and bang-average.
What can I make of the whole thing? I'd like for this review to be useful in six months' time, when there are heaps of MSRP or near-MSRP cards available. If the Eagle OC Ice dropped in price to a little over MSRP, you might excuse its middle-of-the-pack performance. However, I'm not convinced we'll see such a massive correction from the manufacturer-set price in the short or medium term, and I have to judge this card on where it stands today.
I just can't really recommend this card. The RGB lighting is neat, and the white and silver colourway is one of my favourites of the 50-series family right now, but it's all let down by that all-important performance-per-dollar. Since there's no Founders Edition RTX 5070 Ti, it's going up against mostly overpriced competition, often even more overpriced than itself, but I'm not convinced there's anywhere near enough here to justify the extra spend. There's also the RX 9070 XT, which performs similarly in a range of our tests and is theoretically available for a lower price than the RTX 5070 Ti. Though, again, AMD's card is subject to hiked-up prices.
Altogether it's tough to suggest a viable alternative for much less, though your time might be better spent chasing stock of a cheaper Radeon card.
The Ventus 3X OC White has three Torx 5.0 fans and measures 303 x 121 x 49 mm, which means it's not top-percentile size or weight for the RTX 50-series. It's pretty standard fare and should fit inside most cases as a result. The main draw of this particular Ventus 3X model is that it comes in white, though there's also the more or less identical Ventus 3X OC Plus in black and silver. Don't be a scaredy-cat though, I promise you'll find plenty of white PC parts these days to bring a build together, and this clean card should easily match most other components.
Removing the metal backplate and plastic fan shroud, I'm able to get a better look at the nickel-plated copper baseplate and heatsink, which I've measured at 27 mm thick. I'm not removing this entirely as I still need to use the card for comparative tests, but beneath it the baseplate covers the memory with individual thermal pads and appears to make good contact. It also makes contact with the power delivery, again via more thermal pads.
The baseplate itself erupts out to eight heatpipes, four stubbier ones covering the PCB towards the IO, and four longer ones extending all the way out to the other side of the card. MSI has used a piece of thicker metal as a strengthening structure between the IO plate and across nearly three quarters of the heatsink, which does appear to help with the rigidity of the card in situ on my test bench.
Oh and a nice touch with the construction of this all-white card is that the shroud's fan cables are white, too. Little details and all that.
So, how does this card fare in testing? Much like the RTX 5080 Founders Edition for frames per second, though it also runs pretty cool. I measured it at 69°C during three runs of Metro Exodus Enhanced Edition at 4K, which is only a single degree hotter than the Founders Edition in the same test, and easily within test variance.
The MSI card runs a tad slower in this same Metro test compared to Nvidia's Founders Edition: the Ventus 3X hit an average clock speed of 2654 MHz, while the FE managed 2736 MHz. That is partly explained by the FE model's higher power draw, over 8% higher on average than MSI's OC card, though you have to wonder what 'OC' stands for if not a higher clock than the FE model.
PC Gamer test platform
Supplied by Cyberpower | MSI
CPU: AMD Ryzen 7 9800X3D | Motherboard: MSI MPG X870E Edge Ti WiFi | RAM: Kingston Fury Beast RGB 32 GB (2 x 16 GB) @ 6,000 MT/s | Cooler: MAG CoreLiquid i360 White | SSD: Spatium M480 Pro 2 TB | PSU: MPG A1000GS PCIe 5 | Case: MAG Pano 100R White
For the record, the Ventus 3X OC Plus is rated to a higher boost clock than the FE model out of the box: 2,640 MHz, with an Extreme profile available in the MSI Center app to push it up to 2,655 MHz. The FE model runs at 2,617 MHz. So while it's technically correct to call the Ventus 3X OC a factory overclocked card, it's one of the slimmest overclocks around. As such, it doesn't mean much when you start talking about actual game clocks, boosting algorithms, and overclocking.
As noted, in our Metro test the Ventus 3X averaged 2654 MHz to the FE's 2736 MHz, which appears to come down to the FE's slightly larger thermal envelope and adequate cooling. This stands as a good example of why those two letters 'OC' don't always mean a whole lot today—a GPU will boost beyond the figures on its box if there's headroom available, and you can tweak this further yourself pretty easily, as we'll get into now.
I opted for the standard boost profile for all of my benchmarking, as if you want to go above that, you might as well overclock the card. That's easy to do with the RTX Blackwell generation, and here's what I managed to set in MSI Afterburner without much tweaking whatsoever.
With a loop of 3DMark's Steel Nomad test, I could push in excess of both of these offsets, but I found the card would quickly fall over as soon as I'd run it through Metro Exodus or F1 24. With the settings noted above, I had a clean run through our benchmarks at 1440p and 4K.
✅ You can get your hands on one for a decent price: If you want an RTX 5080, you might have to pay a pretty penny for one. Ideally, you don't pay over the odds for this one, however, as it's not worth any more than its already inflated price.
❌ You want granular overclocking controls: You cannot tweak the power profile of this card, so it's basically a write-off for serious overclockers.
Though, again, I do have to report that the Ventus 3X OC is bested by the Founders Edition, which reached a higher stable core overclock at +525 MHz. Whether from this marginal overclock or the FE's natural proclivity for boosting beyond its rated speeds, it ends up performing a little better than the Ventus 3X OC in most games. Homeworld 3 is a bit of an outlier, but this game loves a speedy CPU, and it's currently enjoying higher frame rates with our 9800X3D than it has previously, which might help explain that one.
One thing to note on the Ventus 3X OC is that, unlike many other cards, the power limit and voltage cannot be tweaked here. It's locked in. So, if you're looking to push your card on a more granular level, you can't do that here, but it does mean that overclocking it is as simple as typing a few digits into Afterburner's offset and seeing what runs. So long as it's stable, job done.
There we have it: an 'OC' card that runs slower than the Founders Edition for a lot more money right now. If we ignore the fact that you can't easily buy one of these cards today, you should expect to pay around $1,300 for one. That's $300 more than the Founders Edition MSRP, and not a very attractive proposition.
If I'm looking for a silver lining, this might be the card to buy in six months, providing demand has slowed, supply has improved, and AIBs adjust the price premiums they've whacked on these things. I'm not taking that bet, however, and I can only review the card in front of me, as it stands today. That means it's a decent enough card, and one that I'd happily stuff into a white PC build, were it not ultimately priced out of consideration.
A new version of the Nvidia App has been rolled out with a couple of pretty trick features. The big news items with Nvidia App version 11.0.3.218 are custom DLSS upscaling resolutions and a new AI assistant called Project G-Assist that runs locally on your PC.
The first feature involves the ability to set the base resolution for DLSS upscaling with single-digit granularity. Within the app you can set the base resolution from which DLSS scales on a per-game basis, overriding any configurations the game developer has chosen for DLSS settings.
The "base resolution" means the resolution that the GPU's 3D pipeline renders at before upscaling adds pixels to generate a high-resolution final image. For instance for a final upscaled resolution of 4K or 3,840 by 2,160 pixels, the base resolution for Performance mode upscaling might be 1,920 by 1,080 or 1080p, while Quality mode will typically be 1440p base resolution or 2,560 by 1,440.
Base resolutions of anywhere between 33% and 100% of the final upscaled resolution can be selected. If 100% sounds like it doesn't make any sense—wouldn't 100% base resolution mean no upscaling at all?—it effectively applies DLAA, or Deep Learning Anti-Aliasing, which is a more effective and faster anti-aliasing routine than traditional methods like MSAA or multi-sampling AA.
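As a rough sketch of how that slider maps to a render resolution (the percentage applies per axis; the function name and rounding are my own illustration, not Nvidia's actual implementation):

```python
def dlss_base_resolution(out_w: int, out_h: int, scale_pct: float) -> tuple[int, int]:
    """Render ('base') resolution for a given output resolution and per-axis scale."""
    assert 33 <= scale_pct <= 100, "the Nvidia App exposes 33-100%"
    return round(out_w * scale_pct / 100), round(out_h * scale_pct / 100)

print(dlss_base_resolution(3840, 2160, 50))     # (1920, 1080): Performance mode
print(dlss_base_resolution(3840, 2160, 66.67))  # (2560, 1440): roughly Quality mode
print(dlss_base_resolution(3840, 2160, 100))    # (3840, 2160): effectively DLAA
```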
That's a pretty handy feature for hand-tuning your own DLSS modes. But it's Project G-Assist that could be more transformative for a greater number of gamers. It's an AI assistant based on a small language model that runs locally on your PC.
For now, it's only compatible with RTX 30-series and up desktop GPUs. Nvidia says laptop GPU support will be added later.
Anyway, Nvidia says G-Assist can help users "control a broad range of PC settings, from optimizing game and system settings, charting frame rates and other key performance statistics, to controlling select peripherals settings such as lighting — all via basic voice or text commands."
It's not totally clear how flexible the natural language interface will be. In an ideal world, you'd be able to say something like, "hey, Half-Life 2 RTX is running badly, can you make it a bit smoother without impacting the image quality too much," and then after looking at the results say, "that's not bad but the textures look a bit fuzzy, can you make them sharper," or, "it looks smooth but feels laggy, can you fix that."
Project G-Assist is available as a separate download from the Home tab of the Nvidia App in the "Discover" section. Note, it will only be visible if you have a compatible GPU.
Nvidia says G-Assist uses a Llama-based model with eight billion parameters, which is relatively tiny compared to large language models like GPT-4, reportedly around 1.8 trillion parameters. The smaller size of the model means it can run locally on a gaming GPU.
For now, we don't know how much VRAM G-Assist uses. Given that Nvidia's lower-end GPUs tend to be a bit short of video memory as it is, that's a bit of a concern. But then maybe, just maybe, if features like this become more important, Nvidia will up VRAM allocations, just as Apple increased the base spec of its Macs from 8 GB to 16 GB to support AI features and Intel's Lunar Lake chips had to carry a minimum of 16 GB to meet Microsoft's Copilot+ specification.
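We can at least bracket the possibilities with back-of-the-envelope sums. The weight precisions below are my assumptions, not anything Nvidia has confirmed about how G-Assist stores its model:

```python
# Rough VRAM footprint of an 8-billion-parameter model at common weight precisions.
# Illustrative only: actual usage also includes activations and the KV cache.
params = 8e9
for precision, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{precision}: ~{params * bytes_per_param / 1e9:.0f} GB of weights")
# FP16: ~16 GB, INT8: ~8 GB, INT4: ~4 GB; quantisation matters a lot on an 8 GB card
```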
Well, I'm allowed to hope, right?!
Is there a catch? Well, she was only talking about first-week sales. Speaking to Asus rep Ordinary Uncle Tony (otherwise known as Tony Yu, Asus's General Manager for China), Su said, "the 9070 XT has been a fantastic success. Actually, it's been the number one selling for all of the AMD Radeon generations for first-week sales by far, 10x higher than previous generations."
Su doesn't put actual figures on it, so it's hard to know how that translates into actual volumes. And of course, bumper sales for a week and then no availability for months, by way of example, wouldn't necessarily mean impressive overall volumes.
So, arguably the real question is long-term availability, not first-week sales. On that subject, Su says, "we're increasing the manufacturing so that we can have more gamers who have access." Again, the lack of specifics is a little frustrating even if we applaud the general sentiment.
Predictably, Su emphasised AMD's current focus on bringing features and performance to the relative mainstream of the market. "Everyone likes a very, very high-end GPU, but not so many people can access it," Su observes, adding that AMD remains committed to bringing, "the best gaming capability to a good price point."
Among other topics, Ordinary Uncle Tony also asked Su for her take on the view that chip manufacturing costs seem to be escalating while the performance benefits of new nodes are diminishing.
"It is true that the silicon scaling is getting more difficult. We saw this trend for the last five-plus years," Su says. The solution? Su thinks silicon still has some legs, but that it will require some companion technologies to maximise returns.
"From AMD's standpoint, we have invested in next-generation technologies, for example, our chiplet packaging technology. Our 3D stacking is another example," she says. "I think silicon still has a long way to go, but we have to continue to optimise not just on silicon but on package and also on system and also with software."
Anywho, it will certainly be interesting to see both how available AMD GPUs are over the coming months and then what impact that has on AMD's broader market share when the usual data-collating suspects chime in later this year with their estimates. Long story short, it seems clear there's major demand for AMD's new GPUs, the question is whether the company will actually make enough of them to gain significant market share. Watch this space.
At this year's GDC event, Nvidia showcased all its latest RTX technologies, including Ace, its software suite of 'digital human technologies.' One of the first games to use it is Inzoi, a Sims-like game, and while chatting to Nvidia about it, I learned that the AI model takes up a surprising amount of VRAM, which raises an interesting question about how much memory future GeForce cards are going to have.
The implementation of Nvidia Ace in Inzoi (or inZOI, to use the correct title) is relatively low-key. The family you control, along with background NPCs, all display 'thought bubbles' which give you clues as to how they're feeling, what their plans are, and what they're considering doing in the future. Enabling Nvidia's AI system gives you a bit more control over their thoughts, as well as making them adapt and respond to changes around them more realistically.
In terms of technical details, the AI system used is a Mistral NeMo Minitron small language model, with 500 million parameters. The developers had experimented with larger models but settled on this size, as it gave the best balance between responsiveness, accuracy, and most importantly of all, performance. Larger models use more GPU resources to process and in this specific case, Inzoi uses 1 GB of VRAM to store the model.
That may not seem like very much, but this is a small model with some clear limitations. For example, it doesn't get applied to every NPC, just those within visible range, and it won't result in any major transformations to a character's life. The smaller the language model, the less accurate it is, and the more likely it is to hallucinate (i.e. produce results that aren't grounded in its training data).
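That 1 GB figure lines up neatly with simple parameter maths, assuming the weights are stored at 16 bits apiece:

```python
# 500 million parameters at 2 bytes each (FP16) is about 1 GB of weights.
params = 500e6
print(f"~{params * 2 / 1e9:.0f} GB")   # ~1 GB, matching the figure quoted for Inzoi
```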
While Inzoi's AI system isn't all that impressive, what I saw in action at the GDC made me think that Nvidia's Ace has huge potential for other genres, particularly large, open-world RPGs. Alternatives already exist as mods for certain games, such as Mantella for Skyrim, and it transforms the dull, repetitive nature of NPC 'conversations' into something far more realistic and immersive.
To transform such games into 'living, breathing worlds,' much larger models will be required and traditionally, this involves a cloud-based system. However, a local model would be far preferable for most PC gamers worldwide, which brings us to the topic of VRAM.
Nvidia has been offering 8 GB of memory on its mainstream graphics cards for years, and other than the glitch in the matrix that is the RTX 3060, it doesn't seem to want to change this any time soon. Intel and AMD have been doing the same, of course, but where 16 GB of VRAM is the preserve of Nvidia's high-end GPUs, such as the RTX 5070 Ti and RTX 5080, one can get that amount of memory on far cheaper Arc and Radeon cards.
But if Nvidia Ace really takes off and developers start to complain that they're being restricted about what they can achieve with the software suite, because of the size of the SLM (small language model) they're having to use, then the jolly green giant will have to respond by upping the amount of VRAM it offers across the board.
After all, other corners of computing have already had to up their minimum memory allocations because of AI, such as Apple with its base Mac spec and Lunar Lake laptops shipping with 16 GB because of Copilot.
It's not often that one can say AI is doing something really useful for gamers, but in this case, I think Nvidia's Ace and competing systems may well be what pushes graphics cards to consign 8 GB of VRAM to history. Not textures, ray tracing, or frame generation, but realistic NPC responses. Progress never quite goes in the direction you expect it to.
News of the impending appearance of the GeForce RTX 5060 Ti was published by BenchLife via a post on X from wxnod, with April 16 being the official launch date. Just as it did with its predecessor, the 4060 Ti, Nvidia will have two variants of the 5060 Ti but they'll only differ in terms of the amount of VRAM: 8 and 16 GB of GDDR7.
However, Videocardz claims that the actual announcement of the RTX 5060 Ti will take place the day before (April 15) and that the RTX 5060 will also be in the line-up. For some reason, though, that particular model won't be available to buy until May. Videocardz also has the specs for the RTX 5060 Ti and 5060, so if they're genuine, this is what we're going to be looking at.
Starting with the 5060 Ti, it's said to sport 4,608 CUDA cores and a boost clock of 2,572 MHz—just 6% and 1% higher than the RTX 4060 Ti, respectively. That seems… umm… pretty terrible as a generational upgrade, but given that both the RTX 5080 and RTX 5070 Ti overclock very well, some AIB (add-in board) partners may have a few decent OC models worth buying.
What the RTX 5060 Ti does have lots of, though, is memory bandwidth, thanks to 28 Gbps GDDR7. Where the 4060 Ti makes do with 288 GB/s, the 5060 Ti will boast 448 GB/s of bandwidth, if those memory specs are correct. That's a 56% increase, and even if the newer Blackwell GPU has less L2 cache than the Ada Lovelace one (which I doubt it will), the significantly higher bandwidth will definitely help the 5060 Ti.
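If you fancy verifying the bandwidth claim, it falls out of the rumoured memory spec, assuming the 128-bit bus really does carry over:

```python
# Memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
def bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

rtx_5060_ti = bandwidth_gbs(28, 128)   # 448 GB/s with 28 Gbps GDDR7 (rumoured)
rtx_4060_ti = bandwidth_gbs(18, 128)   # 288 GB/s with 18 Gbps GDDR6 (known)
print(f"+{(rtx_5060_ti / rtx_4060_ti - 1) * 100:.0f}%")   # +56%, as stated
```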
It's the RTX 5060 that most PC gamers will pay attention to and if Videocardz's specs are genuine, then it could well be the best mainstream card for a long time. With a reported 3,840 CUDA cores (25% more than the RTX 4060) and the same memory bandwidth as the 5060 Ti, it has the potential to be a real powerhouse for the money.
Ah, money. The metric that has come to utterly define the RTX 50-series. There's no word on how much any of the mainstream Blackwells will cost and I should imagine gamers who have been waiting for the likes of the RTX 5060 will be keen to know what the MSRP is going to be.
That said, it hasn't helped that AIB partners have pretty much ignored Nvidia's stated MSRPs, and the few models released at the recommended price have either been low in stock or simply haven't stayed at that price for very long.
Videocardz claims that Nvidia is demanding all its partners have at least one model at MSRP on retailers' shelves for launch, and while that may actually happen, there's no guarantee that you'll be able to get your hands on one.
If one ignores the whole pricing and stock debacle for a moment, Nvidia's RTX 50-series is somewhat polarizing in terms of how good each model has been. The RTX 5090 is incredible but the RTX 5080 is kinda meh; the RTX 5070 Ti is great whereas the RTX 5070 is just disappointing.
As things currently stand, the RTX 5060 Ti and RTX 5060 look like they might flip this around, with the latter being the better of the two, but only time will tell if that's the case.
As TechPowerUp reports, Huang put his distinctive mark down on an Asus ROG Astral RTX 5090 Dhahab OC Edition card during GTC 2025, and Asus says that the uber-bling card bearing his name will be officially auctioned off to "support relief efforts for the California wildfires in Los Angeles."
"Guess which graphics card just got anointed by Jensen Huang? #rtx5090" @ASUS_ROGNA @ASUSUSA (via X, March 20, 2025)
A noble cause indeed. 29 people lost their lives in the Los Angeles area as a result of the devastating fires earlier this year, with thousands of residents losing their homes, and the total property and capital damage is estimated to be between $95 billion and $164 billion.
While even a Huang-signed golden RTX 5090 is unlikely to make much of a dent in the financial fallout of those horrifying events, it could still potentially be one of the most valuable GPUs on the planet right now. That's primarily down to Jensen's scrawl, of course, but it's not like the card itself is of the cheap and cheerful variety.
M'colleague Jacob Fox covered the Asus ROG Astral RTX 5090 Dhahab OC Edition earlier this year, and it's a sheer slab of graphics card indulgence. Asus says the GPU is a "celebration of the rapid evolution of the Middle East, featuring a unique skyline silhouette that signifies a transition from the sands to the skies" and boy howdy does it look pretty, even to this miserly, sour-faced Brit.
Want to buy one for yourself? All I'll say is, good luck. While the Jensen-signed card is yet to go up for auction, the standard Asus ROG Astral RTX 5090 is listed for $3,360 at Newegg and Asus has not yet confirmed the pricing of the Dhahab OC edition.
My bet is, it'll be astonishingly expensive. Still, at least this particular card is helping to fund a good cause, so I'll save the full range of my bitterness for another time. Oh, to be rich instead of stunningly-good-looking. That's what I tell myself at night, anyway.
According to a Financial Times interview with Nvidia CEO Jen-Hsun Huang, Nvidia will "procure, over the course of the next four years, probably half a trillion dollars worth of electronics in total". Of that total sum, Huang says "We can easily see ourselves manufacturing several hundred billion of it here in the US."
It was revealed this week that Nvidia is now making chips in the USA. We don't yet have specifics on the type or quantity of chips but the announcement is a further declaration of Nvidia's shift from Taiwan-based manufacturing to closer to home (in theory, as the chips still have a long way to go around the world until they're in a functioning graphics card).
In a move that might help Nvidia get on the Trump administration's good side, Nvidia is also seemingly pretty happy with the administration's recent moves to deregulate AI. Just this week, the official White House website shared a quote from Huang saying “Having the support of an administration who cares about the success of this industry and not allowing energy to be an obstacle is a phenomenal result for AI in the U.S.”
President Trump's tariffs are an ongoing component of his plan to push for more manufacturing in the US. Effectively, a 25% tariff on Chinese goods will make them more expensive for the consumer to buy, and less worthwhile to import. The logic is that this will incentivize in-country manufacturing.
This has pushed many companies to adapt, in order to continue supplying to America. Some graphics card manufacturers have moved their production of actual cards out of China as a result.
Huang tells Financial Times "At this point, we know that we can manufacture in the US, we have a sufficiently diversified supply chain."
Nvidia is not the only major computing manufacturer to show increased investment in US production after threats of tariffs. Just earlier this month, TSMC and Trump announced a $100 billion investment into the US, with the intent to make three new factories. It is not yet clear if this is different from the $65 billion TSMC has already announced, which it had earmarked for building fabs in the US, some of which came from President Biden's CHIPS Act.
TSMC is the world's biggest contract semiconductor manufacturer and doesn't just work with Nvidia. AMD and Intel, among many others, utilise it, with the former even said to be making some Ryzen chips in the US. Based in Taiwan, TSMC's Taiwanese fabs currently run the most advanced node processes for chips. Despite continued investment in America, a spokesperson for Taiwan's president's office clarified, "The most advanced processes will remain in Taiwan."
Nvidia's Blackwell architecture, which can be found in RTX 50-series cards, uses a custom 4 nm node process that is a spin on a slightly older process from TSMC, and the same is true of Nvidia's current AI chips. The likes of the Apple M4 chip, meanwhile, use a newer 3 nm process. Effectively, the smaller the process, the higher the transistor density, and the more efficient a chip can be.
Nvidia has rarely been on the cutting edge of the latest process, however, which means TSMC keeping the most advanced processes in Taiwan may not substantially affect its plans if it were to focus on more US production. Still, if it had wanted 3 nm chips, it wouldn't have been able to get those in the US. At least not from TSMC. Perhaps it could tap Intel for its cutting-edge 18A node once that's up and running.
TSMC fabs in the US are due to update to the 3 nm process in 2027, which could be soon enough for the RTX 60-series, though it will not be on the cutting edge by that time.