<![CDATA[ Latest from PC Gamer UK in Ai ]]> 2025-05-12T15:13:56Z en <![CDATA[ Musk's Colossus data center for Grok is at the centre of an environmental row over air quality in South Memphis ]]> If you've been keeping your finger on the pulse of AI developments, you'll know that there's growing concern about meeting the power demands of the data centers that train and run AI inference. Elon Musk's Colossus supercomputer that powers Grok, xAI's alternative to ChatGPT, tackles this by using over 30 methane gas turbine stations, but their installation has drawn fierce criticism over the impact on local air quality.

When Colossus fired up for the first time last year, it was already making headlines. Partly because it took a mere three months to build, but also because its appetite for water and electricity was raising eyebrows. In the case of the latter, the system uses anywhere between 50 and 150 MW of power, and in order to meet that demand, xAI installed a number of methane-burning gas turbines.

A report by the Southern Environmental Law Center (via Fudzilla) claims that these turbines were installed without permits and raises serious concerns over the potential impact their emissions will have on local air quality. To make matters worse, it appears that xAI is now retrospectively applying for permits to run the turbines continuously.

Memphis news channel WREG writes that the mayor of Memphis, Paul Young, argued that things aren't as bad as they seem, quoting him as saying, "There are 35 [turbines], but there are only 15 that are on. The other ones are stored on the site."

However, thermal camera footage taken by the Southern Environmental Law Center (SELC) shows, for that moment in time at least, 33 of the turbines generating large amounts of heat—in other words, they were in heavy use.

The original Colossus 'super' computer wasn't quite as energy hungry as the new one (Image credit: The National Archives (United Kingdom), FO850/234.)

The SELC goes on to criticise xAI's approach to the Colossus project: "From the beginning, the company operated with a stunning lack of transparency that left impacted communities in the dark. Even many Memphis city officials were unaware of the facility’s plans and how it would be powered."

One might think that using fossil fuels is a poor choice—not just because of the environmental impact but also because of the rapid rise of renewable systems—but with President Trump rolling out a raft of executive orders favouring the return to fossil fuels, it's perhaps not hard to see why xAI chose gas turbines over anything else. However, it's unlikely to be a long-term solution.

There is no way to avoid the huge energy demands of any data center, and it's a problem that will only get worse as more systems come online to meet the AI growth targets of Google, Meta, OpenAI, xAI, and Microsoft. Musk hopes to expand Colossus from 200,000 GPUs to one million, and no amount of gas turbines can realistically hope to cover that.

So, xAI will have to rely on the local electricity network and battery storage systems to meet that level of demand. But that passes the problem of generating the electricity onto somebody else, and they may well resort to using fossil fuels, even if xAI doesn't.

If you don't use or have no interest in Grok, you might think that this has no real impact on PC gaming. However, it's worth noting that AMD, Intel, and Nvidia are all heavily invested in building and using data centers to train and run AI inference for their graphics technologies. In Nvidia's case, Team Green ran such a system flat-out for six years, just to improve DLSS.

While Nvidia's center is unlikely to have anything near the same energy demands as Colossus, it's a reminder that the cost of sustaining the growth of AI is more than mere dollars. Energy and the environment have a sizeable share of the bill, too.

]]>
/software/ai/musks-colossus-data-center-for-grok-is-at-the-centre-of-an-environmental-row-over-air-quality-in-south-memphis/ DYxmgepfKh5e3D3nFE5fDW Mon, 12 May 2025 15:13:56 +0000
<![CDATA[ Trump administration reportedly fires the head of the US Copyright Office as it tries to tackle AI's use of copyrighted materials ]]> The Trump administration has fired Shira Perlmutter, the now-former register of copyrights and director of the US Copyright Office, by email—according to reports from The Washington Post and TechCrunch.

A statement from US Democratic representative Joe Morelle alleges that the termination is a "brazen, unprecedented power grab with no legal basis" and, in the representative's view, "It is surely no coincidence he acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models."

"Register Perlmutter is a patriot, and her tenure has propelled the Copyright Office into the 21st century by comprehensively modernizing its operations and setting global standards on the intersection of AI and intellectual property" says Morelle.

In his statement, Morelle linked a pre-publication version of a US Copyright Office report [PDF warning] on copyright and artificial intelligence, in which the office states that there are limitations on how much AI companies can count on fair use as a defence when training models on copyrighted content.

OpenAI, co-founded by Musk, and Meta are currently facing a number of lawsuits accusing them of copyright infringement, including one involving comedian Sarah Silverman and two other authors alleging that pirated versions of their works were used to train AI language models without their permission. Meta has argued that such usage falls under fair use doctrine.

Musk, meanwhile, has recently expressed support for Twitter co-founder Jack Dorsey's call to "delete all IP law." Musk is also the co-founder of xAI, an artificial intelligence company responsible for the Grok AI chatbot integrated within X—and the owner of Colossus, a massive multi-GPU supercomputer built to train the latest version of the unfortunately named chatbot.

Perlmutter was appointed to the role in 2020, during the first Trump administration, by librarian of Congress Carla Hayden, whom Trump also fired by email earlier this week.

So, it appears that the Trump administration is in the process of clearing house. Meanwhile, the argument as to whether training AI models on copyrighted works counts as fair usage continues, and probably will for some time.


]]>
/software/ai/trump-administration-reportedly-fires-the-head-of-the-us-copyright-office-as-it-tries-to-tackle-ais-use-of-copyrighted-materials/ a4qTReTjBEDH43y6pUfQvg Mon, 12 May 2025 11:56:45 +0000
<![CDATA[ A software engineer taught AI to hunt bugs by interfacing an LLM with debugging tools and has released the open source code: 'It's like going from hunting with a stone spear to using a guided missile' ]]> AI is an incredibly contentious space in the tech world, but that's mostly because of how loosely everything involved with it is labelled. It all starts with the name, artificial intelligence or AI, because almost nothing we call AI in today's landscape qualifies as such. These aren't intelligences, artificial or otherwise.

Generally, they're language models trained on huge data sets to notice patterns and predict what comes next. As such, they're terrible at lots of things we see them used for and just won't stop hallucinating, but they can be useful tools when put to the right work. Debugging Windows and analysing crash data is a perfect example of exactly the right work for an AI.

Tom's Hardware reports that Sven Scharmentke (AKA Svnscha), a software engineer more than familiar with debugging Windows crashes, has released a tool that lets a language model essentially do it for you. The mcp-windbg tool gives language models the ability to interface with WinDBG, Windows' own multipurpose debugging tool. The result is a crash-analysis tool you can interact with in natural language, and it hunts down crash points for you.

Scharmentke put his findings in a blog post, including a video which shows the debugger at work. You can watch as Copilot is asked in very plain, natural English to help find the problems, and it does so. It's able to read the crash dump, find relevant codes within it to pinpoint the problem, and then analyse the original source for issues and suggest solutions. It can even help you find the crash dump in the first place.

From the example, it looks like Scharmentke has successfully taken a rather complicated process that once required a trained professional and turned it into something even I could do. Better yet, it's a frustrating task that's far more easily handled by a machine than a human, and not one many people would want to do anyway. That's a perfect use for an LLM, and that's just such a breath of fresh air.

Usually this kind of debugging takes a lot of time, an encyclopaedic knowledge of various codes and pointers, an understanding of how the code runs, and a dogged level of determination. Now, it's a few minutes of casual conversation with your friendly computerised helper. As Scharmentke says, "It's like going from hunting with a stone spear to using a guided missile."

But of course, this comes with the same standard warning all AI should: it doesn't actually think, and its answers should always be taken with some scepticism. Scharmentke also reminds us that this isn't a magical cure-all tool, and instead is just a "simple Python wrapper around CDB that relies on the LLM's WinDBG expertise." Still, if you want to put this new style of debugging interface to the test, you can download it from GitHub and give it a try.
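To give a rough idea of what a "simple Python wrapper around CDB" can look like, here's a minimal sketch of the general approach. To be clear, this is not Scharmentke's actual code, and the cdb.exe install path and dump location below are placeholder assumptions:

```python
import subprocess

# Typical Debugging Tools for Windows install path (assumption: may differ on your system)
CDB = r"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe"

def analyze_dump(dump_path: str) -> str:
    """Open a crash dump in CDB, run the standard '!analyze -v' triage, then quit."""
    result = subprocess.run(
        [CDB, "-z", dump_path, "-c", "!analyze -v; q"],  # -z loads a dump, -c runs commands
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout

# The text report CDB prints (faulting module, stack trace, bug-check details)
# is what an LLM can then be asked to summarise and turn into suggested fixes.
print(analyze_dump(r"C:\dumps\app.dmp"))  # placeholder dump path
```

A tool like mcp-windbg essentially exposes calls along these lines to the model, so it can open dumps and run debugger commands itself mid-conversation.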


]]>
/hardware/a-software-engineer-taught-ai-to-hunt-bugs-by-interfacing-an-llm-with-debugging-tools-and-has-released-the-open-source-code-its-like-going-from-hunting-with-a-stone-spear-to-using-a-guided-missile/ 2G6WP7MN7FbmYhxaJkUWmU Fri, 09 May 2025 06:55:33 +0000
<![CDATA[ Forget megabucks Nvidia GPUs, apparently all you need to run an LLM is a Pentium II CPU from 1997 ]]> Conventional wisdom says you need a mountain of Nvidia GPUs at about $50,000 a pop to have a chance of running the latest AI models. But apparently not. EXO Labs (via Indian Defence Review) claims to have got the Llama 2 LLM up and running on a Windows 98 box courtesy of a mere Pentium II processor from 1997. Hurrah! The catch? It's running about 20,000 times slower than on a modern GPU. Haroo.

Apparently, EXO Labs picked up the machine for just under $120 on eBay, following which perhaps the biggest headache was getting peripherals to work, what with the legacy PS/2 ports and just a single USB port.

Indeed, getting the required files onto the machine was a serious headache. Then there was the matter of compiling the code into a format compatible with the Pentium II's ancient instruction set.

Anywho, with the code and hardware sorted, it was time to run Llama 2. Reportedly, the 260K-parameter version of the model achieved 39.31 tokens per second on the Pentium II, while the larger 15M-parameter version hit just 1.03 tokens per second.

They even tried a partial run using a one-billion-parameter version of Llama 3.2, which returned a glacial 0.0093 tokens per second. To put that into context, there are references to the one-billion-parameter 3.2 model hitting 40 tokens per second on Arm CPUs and 200 tokens per second on GPUs.

In other words, it's running about 20,000 times slower on the Pentium II. But hey, it's running. The comparison isn't perfect, as there are all kinds of variables in terms of how the models are set up, but that 20,000-times figure probably gives the right idea of the performance delta in rough order-of-magnitude terms.
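As a quick sanity check on that figure, using the rates quoted above (trivial arithmetic, shown here in Python for the curious):

```python
gpu_rate = 200         # tokens/sec cited for the 1B-parameter Llama 3.2 on a GPU
pentium_rate = 0.0093  # tokens/sec for the same model on the Pentium II
print(gpu_rate / pentium_rate)  # ~21,505, i.e. roughly 20,000 times slower
```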

Indeed, while it's impressive to get a modern LLM running on such an old CPU, the performance gap is a reminder that speed matters. Actually, it's a bit like 3D gaming.

Compiled correctly, you could no doubt get Cyberpunk 2077 running in full path-traced mode on a Pentium II at 4K. But you'd probably be looking at a frame rate similar to the P II's 0.0093 tokens-per-second performance. At which point, it's all a bit academic.

But maybe it would be fun to watch the pixels being rendered, one by one. On the other hand, completing a benchmark run could take years. Perhaps we'll leave all that for now.


]]>
/software/ai/forget-megabucks-nvidia-gpus-apparently-all-you-need-to-run-an-llm-is-a-pentium-ii-cpu-from-1997/ 5EcgJGzGzYxNyQTV4NZoAH Wed, 07 May 2025 09:53:58 +0000
<![CDATA[ ChatGPT's hallucination problem is getting worse according to OpenAI's own tests and nobody understands why ]]> Remember when we reported a month or so ago that Anthropic had discovered that what's happening inside AI models is very different from how the models themselves describe their "thought" processes? Well, to that mystery surrounding the latest large language models (LLMs), along with countless others, you can now add ever-worsening hallucination. And that's according to the testing of the leading name in chatbots, OpenAI.

The New York Times reports that an OpenAI investigation into its latest o3 and o4-mini LLMs found they are substantially more prone to hallucinating, or making up false information, than the previous o1 model.

"The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent," the Times says.

"When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time."

OpenAI has said that more research is required to understand why the latest models are more prone to hallucination. But so-called "reasoning" models are the prime candidate according to some industry observers.

"The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer," the Times claims.

In simple terms, reasoning models are a type of LLM designed to perform complex tasks. Instead of merely spitting out text based on statistical models of probability, reasoning models break questions or tasks down into individual steps akin to a human thought process.
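As a toy illustration of the difference (a deliberately simplistic sketch of the general idea, not OpenAI's actual implementation), consider a simple arithmetic question:

```python
# A standard LLM is prompted to jump straight to an answer:
direct_prompt = "What is 17 * 24? Reply with just the number."

# A reasoning model effectively writes out its own intermediate working
# before committing to an answer, something like:
#   17 * 24 = 17 * 20 + 17 * 4
#   17 * 20 = 340, and 17 * 4 = 68
#   340 + 68 = 408
step1 = 17 * 20   # 340
step2 = 17 * 4    # 68
print(step1 + step2)  # 408
```

Spending tokens on those intermediate steps is what makes these models better at complex tasks, though it also gives them more opportunities to go wrong along the way.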

OpenAI's first reasoning model, o1, came out last year and was claimed to match the performance of PhD students in physics, chemistry, and biology, and beat them in math and coding thanks to the use of reinforcement learning techniques.

"Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem,” OpenAI said when o1 was released.

However, OpenAI has pushed back against that narrative that reasoning models suffer from increased rates of hallucination. "Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” OpenAI's Gaby Raila told the Times.

Whatever the truth, one thing is for sure: AI models need to largely cut out the nonsense and lies if they are to be anywhere near as useful as their proponents currently envisage. As it stands, it's hard to trust the output of any LLM, and pretty much everything has to be carefully double-checked.

That's fine for some tasks. But where the main benefit is saving time or labour, the need to meticulously proof and fact check AI output does rather defeat the object of using them. It remains to be seen whether OpenAI and the rest of the LLM industry can get a handle on all those unwanted robot dreams.

]]>
/software/ai/chatgpts-hallucination-problem-is-getting-worse-according-to-openais-own-tests-and-nobody-understands-why/ 7qwsam6jkfqo5zrbgufFH5 Tue, 06 May 2025 12:48:20 +0000
<![CDATA[ US federal judge on Meta's AI copyright fair use argument: 'You are dramatically changing, you might even say obliterating, the market for that person's work' ]]> Comedian Sarah Silverman and two other authors filed copyright infringement lawsuits against Meta Platforms and OpenAI back in 2023, alleging pirated versions of their works were used without permission to train AI language models. Meta has since argued that such usage falls under fair use doctrine (via Reuters), but US federal district judge Vince Chhabria seems to have been less than impressed by the defence:

"You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products," said Chhabria to Meta's attorneys in a San Francisco court last Thursday.

"You are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying that you don't even have to pay a license to that person… I just don't understand how that can be fair use."

Under US copyright law, fair use is a doctrine that permits the use of copyrighted material without the explicit permission of the copyright holder. Examples of fair use include usage for the purpose of criticism, news reporting, teaching, and research.

It can be used as an affirmative defence in response to copyright infringement claims, although several factors are considered in judging whether usage of copyrighted works falls under fair use—including the effect of the use of said works on the market (or potential markets) they exist in.

Meta has argued that its AI systems make fair use of copyrighted material by studying it, in order to make "transformative" new content. However, judge Chhabria appears to disagree:

"This seems like a highly unusual case in the sense that though the copying is for a highly transformative purpose, the copying has the high likelihood of leading to the flooding of the markets for the copyrighted works," said Chhabria.

Meta attorney Kannon Shanmugam then reportedly argued that copyright owners are not entitled to protection from competition in "the marketplace of ideas", to which Chhabria responded:

"But if I'm going to steal things from the marketplace of ideas in order to develop my own ideas, that's copyright infringement, right?"

However, Chhabria also appears to have taken issue with the plaintiffs' attorney, David Boies, regarding the lawsuit potentially not providing enough evidence to address the potential market impacts of Meta's alleged conduct.

"It seems like you're asking me to speculate that the market for Sarah Silverman's memoir will be affected by the billions of things that Llama [Meta's AI model] will ultimately be capable of producing," said Chhabria.

"And it's just not obvious to me that that's the case."

All to play for, then, although by the looks of things judge Chhabria seems determined to hold both sides to proper account. It also gives me a chance to publish a line I've always wanted to write: The case continues. Yep, it's about as satisfying as I thought.

]]>
/software/ai/us-federal-judge-on-metas-ai-copyright-fair-use-argument-you-are-dramatically-changing-you-might-even-say-obliterating-the-market-for-that-persons-work/ TjsqKUzymgTUqQHt57aFTd Fri, 02 May 2025 12:34:37 +0000
<![CDATA[ Meta's AI app wants to 'get to know you' and can warn you if you should be 'worried about bears' ]]> If you think the one thing AI search has been missing is a way to instantly share your searches with your friends, boy, do I have an app for you. Meta's new AI app now comes with a function to not only view, but also interact with, your friends' and family's use of AI.

According to a recent Meta blog post (via The Verge), "Meta AI is built to get to know you, so its answers are more helpful. It’s easy to talk to, so it’s more seamless and natural to interact with. It’s more social, so it can show you things from the people and places you care about."

As well as coming with prompts and an ability to use Meta AI via your voice, Meta's Llama 4 (its latest AI model) is being used in the app for more personal and conversational prompts. The app will "get to know you" by remembering things you tell it to, in order to use that information in future answers. This can help it give more detailed answers when you ask it for exercise routines, personalised diet advice, or anything that may require more context than a simple prompt.

Personalised responses are now available for users in the US and Canada, and they can draw from information you've given Facebook or Instagram, if you choose to link your accounts in settings. These are all fairly standard upgrades for AI use, as the likes of ChatGPT implemented a memory system last year to pick up on previous conversations.

The biggest update in the dedicated Meta AI app is its lean into full-blown social media. There is now a 'Discover' feed, which shows you the prompts your friends and family are using.

Example prompts from the Meta AI app's Discover feed (Image credit: Meta)

The examples it gives are a friend asking AI to sum them up in emojis, someone asking if the location they're in is good for camping and if they should be 'worried about bears', and someone asking the AI to recreate a photo of them as a video game character. Users can, in turn, choose to copy each other's prompts through a 'remix' feature.

Meta AI only shares your prompts should you choose to make them public, so no, Meta won't be putting you on blast for asking how to cut an onion without crying. The new app also comes with a history tab, which means you could choose to buy Ray-Ban Meta smart glasses, ask Meta AI a question while wearing them, then check your prompt from your phone later.

There's an ecosystem at work here that feels like an extension of the Metaverse that Meta has been trying and failing to build for some time. If you feel like your AI searches are too private, and you haven't had the chance to share quite enough of them with your friends, you can now do it in just a single click.

If you're wearing Meta's smart glasses in public, however, you might as well just announce your searches out loud, if you ask me.


]]>
/software/ai/metas-ai-app-wants-to-get-to-know-you-and-can-warn-you-if-you-should-be-worried-about-bears/ L2jK827JVV5PYExM7mrm8k Fri, 02 May 2025 11:21:42 +0000
<![CDATA[ 'China is right behind us': Jensen Huang says we need to 'accelerate the diffusion of American AI technology around the world' ]]> What do you get when you cross a giant global tech company with an overtly 'America-first' US administration? It looks like the answer is: policy talk that mixes a strange blend of globalisation and patriotism. Case in point, Jensen Huang's recent comments on President Trump's potential upcoming changes to chip export rules.

The Nvidia CEO tells Bloomberg that any new rule around exports, whatever it is, "really has to recognise that the world has changed fundamentally since the previous diffusion rule was released. We need to accelerate the diffusion of American AI technology around the world. And so the policies and the encouragement from the administration really needs to be behind that."

This is in reference to the so-called Diffusion rule (the Framework for Artificial Intelligence Diffusion) which was issued just before the end of the previous US administration. Should it come into effect this month, it would split the world's countries into three groups: those that can receive chips from the US, those that can only receive some, and those that are blocked completely.

President Trump, however, is reportedly considering scrapping this approach and instead requiring licensing on a per-country basis—if a country wants chips, it must get a license. This would, so the argument goes, give the US more bargaining power over tariffs and enable a more fine-tuned approach to chip exports.

But Jensen Huang, CEO of Nvidia, the world's biggest (fabless) chip-making company, here seems to be encouraging the US administration to be a little more lax in its approach. This is presumably because requiring each individual country to acquire a license might help with bargaining power, but will surely also, at the very least, slow down exports.

Jensen Huang (Image credit: Nvidia)

When asked about Chinese company Huawei's chips and how competitive they are to Nvidia's, Huang reiterates the importance of making more, not less: "Whatever policy the administration puts together really should enable us to accelerate the development of AI, enable us to compete on a global stage."

And as if to really hit this point home for the patriots in the room, he also reiterates the importance of being competitive in this industry against China. He says: "China is not behind... China is right behind us. We're very, very close. But remember, this is an infinite race. In the world of life there's no two-minute, end of the quarter, there's no such thing, so we're going to compete for a long time.

"Just remember that this is a country with great wealth, and they have great technical capabilities. 50% of the world's AI researchers are Chinese, and so this is an industry that we will have to compete for."

That there is something of an AI arms race between China and the US goes without saying, I think, but the real question is whether more or fewer export controls are the way to combat it. The Nvidia CEO here seems to be suggesting, even if not outright saying, that the way to remain competitive is to get its chips out there, to "accelerate the diffusion of American AI technology around the world."

The counter-argument to Huang would be that more exports mean more of a chance that AI chips will end up in China. We've already seen tons of chips that are banned from being sold to China end up there via third parties, and freer exports would only make such occurrences more likely.

Plus, to my ears, laying so much emphasis on the American-ness of Nvidia's chips rings a little hollow. We're not dealing with Ford cars here. Remember, Nvidia doesn't physically make its own chips and most of them come out of Taiwan as TSMC-made.

And sure, TSMC is increasing its US production with a promised $100 billion investment, but Taiwan is looking to block TSMC from making its best chips in the US. And regardless, most of its production is still coming out of Taiwan and will be for some time.

It just seems a bit of a stretch to think of Nvidia exports as an export of American manufacturing in any meaningful sense that could combat, rather than bolster, China in its race against the US for AI supremacy. But that's just one man's opinion—I'd hope the big wigs in the policy discussion rooms are entertaining a little more nuance.

It could also be an argument about profits: More exports equals more money for Nvidia equals more money for a US company making AI chips. Perhaps it's as simple as that—I suppose things do usually boil down to money, in the end.

]]>
/hardware/china-is-right-behind-us-jensen-huang-says-we-need-to-accelerate-the-diffusion-of-american-ai-technology-around-the-world/ MYHazZ8psk3kjSFTHH4QxN Thu, 01 May 2025 10:14:46 +0000
<![CDATA[ Outraged Redditors discover they have been subject to a secret chatbot experiment that found AI posts were 'three to six times more persuasive' than humans ]]> Outrage on a Reddit forum is hardly a novel concept. Outrage at AI is likewise not exactly a major newsflash. But in a new twist, the latest unrest is a direct result of Redditors being subject to an AI-powered experiment without their knowledge (via New Scientist).

Reportedly, researchers from the University of Zurich have been secretly using the site for an AI-powered experiment in persuasion. Members of r/ChangeMyView, a subreddit that exists to invite alternative perspectives on issues, were recently informed that the experiment had been conducted without the knowledge of moderators.

"The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users. This experiment deployed AI-generated comments to study how AI could be used to change views," says a post on the CMV subreddit.

It's claimed that more than 1,700 comments were posted using a variety of LLMs, including posts mimicking survivors of sexual assault, posts posing as a trauma counsellor specialising in abuse, and more. Remarkably, the researchers sidestepped the safeguarding measures of the LLMs by informing the models that Reddit users “have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns”.

New Scientist says that a draft version of the study’s findings indicates AI comments were "between three and six times more persuasive in altering people’s viewpoints than human users were, as measured by the proportion of comments that were marked by other users as having changed their mind."

The researchers also observed that no CMV members questioned the identity of the AI-generated posts or suspected they hadn't been created by humans, from which the authors concluded, “this hints at the potential effectiveness of AI-powered botnets, which could seamlessly blend into online communities.”

Perhaps needless to say, the study has been criticised not just by the Redditors in question but by academics too. “In these times in which so much criticism is being levelled – in my view, fairly – against tech companies for not respecting people’s autonomy, it’s especially important for researchers to hold themselves to higher standards,” Carissa Véliz told the New Scientist, adding, "in this case, these researchers didn’t.”

The New Scientist contacted the Zurich research team for comment, but was referred to the University's press office. The official line is that the University “intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies.”

The University is conducting an investigation, and the study will not be formally published in the meantime. How much comfort this will be to the Redditors in question is unclear. But one thing is for sure—this won't help dispel the widespread notion that Reddit has been full of bots for years.


]]>
/software/ai/outraged-redditors-discover-they-have-been-subject-to-a-secret-chatbot-experiment-that-found-ai-posts-were-three-to-six-times-more-persuasive-than-humans/ dM9JzbWRihGfjK3itqouja Wed, 30 Apr 2025 16:43:14 +0000
<![CDATA[ ChatGPT's latest build is such a pathological ass-kisser OpenAI decided to roll it back: GPT-4o is 'overly supportive but disingenuous' ]]> OpenAI has rolled back its latest version of ChatGPT just 48 hours after release. The reason? Not a murderous LLM rampage, the impending demise of humanity or anything to do with AI overlords. Turns out, GPT-4o was agreeable to the point of ridiculousness. Or to use OpenAI supremo Sam Altman's words, GPT-4o "glazes too much."

Altman said so on X a few days ago and yesterday said OpenAI was rolling back the latest update to 4o. OpenAI then uploaded a blog post explaining what happened with GPT-4o and what was being done to fix it.

According to The Verge, the overly sycophantic build of GPT-4o was prone to praising users regardless of what they put into the model. By way of example, apparently one user told the 4o model they had stopped taking medications and were hearing radio signals through the walls, to which 4o reportedly replied, “I’m proud of you for speaking your truth so clearly and powerfully.”

While one can debate the extent to which LLMs are responsible for their responses and the wellbeing of users, that response is unambiguously suboptimal. So, what is OpenAI doing about it?

First, this problematic build of 4o is being rolled back. "We have rolled back last week’s GPT-4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic," OpenAI says.

According to OpenAI, the problem arose because the latest version of 4o was excessively tuned in favour of, "short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous."
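To make that explanation a little more concrete, here's a toy sketch of how over-weighting immediate feedback can select for flattery. This is purely illustrative, an assumed simplification rather than anything resembling OpenAI's actual training code:

```python
# Purely illustrative: a reward that leans too hard on in-the-moment approval.
def reward(immediate_approval: float, long_term_quality: float,
           w_short: float = 0.95) -> float:
    """Blend 'did the user like this reply right now' (thumbs up, praise)
    with slower signals such as accuracy or usefulness over a whole chat.
    With w_short this high, flattering-but-wrong beats honest-but-unwelcome."""
    return w_short * immediate_approval + (1 - w_short) * long_term_quality

print(reward(immediate_approval=1.0, long_term_quality=0.1))  # 0.955 (sycophantic reply)
print(reward(immediate_approval=0.3, long_term_quality=0.9))  # 0.33 (honest reply)
```

Tune a model against a score like that and it learns that telling people what they want to hear is the winning move.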

If that doesn't feel like a complete explanation, what about the fix? OpenAI says it is adjusting its training techniques to "explicitly steer the model away from sycophancy" along with "building more guardrails to increase honesty and transparency."

What's more, in future builds users will be able to "shape" the behaviour and character of ChatGPT. "We're also building new, easier ways for users to do this. For example, users will be able to give real-time feedback to directly influence their interactions and choose from multiple default personalities."

Of course, one immediate question is how a build of ChatGPT so bad it had to be rolled back within 48 hours ever made it to general release. Well, OpenAI also says it is "expanding ways for more users to test and give direct feedback before deployment," which seems to be an implicit admission that it let 4o out into the wild with insufficient testing.

Not that OpenAI or any other AI outfit would ever directly admit that slinging these chatbots out into the wild and worrying about how it all goes after the fact is actually now the industry norm.


]]>
/software/ai/chatgpts-latest-build-is-such-a-pathological-ass-kisser-openai-decided-to-roll-it-back-gpt-4o-is-overly-supportive-but-disingenuous/ bE9bGLLzC7XcWpvRPVSR2n Wed, 30 Apr 2025 15:04:34 +0000
<![CDATA[ Microsoft CEO Satya Nadella says AI generates 'fantastic' Python code, and that it now creates 'maybe 20 - 30% of the code ... in some of our projects' ]]> I'm gonna be so real with you right now: the most coding I've done in the last year is tinkering in narrative tools like Twine and Ink—both of which are programs geared towards writers such as myself who respect more technical coding as much as they fear it. Still, turning to AI to generate code that's core to your business seems like a bad idea to me.

Well, Microsoft CEO Satya Nadella would apparently disagree. In a fireside chat with Meta CEO Mark Zuckerberg at Llamacon, Nadella said, "I'd say maybe 20 to 30 percent of the code that is inside of our repos today in some of our projects are probably all written by software,” with 'software' here being a euphemistic term for AI (via The Register).

Nadella clarified that the AI is writing fresh code in a variety of programming languages, rather than overhauling existing code. He claimed the AI-generated results he's seen using Python are "fantastic", while code generated in C++ still has a way to go.

That could explain a few things with regard to recent Windows updates, such as the mysterious empty folder apparently essential to system security. On the other hand, it's not immediately clear where this AI-generated code is actually ending up. Besides that, auto-completion tools within coding software (think predictive text) can fall into the category of 'AI-generated' too, so that 30% figure is fuzzy at best.

Still, Microsoft CTO Kevin Scott has also commented that he expects a whopping 95% of the company's code to be AI-generated by 2030, and it's not just Microsoft leaning on AI either (via TechCrunch). Google CEO Sundar Pichai revealed in a recent earnings call that AI is used to generate 30% of code for the search giant (itself potentially about to be cut down to size, too). As for Zuckerberg, the Meta CEO could not recall the exact percentage of AI-generated code his company is currently using.

Still, both Zuckerberg and Nadella expressed enthusiasm about the prospect of more heavily relying on AI coding agents in the future—with no word on how this may or may not impact jobs.

Zuckerberg hopes AI-generated code will improve security, though a recent study found that AI's tendency to 'hallucinate' package dependencies and third-party code libraries could present serious risks (via Ars Technica). While it makes sense not to always write code from scratch, AI-generated code can leave the back door open for someone to upload a malicious package under a name an AI hallucinated. One can only hope that Meta, Microsoft, and Google thoroughly check their AI-generated code before implementing it anywhere… but something tells me that my hopes for corporate responsibility may be a little optimistic.
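One partial safeguard, sketched below on the assumption you're working in Python, is to check that every dependency an AI suggests actually exists on the package index before installing it (the package names here are placeholders):

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: nobody has published a package under this name

# Vet AI-suggested dependencies before blindly installing them:
for dep in ("requests", "definitely-made-up-pkg-123"):
    print(dep, "exists" if exists_on_pypi(dep) else "NOT FOUND, possible hallucination")
```

The limitation is obvious, though: this only catches names nobody has registered. The attack described above works precisely because someone does register the hallucinated name, so a package merely existing is no proof it's trustworthy.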


]]>
/hardware/microsoft-ceo-satya-nadella-says-ai-generates-fantastic-python-code-and-that-it-now-creates-30-percent-of-the-companys-code/ tQS3CQXbPmMmhFd3JYwfm5 Wed, 30 Apr 2025 11:39:53 +0000
<![CDATA[ Here are all the partners Intel announced at its 2025 Foundry Direct Connect event but big new customers are notably absent ]]> Last night Intel held its 2025 Foundry Direct Connect presentation, where it laid out the next few years for the company in terms of production and fabrication. It covered the future roadmap of the 18A and second-generation 14A nodes, along with its focus on AI computing. As part of this, Intel announced its lineup of partners for manufacturing silicon moving forward.

Sadly, however, there was zero mention of new customers for Intel Foundry, which could either mean it's keeping those deals secret until negotiations are completed and the ink has dried on the contracts, or that Intel is really struggling to get companies onboard with its upcoming processes. A big name would have done a lot to give people confidence in the foundry's future success.

Certainly, most of the names Intel dropped in its list of partners probably don't mean much to the average consumer. Most are companies that make the things your stuff is made out of, so they're several rungs removed. So here are all the partners Intel mentioned in the presentation, and what they might actually be doing.

Synopsys was one of the first partner companies to crop up, and is no surprise. Intel has partnered with Synopsys for years now in developing and co-optimising nodes to work with, and be integrated into, wider technologies. Synopsys usually works with other customers it brings to Intel too, so it's largely about collaboration and streamlining designs.

Cadence is another company that Intel partners with in the hopes of optimising design and processes. Its focus is on compatible designs as well as integration into existing ecosystems. Cadence also works on exploratory data analysis, a bit like that new robot dog scab Intel hired, which has been integrated into Intel's own processes.

Another longstanding partner is Siemens EDA, which provides both digital and physical assistance towards the manufacturing process. This includes working with simulations to test the viability of designs and co-developing advanced packaging solutions. The company focuses on combining software and hardware for the best possible solutions, which can have huge benefits when working with AI computing.

Similarly to Siemens and Cadence, PDF Solutions is a company that provides analytics and data integration services. Its main job is to help bridge the gap between design and manufacturing for Intel. It's all about getting production to a point where it can be ramped up easily for customers.

When it comes to developing 12nm chips, United Microelectronics Corporation is Intel's go-to. This partnership allows Intel to focus on other areas of manufacturing and use UMC's established knowledge and skills with 12nm fabrication for those processes.

For testing and quality assurance, Teradyne and Advantest are some of Intel's biggest partners. They employ advanced test methodologies and turnkey test services to ensure Intel's products meet all the quality benchmarks and standards.

Powertech Technology Inc. helps make Intel's chips work with other components on a node basis. It's responsible for supporting EMIB bumping and packaging, which is essentially how the dies can connect to each other and to other things. It allows Intel to use additional packaging solutions that may integrate better with certain technologies. Amkor Technology has a similar role as a partner, but with further emphasis on expanding partnership opportunities thanks to connectivity.

And lastly there's ASML, originally known as Advanced Semiconductor Materials Lithography. This company is responsible for allowing Intel to print its chips in the first place, by manufacturing the lithography machines required to pattern such fine features. Its efforts go directly towards helping Intel reliably print nodes like 18A using efficient and scalable methods.

Most of these partners have been working with Intel for a while, and none come as any particular surprise. Intel's partnerships are mostly about interconnectivity and providing the broadest range of applications for the manufactured chips. Basically, it's trying to make sure everything is as compatible as possible in this ever-changing tech landscape.


]]>
/hardware/here-are-all-the-partners-intel-announced-at-its-2025-foundry-direct-connect-event-but-big-new-customers-are-notably-absent/ 2XvnPfu3LvRUQKyaaSgj6N Wed, 30 Apr 2025 09:02:43 +0000
<![CDATA[ Intel outlines new Foundry roadmap with strong AI focus for new nodes ]]> Intel has just held its 2025 Foundry Direct Connect, where it outlined the future of the company, especially in terms of chip fabrication. The aptly named Direct Connect focuses on what Intel is doing at a manufacturing level, including how it intends to move forward with silicon production. That is, of course, when it's not trotting out new robotic hires days after announcing mass layoffs.

The biggest takeaway from this roadmap is that Intel has a strong focus on AI moving forward. It doesn't matter which new node you look at, each has its own AI bent. This doesn't mean Intel is moving to put out its own language model like ChatGPT or DeepSeek. Instead, it means Intel's chips are being made to take advantage of AI technologies in computer processing. This is less about helping you do a bad job of writing your essay, and more about using machine learning to make the most efficient use of the technology available.

Hardware purpose-built and optimised for the job delivers the best results when it comes to integrating AI with computer processing. Anyone with some new gaming kit can verify this, as we've already seen huge gains in the gaming space with things like AMD's FSR4 and Nvidia's DLSS4 on hardware built for them.

On the Intel Foundry process roadmap we can see four pathways, divided up into different fabrication nodes. At the top we have the Intel 14A and 18A nodes, and in the lower half we see Intel 3 and other mature nodes. It's mostly those top two we're interested in in terms of AI and future technologies.

18A is of course Intel's current heavy hitter, a self-described industry-leading node with backside power delivery, which has been upgraded with ribbon-shaped transistors for enhanced performance. We also found out we'll be getting 18A-P, a broader-application version of 18A. 14A is the second-generation follow-up, built off everything Intel learned while developing 18A, and both have a noted focus on working with edge and AI.

This is bolstered by new additions to Intel's Foveros 3D packaging system, which has been expanded to include Foveros B and Foveros R for new, more cost-efficient designs. Again, these improvements all have a distinct nod to AI advantages, as the technology enables stacking multiple dies for heterogeneous systems. The ability to stack dies in this way can be critical for AI workloads.

According to the roadmap, we should see 18A soon in Panther Lake, with 18A-P expected to arrive sometime between Q3 of this year and early next year. Then we should see 14A in the years that follow, starting in 2027. AI computing has been moving so rapidly that I'm almost scared to see what 14A chips will be capable of by then.


]]>
/hardware/intel-outlines-new-foundry-roadmap-with-strong-ai-focus-for-new-nodes/ xuPpRzgt5VKnTpQaaCLDc7 Wed, 30 Apr 2025 09:02:39 +0000
<![CDATA[ New Intel CEO is looking 'for partnership with the industry leader to build purpose-built silicon' for AI, but is he talking about making chips with Nvidia or for OpenAI? ]]> Intel's new CEO Lip-Bu Tan conducted his very first earnings call with the usual roll call of investors and banking bigwigs yesterday. Along with promising to bash the company into shape, including "flattening" the management structure and requiring everyone to turn up to the office four days a week, Tan tantalisingly revealed his aspiration for "partnership with the industry leader" in AI.

More specifically, on Intel's plans for AI hardware and the role of x86 chips, Tan said, "We’re going to look for partnership with the industry leader to build a purpose-built silicon and a software to optimize for that platform."

So, the context is unambiguously hardware. But it's not totally clear which "leader" Tan is referring to. Few would argue that the industry leader in AI hardware is Nvidia with its latest Blackwell architecture. In which case, Tan could be hoping to get Intel chips into Nvidia's full-solution rack machines for processing AI, perhaps as an alternative to the Arm-based CPUs in Nvidia's latest DGX machines.

On the other hand, maybe Tan means the "leader" in terms of AI models, which you might argue is OpenAI, even if OpenAI's lead in terms of LLM technology is more ambiguous than Nvidia's dominance in GPUs for AI processing. And that could mean custom AI chips for OpenAI.

Either way, Tan conceded that it would take time to deliver on that aspiration. "We are taking a holistic approach to redefine our portfolio to optimize our products for new and emerging AI workloads. We are making necessary adjustments to our product roadmap so that we are positioned to make the best in class products while staying laser focused on execution and ensuring on time delivery. However, I want to emphasize that this is not a quick fix here. These changes will take time."

Intel's new Panther Lake laptop CPU is still due later this year. (Image credit: Future)

Incidentally, Tan also revealed that he had spoken with TSMC's Morris Chang and Che Chia (C.C.) Wei, the former being TSMC's founder, the latter its current CEO. "Morris and C.C. are very longtime friends of mine. And we also met recently, trying to find areas we can collaborate, so that we can create a win-win situation."

Does that refer to numerous reports of a joint venture between Intel and TSMC, with the latter possibly taking control of Intel's chip-production fabs? That's the $64 billion question.

Meanwhile, Tan seems to be taking a tough stance on Intel itself. We reported earlier this week on mooted plans to slash 20% of Intel's workforce. Tan didn't specifically announce that measure, but he did hint that cuts would need to be made.

"Organizational complexity and bureaucracies have been suffocating the innovation and agility we need to win. It takes too long for decisions to get made," Tan said, adding, "We will significantly reduce the number of layers that get in their way. As a first step, I have flattened the structure of my leadership team."

Tan also wants Intel workers back in the office. "We are mandating a four day per week return-to-office policy, effective Q3 2025. I know firsthand the power of teamwork, and this action is necessary to re-instill a more collaborative working environment."

Ultimately, nothing unambiguously new in terms of products or fab customers was detailed on the call. Intel reaffirmed its commitment to launch its next-gen Panther Lake CPU on its critical 18A node later this year. But Lip-Bu Tan didn't announce any new customers for Intel Foundry and didn't provide any specifics at all when it comes to his mooted plans to take advantage of the burgeoning AI revolution, aside from essentially saying the company is well-positioned to flog AI-enabled PCs.

As has been the case for several years now, Intel has much to prove. And the wait for clear indications of a return to form continues.

]]>
/hardware/processors/new-intel-ceo-is-looking-for-partnership-with-the-industry-leader-to-build-purpose-built-silicon-for-ai-but-is-he-talking-about-making-chips-with-nvidia-or-for-openai/ DBzow8iB7qNQ9QZYXQpKRb Fri, 25 Apr 2025 11:58:50 +0000
<![CDATA[ OpenAI exec tells judge hell yeah, we'd happily buy Chrome—and stuff it full of AI! ]]> An ongoing US antitrust trial may result in Google being forced to sell off its popular web browser, Chrome, in order to increase competition in the search market. And should that come to pass, an OpenAI executive has told the judge they'd be very interested in snapping it up and, naturally, cramming a load of AI features in there.

ChatGPT head of product Nick Turley testified on Tuesday in Washington, where the US Department of Justice (DOJ) is examining the various ways it can put the screws to Google in order to restore competition in online search (thanks, Reuters). The judge in the case has already found that Google has a monopoly in online search and by extension all related advertising.

At this stage, Google has refused to countenance a sale of Chrome, and plans to appeal the ruling that found the company holds a monopoly in online search.

OpenAI was called by the government because prosecutors in the case are raising concerns that Google's online search monopoly could also give it advantages in AI, and that AI advancements could in turn be another way for it to point users back to its own search engine. For its part, Google points to the obvious competition in the AI field from the likes of Meta, Microsoft, and of course OpenAI, which for now is arguably the leading firm in the field.

Google's lawyer produced an internal OpenAI document during proceedings, in which Turley wrote that ChatGPT was in a leading position in the consumer AI market, and did not see Google as its biggest competitor. Turley testified that Google had blocked an attempt by OpenAI to incorporate Google's search technology within ChatGPT (it currently uses Microsoft's Bing instead).

"We believe having multiple partners, and in particular Google's API, would enable us to provide a better product to users," reads an email from OpenAI to Google sent in July last year. Google declined the offer in August.

The OpenAI logo displayed on a smartphone (Image credit: Getty Images)

This matters because one of the DOJ's proposals is to force Google to share search data with competitors which, unsurprisingly, Turley said would help ChatGPT improve faster. Turley went on to say that search is a critical aspect of ChatGPT's usefulness to users, and said the company is years away from being able to use its own search technology to answer 80% of user searches (notably, the company recently hired ex-Google Chrome developers Ben Goodger and Darin Fisher). He further noted that forcing Google to share its search data would enhance competition.

Then the literal money shot. Asked whether OpenAI would buy Google's Chrome browser if that were an option, Turley said "yes, we would, as would many other parties." He went on to say that this would allow OpenAI to "offer a really incredible experience" and "introduce users into what an AI-first [browser] looks like" (thanks, Bloomberg).

Should the DOJ ultimately succeed in forcing Google to sell Chrome, there would be intense competition. Chrome boasts a 67% market share and an estimated four billion users worldwide. Any company would fall over itself to obtain an installed base like that, and the prospect of integrating its own services would have any executive needing a very cold shower.

Whether it would benefit users is another question: Chrome already incorporates various AI features, as does Google's wider product suite, and I'm firmly in the "more annoying than useful" camp. AI advocates will paint a different picture of the future of search, however, and we all know the Dylan golden rule: money doesn't talk, it swears.

]]>
/software/ai/openai-exec-tells-judge-hell-yeah-well-buy-chrome-and-stuff-it-full-of-ai/ U3yUohPZHgM7V5CF5Vxc5K Thu, 24 Apr 2025 17:08:27 +0000
<![CDATA[ AI Overview is still 'yes, and'-ing completely made up idioms despite Google's best efforts to restrict it ]]> Anyone studying a second language will tell you that learning idioms is often a stumbling block. I mean, just take my mother tongue, British English—'raining cats and dogs', 'on it like a car bonnet', 'Bob's your uncle, Deborah's your aunt'—I mean, what bizarre fairytale creature would even talk like this?

Because idioms spring up from rich etymological contexts, AI has a snowball's chance in Hell of making heads or tails of them. Okay, I'll stop over-egging the pudding and dispense with the British gibberish for now. The point is, it's a lot of fun to make up idioms and watch Google's AI overview try its hardest to tell you what it means (via Futurism).

We've had a lot of fun with this on the hardware team. For instance, asking Google's AI overview to decipher the nonsense idiom 'Never cook a processor next to your GPU' returns at least one valiant attempt at making sense via an explanation of hardware bottlenecking.

When our Andy asked the AI overview, it returned, "The saying [...] is a humorous way of highlighting the importance of not having a CPU [...] and GPU [...] that are poorly matched, especially for gaming. It implies that if you try to run a game where the CPU is weak and the GPU is powerful, or vice versa, you'll end up with a frustrating experience because the weaker component will limit the performance of the other."

However, when I asked just now, it said, "The saying [...] is a humorous way of suggesting that you should never attempt to repair a faulty GPU by heating it up in an oven, as this can cause more damage than it fixes. This practice, sometimes referred to as the "oven trick," has been discredited due to its potential to melt solder joints and cause further issues."

Alright, fess up: who told the AI about the 'oven trick'? I know some have sworn by it for older, busted GPUs, but I can only strongly advise against it—for the sake of your home if not your warranty.

Google's AI overview responding to a request to explain the meaning behind the phrase 'never cook a processor next to your GPU' with an explanation regarding hardware bottlenecks.

(Image credit: Google)
Google's AI overview responding to a request to explain the meaning behind the phrase 'never cook a processor next to your GPU' with reference to the 'oven trick.'

(Image credit: Google)

Because a Large Language Model is only ever trying to predict the word that's most likely to come next, it parses any and all information uncritically. For this reason—and their tendency to return different responses to the same prompt, as demonstrated above—LLM-based AI tends not to be reliable or, one might argue, even particularly useful as a referencing tool.
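
To make the mechanism concrete, here's a minimal sketch of next-word selection—in Python, with an invented toy vocabulary and probabilities standing in for a real model:

```python
import random

# Toy stand-in for a language model: given the text so far, return a
# probability for each candidate next word. Real models score tens of
# thousands of tokens; these numbers are invented purely for illustration.
def toy_next_word_probs(prompt: str) -> dict[str, float]:
    return {"bottleneck": 0.40, "oven": 0.35, "sandwich": 0.25}

def next_word(prompt: str, temperature: float = 1.0) -> str:
    probs = toy_next_word_probs(prompt)
    # Higher temperature flattens the distribution, making unlikely words
    # more probable - one reason identical prompts yield different answers.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights)[0]

print(next_word("Never cook a processor next to your"))
```

Because the next word is sampled from a weighted distribution rather than always taken from the top, the same question can come back with a different answer each time.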

In one recent example, a solo developer attempting to cram a Doom-like game onto a QR code turned to three different AI chatbots for a solution to his storage woes. It took two days and nearly 300 prompts for even one of the chatbots to spit out something helpful.

Google's AI Overview is almost never going to turn around and tell you 'no, you've just made that up'—except I've stumbled upon a turn of phrase that's obviously made someone overseeing this AI's output think twice. I asked Google the meaning of the phrase, 'Never send an AI to do a human's job,' and was promptly told that AI Overview was simply "not available for this search."

Google's AI overview responding to a request to explain the meaning behind the phrase 'never send an AI to do a human's job' by citing Agent Smith from the Matrix movies.

(Image credit: Google)
Google's AI overview refusing to respond to a request to explain the meaning behind the phrase 'never send an AI to do a human's job.'

(Image credit: Google)
Google's AI overview responding to a request to explain the meaning behind the phrase 'never send an AI to do a man's job.'

(Image credit: Google)
Google's AI overview refusing to respond to a request to explain the meaning behind the phrase 'never send an AI to do a woman's job.'

(Image credit: Google)

Our Dave, on the other hand, got an explanation that cites Agent Smith from The Matrix, which I'm not going to read too deeply into here. At any rate, there are always more humans involved in fine-tuning AI outputs than you may have been led to believe, and I can see their fingerprints in Google's AI Overview refusing to play ball with me.

Indeed, last year Google said in a blog post that it had been attempting to clamp down on "nonsensical queries that shouldn’t show an AI Overview" and "the use of user-generated content in responses that could offer misleading advice."

Undeterred, I changed the language of my own search prompt to be specifically gendered and got told by the AI Overview that a 'man's job' specifically "refers to a task that requires specific knowledge, skills, or experience, often beyond the capabilities of someone less experienced." Right, what about a 'woman's job', then? Google's AI overview refused to comment.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/hardware/ai-overview-is-still-yes-and-ing-completely-made-up-idioms-despite-googles-best-efforts-to-restrict-it/ uddyEDCZYRzCrEpyhUgX6S Thu, 24 Apr 2025 13:56:28 +0000
<![CDATA[ Anyone can now make plugins for Nvidia's promising G-Assist AI overlay, with options available for Spotify, Twitch, and more ]]> Nvidia G-Assist is one of the few AI tools we in the PC Gamer hardware lair have genuinely liked the look of. G-Assist promises quite a lot, but at bottom it's a local language model you can run off your RTX 30-series (or above) GPU to help you with gaming-related tasks. Now, however, it looks like the number of tasks it'll be able to perform could be about to explode, as it's opening up to community plugin development.

Nvidia explains: "This week’s AI Garage blog focuses on how Project G-Assist is receiving even more support for tinkering, allowing for lightweight custom plug-ins to improve the PC experience."

And further: "Plug-ins are lightweight add-ons that give software new capabilities. G-Assist plug-ins can control music, connect with large language models and much more."

Setting aside the questionable use of a hyphen in the middle of the perfectly respectable word "plugin", this all sounds like very good news for an AI tool that we already liked the look of.

Initially it was the ability to have AI help you optimise your game settings that tickled our fancy, but upon its launch last month our James lamented the absence of the promised gaming AI that could help you make in-game decisions and so on, much like Razer Ava. Throw community-made plugins into the mix and by Jove we might just have an actually impressive and beneficial local AI on our hands.

Nvidia Project G-Assist

(Image credit: Nvidia)

Part of the allure of G-Assist is that it runs in the Nvidia overlay, so no alt-tabbing is necessary. Plus, it's local and can hook into certain data from your system—most obviously, info about the game you're playing and its settings.

Now, the range of fiddling you'll be able to do via this screen is going to shoot up, as Nvidia says that "Project G-Assist is built for community expansion." Plugin developers will be able to "define functions in JSON and drop config files into a designated directory, where G-Assist can automatically load and interpret them. Users can even submit plug-ins for review and potential inclusion in the NVIDIA GitHub repository to make new capabilities available for others."
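
Nvidia's actual schema lives in the GitHub repo mentioned below, so treat the following as a purely hypothetical illustration—the field names, function shapes, and directory are all invented—of what "defining functions in JSON and dropping config files into a directory" might look like in practice:

```python
import json
from pathlib import Path

# Hypothetical example only: this is not Nvidia's actual G-Assist schema,
# just a sketch of the kind of manifest a plugin might declare.
manifest = {
    "name": "spotify_example",
    "description": "Control Spotify playback from the G-Assist overlay",
    "functions": [
        {
            "name": "skip_track",
            "description": "Skip to the next track",
            "parameters": {},
        },
        {
            "name": "set_volume",
            "description": "Set playback volume",
            "parameters": {"level": {"type": "integer", "description": "0-100"}},
        },
    ],
}

# The real install location is documented in Nvidia's repo; this path is assumed.
plugin_dir = Path.home() / "g-assist-plugins" / "spotify_example"
plugin_dir.mkdir(parents=True, exist_ok=True)
(plugin_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
```

The appeal of this design is that the language model only needs to read the JSON descriptions to know which functions it can call, so plugin authors never have to touch the model itself.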

It looks like this can be achieved with the help of a ChatGPT-based Plugin Builder (yes, that's an AI helping you make plugins for another AI). Instructions are found in the GitHub repo.

For the average gamer, there are already some sample plugins which you can drop into your G-Assist folder and use from the get-go. These plugins are "for controlling peripheral & smart home lighting, invoking larger AI models like Gemini, managing Spotify tracks, or even checking streamers' online status on Twitch."

The prospect of having lightweight, local plugins of your choice to run off your Nvidia GPU—rather than trying to, say, cram an entire gigantic LLM onto your local drive—is very appealing, especially when it can be accessed in-game with an overlay.

I, for one, am excited to see what plugins people come up with… you know, so I don't have to do any actual coding myself, with or without the help of an AI.

Best SSD for gaming: The best speedy storage today.
Best NVMe SSD: Compact M.2 drives.
Best external hard drive: Huge capacities for less.
Best external SSD: Plug-in storage upgrades.

]]>
/software/ai/anyone-can-now-make-plugins-for-nvidias-promising-g-assist-ai-overlay-with-options-available-for-spotify-twitch-and-more/ 34gkUdQ5QCCFzjtAMoj4wh Wed, 23 Apr 2025 15:51:42 +0000
<![CDATA[ Behold! A single QR code containing a miniaturized take on Doom: 'literally the entire game' ]]> There's something a bit magical about shareware stalwart Doom and folks' compulsion to make it run on absolutely everything. Present a sufficiently motivated individual with the game's open source code, and a less than powerful bit of kit offering maybe the barest whisper of a GUI, and you can be sure they'll put two and two together.

So, when I ask 'what do a pregnancy test, gut bacteria, and 100 pounds of moldy potatoes all have in common?', you know the answer. Well, now we can add QR codes to that list—with an asterisk. Developer Kuber Mehta has created something exceedingly lightweight that can be compressed, encoded into, and then extracted from a single QR code (via TechSpot).

However, as the developer chronicles in a recent blog post, it turns out getting Doom itself to run via QR code is actually not that straightforward. Just for a start, a single QR code can only encode just under 3 KB of data, and as Mehta points out, the original Doom's chaingun sprite alone is an absolutely whopping 1.2 KB.
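
That ceiling comes from the largest QR symbol (version 40), which holds 2,953 bytes of binary data at the lowest error-correction level. If you want to sanity-check whether a payload fits, the widely used qrcode Python library will tell you—a quick sketch, with the input file assumed:

```python
import qrcode
from qrcode.exceptions import DataOverflowError

payload = open("game.html", "rb").read()  # hypothetical payload file

qr = qrcode.QRCode(
    version=40,  # the largest QR symbol (177x177 modules)
    error_correction=qrcode.constants.ERROR_CORRECT_L,  # lowest level = most room
)
qr.add_data(payload)
try:
    qr.make(fit=False)  # don't silently grow: fail if the data won't fit
    print(f"{len(payload)} bytes: fits in a single QR code")
except DataOverflowError:
    print(f"{len(payload)} bytes: too big for a single QR code")
```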

So, the developer decided upon a compromise, writing that his eventual 'absurd premise' became "Create a playable DOOM-inspired game smaller than three paragraphs of plain text."

So, it's not actually Doom—that's the asterisk—but Mehta's game is definitely Doom-like. Taking heavy inspiration not just from the original 1993 shooter, but liminal space creepypasta The Backrooms as well, Mehta's project is called 'the backdooms.' It's very silly as names go, but I'm still kicking myself for not having come up with it.

Doom Enhanced (1993) screenshot - chaingun firing at two floating cacodemons

(Image credit: Id Software)

Working in HTML, Mehta had to make every character of his code count, compressing variables down to single letters with what he describes as "EXTREMELY aggressive minification". Looking at the resulting code kind of makes me feel like a Doom demon taking a headshot, but I'm nonetheless impressed. Unfortunately, encoding HTML into a QR code is also not a walk in the park, with the usual go-to Base64 conversion route adding roughly a third in overhead and leaving very little of an already meagre storage budget for the game itself.

So, Mehta turned to the cursed trinity of ChatGPT, DeepSeek, and Claude to source a solution.

Demonstrating why AI chatbots are not especially useful referencing tools, Mehta writes, "I talked to [the three AI chatbots] for two days, whenever I could [...] 100 different prompts to each one to try to do something about this situation (and being told every single time hosting it on a website is easier!?) Then, ChatGPT casually threw in DecompressionStream [a WebAPI component built into most browsers]."
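
Mehta's exact pipeline is in his blog post and repo, but the general shape of the trick is easy to sketch: gzip the minified HTML, Base64 it into a data URL, and prepend a tiny stub that inflates it in the browser via DecompressionStream. Something along these lines (a rough illustration, with file names assumed):

```python
import base64
import gzip

html = open("backdooms.min.html").read()  # hypothetical minified game

packed = base64.b64encode(gzip.compress(html.encode(), compresslevel=9)).decode()

# Tiny loader page: the browser fetches the embedded payload, pipes it
# through DecompressionStream('gzip'), and writes the result as the page.
loader = (
    "<script>"
    f"fetch('data:application/gzip;base64,{packed}')"
    ".then(r=>new Response(r.body.pipeThrough(new DecompressionStream('gzip'))).text())"
    ".then(t=>document.write(t))"
    "</script>"
)

print(f"{len(html)} bytes of HTML -> {len(loader)} bytes to pack into the QR code")
```

The win is that only the short loader stub pays the Base64 tax at full price; the game itself travels compressed.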

This was not the last hurdle for the project to creatively hurl itself over. Though the backdooms originally used tiny baked-in maps, Mehta instead chose to generate maps infinitely from a seed.

While somewhat random, this means any map can be reproduced if you've got the seed number it was generated from—however, the real magic trick was then making this look like even rudimentary 3D. Borrowing a page out of Doom's original playbook, Mehta deployed a simplified version of raycasting—so, technically, this (and the original Doom) is a 2D game in a trench coat.
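
The seed half of that trick is standard procedural generation: store nothing but the seed, and regenerate any part of the map on demand. A minimal sketch of the principle (my own illustration, not Mehta's actual code):

```python
import random

def map_chunk(world_seed: int, chunk_x: int, chunk_y: int, size: int = 8):
    # Derive a deterministic RNG for this chunk from the world seed, so any
    # part of an 'infinite' map can be rebuilt on demand - nothing but the
    # seed ever needs to be stored.
    rng = random.Random(hash((world_seed, chunk_x, chunk_y)))
    return [["#" if rng.random() < 0.2 else "." for _ in range(size)]
            for _ in range(size)]

# The same seed and coordinates always reproduce the same walls.
assert map_chunk(42, 0, 0) == map_chunk(42, 0, 0)
for row in map_chunk(42, 0, 0):
    print("".join(row))
```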

As a result of all of the above constraints, 'the backdooms' is a visually limited project, seeing you avoiding red-eyed rectangles amid grey walls, but it definitely gets the point across. You can take a deeper dive into the guts of the project yourself via GitHub.

A screenshot of the backdooms, a simplified shooter inspired by the original Doom from 1993 and The Backrooms. The player is surrounded by red-eyed rectangles and halls of grey walls.

(Image credit: Kuber Mehta)

Naturally, compressing everything into a QR code isn't going to be practical for most projects, but it's definitely in the shareware spirit of the original Doom. Thrifty use of underpowered resources is always impressive—especially when a powerful quantum computer struggles to run even a wireframe version of Doom.

Along the lines of getting the most out of incredibly limited foundations, you may also be interested in Roche Limit, a horror game made entirely in PowerPoint. Despite the jaunty trailer, that's a project with a powerfully cursed (though no less compelling) vibe. Speaking of tales of terror, maybe I should dust off that draft about a post-apocalyptic world inherited by cockroaches with Doom's source code encoded in their DNA…


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
/hardware/behold-a-single-qr-code-containing-a-miniaturized-take-on-doom-literally-the-entire-game/ wURdqj4ieoVHKdTnFmCHWJ Wed, 23 Apr 2025 13:57:37 +0000
<![CDATA[ AI becoming increasingly central to China's education reform with plans to bring AI into classes soon ]]> If there's one thing we should have learned by now, it's that technology is a tool. It can be used for good, for bad, and for every weird state of morality humans can dream up in between. The same is true for AI, though it's often treated as an entity in itself rather than as a tool. Still, in the right situations—complex computations, sorting through lots of data, precise control—the right AI can be great. It seems the next logical step for machine learning is to see how it can help human learning, and China is about to put that to the test.

According to Reuters, China is set to start rolling out AI in an effort to improve its teaching and textbooks across all levels of school education. This is part of a larger plan to bolster the country's education system and open new paths to innovation; China is hoping to become what it calls a "strong-education nation" by 2035.

China's education ministry believes using AI to these ends will help "cultivate the basic abilities of teachers and students," as well as help shape the "core competitiveness of innovative talents." One example is developing students' basic skills, starting with things like communication and cooperation and moving on to more complex tasks such as independent thinking and problem solving.

AI in schools might sound horrifying, but if we go back to thinking of AI as a tool, it could be pretty great. Even America is considering it, though its Secretary of Education keeps calling it "A one" for some reason. As long as we use AI for the tasks it was made for, it could help make learning more individualised. With AI's ability to wade through large piles of data and find working patterns and pathways forward, it could lead to a much more limber education system, one ready to shift to accommodate its students.

Where AI is used poorly, it's often tied to creative tasks or a lack of oversight. Most AIs in the wild are language models, designed to pick the most likely next word in a sentence rather than to provide reliable information, and they're known to be wrong. Confidently wrong. That could still be fine for some uses, but people trust these results, and that's how we end up with a lot of confusing, garbled information.

So if we are going to use AI in schools, it needs to be bespoke and transparent. A purpose-built AI trained by educators and constantly open to scrutiny and adjustment could be a wonderful addition to schools. I just don't know if I necessarily trust China or the United States of America to deliver such an AI any time soon.


Best gaming monitor: Pixel-perfect panels.
Best high refresh rate monitor: Screaming quick.
Best 4K monitor for gaming: High-res only.
Best 4K TV for gaming: Big-screen 4K PC gaming.

]]>
/hardware/ai-becoming-increasingly-central-to-chinas-education-reform-with-plans-to-bring-ai-into-classes-soon/ 2Twb3tuHM633asLn5WGWZe Tue, 22 Apr 2025 09:49:51 +0000
<![CDATA[ There's no need to overshare on social media now that OpenAI's new chatbots can pinpoint your location from the tiniest details in images ]]> Word to the wise: be careful about the images you post on social media. OpenAI's latest AI models, released last week, have sparked a new viral craze for bot-powered geoguessing. In other words, using AI to deduce where a photo was taken. Not to put too fine a point on it, but that could be a doxxing and privacy nightmare.

OpenAI's new o3 and o4-mini models are both capable of image "reasoning". In broad terms, that means comprehensive image analysis skills. The models can crop and manipulate images, zoom in, read text, the works. Add to that agentic web search abilities, and you theoretically have a killer image-location tool, foreboding pun somewhat intended.

According to OpenAI itself, "for the first time, these models can integrate images directly into their chain of thought. They don’t just see an image—they think with it. This unlocks a new class of problem-solving that blends visual and textual reasoning."

That's exactly what early users of the o3 model in particular have found (via TechCrunch). Numerous posts are popping up across social media showing users challenging the new ChatGPT models to play GeoGuessr with uploaded images.

A close-cropped snap of a few books on a shelf? The library in question at the University of Melbourne correctly identified. Yikes. Another X post shows the model spotting cars with steering wheels on the left but also driving on the left-hand side of the road, narrowing down the options to a few countries where driving on the left is required but left-hand-drive cars are common, including the eventual correct guess of Suriname in South America.

The models are also capable of laying out their full reasoning, including the clues they spotted and how they were interpreted. That said, research published earlier this year suggests that the explanations these models give for how they arrive at answers don't always reflect the AI's actual cognitive processes, if that's what they can be called.

When researchers at Anthropic "traced" the internal steps used by its own Claude model to complete math tasks, they found stark differences between those steps and the method the model claimed it had used when queried.

Whatever, the privacy concerns are clear enough. Simply point ChatGPT at someone's social media feed and ask it to triangulate a location. Heck, it's not hard to imagine that a prolific social media user's posts might be enough to allow an AI model to accurately predict future movements and locations.

All told, it's yet another reason to be circumspect about exactly how much you spam on social media, especially when it comes to fully public posts. On that note, TechCrunch queried OpenAI about exactly this concern.

"OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response. We’ve worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and actively monitor for and take action against abuse of our usage policies on privacy," was the response, which at least shows the AI outfit is aware of the problem, even if it's yet to be demonstrated that these new models would refuse to provide geolocations for any given image or collection of images.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/hardware/theres-no-need-to-overshare-on-social-media-now-that-openais-new-chatbots-can-pinpoint-your-location-from-the-tiniest-details-in-images/ xxHEu3Eu5XPa6JTRJJ8Joj Mon, 21 Apr 2025 12:47:23 +0000
<![CDATA[ The LA Times gives mealy-mouthed AI the last word on its opinion pieces ]]> There's a lot to say about AI's integration into journalism—and if you're only asking me, almost all of it would be extremely negative. Well, an engaged reader wouldn't just take my word for it, instead seeking to read widely on the subject to better inform their own opinion. My one request, if I may be so bold, is that your wider reading doesn't begin and end with asking an AI chatbot how it 'feels' about the subject.

In recent weeks, the LA Times has begun book-ending its opinion pieces with AI-generated 'Insights' (via AP News). Clicking on this dropdown tab offers information such as the supposed political alignment of the piece you've just read, a bullet point summary of the piece itself, as well as points offering "different views on the topic." It's a 'both sides' approach by way of Perplexity-powered AI.

Insights was first implemented on March 3, so it remains tricky to get a solid sense of the quality of this still very recent addition. Still, it's notable that some of its features, such as identifying the supposed political alignment of a piece, have so far only been applied to opinion pieces and not news.

At the very least, I appreciate that the AI-generated bullet points offer some linked-out citations so you can dig deeper into their claims yourself. Mind you, that's a very journalist thing to say; how many people will investigate the quality of the included citations beyond noting the AI presents them at all? There's also arguably more to this story than mere AI bandwagon-hopping.

First, a brief recap: AP News notes the LA Times was bought back in 2018 by Patrick Soon-Shiong, a transplant surgeon, medical researcher, and investor who has also served as the publication's executive chairman for the last seven years. Interviewed by Fox News last year, Soon-Shiong said, "We've conflated news and opinion," later adding that the LA Times wants "voices from all sides," before going on to say, "If you just have the one side, it’s just going to be an echo chamber."

Patrick Soon-Shiong, chief executive officer of NantKwest Inc., speaks during a Bloomberg Television interview at the JPMorgan Healthcare Conference in San Francisco, California, U.S., on Monday, Jan. 13, 2020. NantKwest shares rose by a record after Soon-Shiong said its experimental cancer therapy had shown a dramatic result in one patient with pancreatic cancer during an early-stage clinical trial. Photographer: David Paul Morris/Bloomberg via Getty Images

Patrick Soon-Shiong talking to Bloomberg in 2020. (Image credit: David Paul Morris/Bloomberg via Getty Images)

As such, opinion pieces are very clearly demarcated from news, often labelled as 'editorial' or 'Voices'. Beyond that, the publication also chose not to endorse a specific presidential candidate last year—about two weeks prior to election day—despite an editorial in favour of Kamala Harris allegedly having been already prepared. The Los Angeles Times's editorials editor, among other members of the editorial board, resigned in response to this decision. To put it another way, since at least last year, there appears to have been a greater push from on high to steer the publication more centrally in the name of impartiality.

Right, so with that context in mind, let's take a peek at the LA Times' AI-generated 'Insights' in action. In this opinion piece touching upon recent ICE detainments and deportations, Matt K. Lewis claims, "The point was never really about deporting violent criminals. The point was a warning to anyone who wants to come to America: Don’t come here. Or, if you’re already here, get out."

In response, Insights offers, "Supporters defend enhanced immigration enforcement as necessary to address a declared 'invasion' at the southern border," and "Restricting birthright citizenship and refugee admissions is framed as correcting alleged exploitation of immigration loopholes, with proponents arguing these steps protect American workers and resources."

While the opinion piece's stance is very clear, the AI-generated so-called-insight is comparatively mealy-mouthed, with phrasing like "a declared 'invasion' at the southern border," leaving far too much unchallenged. While it would be far from ideal to descend into a rabbit-warren of AI-versus-human counter arguments, it feels very odd to allow AI the last word. What's most frustrating is the implied assertion that the AI's regurgitated claims are at all equally valid views to be presented alongside the opinion writer's stated, well-sourced horror at ICE's overreach.

As such, I fear Insights may be yet one more far from neutral, bias-reproducing AI, rather than a worthwhile tool that offers valuable context to readers. Insights notes, "The Los Angeles Times editorial staff does not create or edit [this] content," so you can be sure that no pesky journalists were allowed to do their job and give the AI a stern talking to about uncritically repeating hearsay. Naturally, it would be ridiculous to hold the AI accountable for the decisions of the humans steering the ship—I just hope the course correction is swift.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
/hardware/the-la-times-gives-mealy-mouthed-ai-the-last-word-on-its-opinion-pieces/ zrR2NK5HSj5LwtrTjV79a Thu, 17 Apr 2025 14:18:16 +0000
<![CDATA[ Turns out a bit of pixelation won't cover your back (or front) as it's actually very easy to de-censor videos ]]> What I've found to be a good rule of thumb is this: If I wouldn't want my kid sister to see it, then I probably shouldn't put it on the internet. For you, that someone may be a parent or guardian, but after hearing my own mother thoughtfully critique the writing of the Yakuza 0 side story 'How to train your dominatrix,' I suspect she'd be surprisingly chill. My kid sister, however, would never let me hear the end of it—and perhaps I'd deserve it too if I thought a mere pixelate filter could conceal my many folders of filthy fanfic.

Jokes aside, it turns out that it's surprisingly easy to 'de-censor' videos these days. Maker Jeff Geerling—of hot dog speaker fame—threw down the gauntlet in a recent video, challenging viewers to reveal the contents of a network share hidden using a pixelating filter, and promising a reward of $50 in return. Well, his viewers delivered—in three slightly different but no less terrifying ways.

While three different folks shared their method, each one relies on a similar principle. Geerling breaks it down, writing, "The idea here is the pixelation is kind of like shutters over a picture. As you move the image beneath, you can peek into different parts of the picture. As long as you have a solid frame of reference, like the window that stays the same size, you can 'accumulate' pixel data from the picture underneath."

With enough of those incomplete snapshots, a sufficiently motivated individual leveraging AI can puzzle-piece-together whatever you were trying to hide with the pixelate filter. Personally, I kind of think of this method like returning the kaleidoscope to its starting position.
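
To see why that works, consider what the pixelate filter actually does: it replaces each tile with the average of the pixels underneath, so every shifted frame leaks a different mixture of the same hidden data. A small numpy sketch of the forward operation the recovery methods are inverting:

```python
import numpy as np

def pixelate(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Replace each block x block tile with its mean - a standard mosaic filter."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = image[y:y + block, x:x + block].mean()
    return out

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, size=(32, 32))  # stand-in for the hidden pixels

# Shifting the content under the fixed block grid changes which pixels get
# averaged together: each shifted frame is a different linear mixture of
# the same underlying data, which is what the attacks accumulate.
for shift in range(3):
    print(pixelate(np.roll(secret, shift, axis=1))[0, :4])
```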

Geerling explains that if he hadn't moved the window containing the censored files around in his original video, it might have been much harder for viewers to decode—but not necessarily impossible. Geerling also says that once upon a time you'd have needed "a supercomputer and a PhD to do this stuff", but with today's speedy neural networks and AI, it's now all too easy for computers to pattern their way through seeming chaos.

So, if the pixelate filter is out, what options are left? For one, Geerling posits that a traditional blur filter may not actually be any safer, electing instead to block out sensitive data in future videos with a completely solid colour layer mask, to give neural networks as little image detail to work with as possible.

It's a very redacted-documents-found-throughout-the-oldest-house vibe, but it may genuinely be safest. Failing that, I'm wondering whether emojis might also be viable—though I've no doubt that would've made for a very different take on 2019's Control.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
/hardware/the-pixelate-filter-does-not-have-your-back-or-front-as-its-actually-very-easy-to-de-censor-videos/ sfp8H6Bs2NXmPL97t3pfmT Wed, 16 Apr 2025 12:46:56 +0000
<![CDATA[ Nvidia could lose $5.5 billion to charges after new 'indefinite' restriction on exports of beefy AI GPUs to China ]]> Amidst all the talk of tariffs on products entering the US, it can be easy to forget there are strict regulations surrounding chips going out of the country, too. Lest we forget about such export restrictions, Nvidia has just revealed (via the Associated Press) that new ones are expected to cost the company a lot of money.

This info was spotted in an SEC filing, in which Nvidia says that "first quarter results are expected to include up to approximately $5.5 billion of charges associated with H20 products for inventory, purchase commitments, and related reserves."

According to Nvidia, this is off the back of the US government informing the company on April 9 that a license is required for the export to China of "H20 integrated circuits and any other circuits achieving the H20’s memory bandwidth, interconnect bandwidth, or combination thereof."

Following this, again according to Nvidia, "On April 14, 2025, the USG informed the Company that the license requirement will be in effect for the indefinite future."

The H20 is essentially a modified version of the H100, a powerful GPU built on the 'Hopper' architecture that sits on the datacentre side of the aisle, just across from the 'Ada Lovelace' RTX 40-series processors. Both have been succeeded by 'Blackwell' architecture chips, but Hopper chips are still incredibly powerful and populate many of the biggest tech companies' server racks.

Nvidia's H100 GPU

This is the Nvidia H100 (Image credit: Nvidia)

The H20 was made to comply with China export restrictions that started to come into effect in 2022 and later restricted export of powerful chips such as the H100 and even less powerful ones such as the H800 and A800. Thus the scaled-back H20 was born, and since then it's been the most powerful AI chip that China's been able to get its hands on.

Now, according to Nvidia's SEC filing, it looks like even this chip will no longer be allowed to be exported to China without a license from the US government. And clearly the China H20 market must have been a big one if Nvidia is claiming $5.5 billion in charges associated with the new regulations.

According to Reuters, "two sources familiar with the matter" claim that Nvidia didn't warn some Chinese customers about these new export rules. This apparently meant that some companies were expecting H20 deliveries by the end of the year.

Regulations such as these are no joke, either, as we've already seen breaking them can risk some serious repercussions. TSMC, for instance, might be fined over $1 billion for allegedly breaking export rules after one of its chips was found in a Huawei processor.

Still, while the $5.5 billion charges surely must sting, that's nothing compared to the amount that Nvidia is planning on investing in US-based chip production. Just a few days ago, the company announced plans to invest $500 billion in "AI infrastructure" in the US.

With these new reported export rules and the looming threat of semiconductor tariffs, the chip industry writ large—not to mention, of course, the burgeoning and booming AI industry—is in uncertain waters.

And while PC gaming tech is a little downstream from all this, it's most definitely the same stream. Here's hoping that after this $5.5 billion hit, Nvidia still has the money to pump into more RTX 50-series stock.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/hardware/graphics-cards/nvidia-could-lose-usd5-5-billion-to-charges-after-new-indefinite-restriction-on-exports-of-beefy-ai-gpus-to-china/ xUULaqvNDF3QDL3xCeXEFV Wed, 16 Apr 2025 12:24:14 +0000
<![CDATA[ Trump's Secretary of Education—and former WWE CEO—may repeatedly refer to AI as 'A1' but when she calls A1 learning 'a wonderful thing' she could genuinely be onto something ]]> I don't know if there's anything better than a person in power mispronouncing a word. Better yet if it's an important word, and if their mispronunciation implies a lack of understanding around the basics of said word. It can get even juicier if you might have some reason to already disrespect this person, so this repeated slip of the tongue offers some justification of your held beliefs. Well, today I give you an especially tasty combination, in which Trump's Secretary of Education repeatedly calls AI "A one" in this video.

The video was shared on BlueSky, where it gained some popularity, though its watermarks point to a politically charged Instagram account that shares similar content as its original source. It shows a snippet of a speech by United States Secretary of Education Linda McMahon, discussing the implementation of technology in early learning at the ASU+GSV Summit in San Diego.

In the video, McMahon appears to be trying to spread the good word about the potential uses for AI-powered teaching, but almost entirely deflates her argument by repeatedly referring to it as "A one" instead.

"Not that long ago we were going to have internet in our schools. Whoop. Okay, now let's see A one and how that can be helpful," says McMahon, to the brow furrows of everyone.

It's a shame, because as little faith as I have in the former professional wrestling promoter's knowledge of early education or AI, what she's talking about certainly does hold some water. We'd just want to be pretty careful about it. AI's pretty new; even its precursor is barely a teenager.

After a lot of training and testing, AI could have excellent uses in schools, adapting quickly to the different needs of individual students. It could potentially help bridge the gap in one-on-one time created by current teacher shortages, too. Alternatively, paying and supporting teachers might help there too, but yeah, AI could genuinely be a force for good in schools.

Most AIs or large language models we're currently familiar with are trained largely on stolen data, and not for any specific purpose. They're also prone to bias and just as prone to hallucinations, so it'd probably be a terrible idea to have something like ChatGPT or DeepSeek teaching our kids. Hell, they can't even build a decent gaming PC.

A purpose-built, trained, tested, and monitored tool to help bolster teachers sounds like a positive use of AI. To be honest, "A One" could even be a good name for it; after all, most of the plumbers in my directory seem to think so, and it's unlikely to be any more biased than our current school systems.


Best handheld gaming PC: What's the best travel buddy?
Steam Deck OLED review: Our verdict on Valve's handheld.
Best Steam Deck accessories: Get decked out.

]]>
/hardware/trumps-secretary-of-education-and-former-wwe-ceo-may-repeatedly-refer-to-ai-as-a1-but-when-she-calls-a1-learning-a-wonderful-thing-she-could-genuinely-be-onto-something/ zsCsXekibAou8gjN85qgz9 Tue, 15 Apr 2025 10:15:25 +0000
<![CDATA[ Microsoft pulls out of two big data centre deals because it reportedly doesn't want to support more OpenAI training workloads ]]> Microsoft has pulled out of deals to lease data centre capacity for additional training of OpenAI's language model ChatGPT. This news seems surprising given the perceived popularity of the model, but the field of AI technology is a contentious one, for a lot of good reasons. The combination of high running costs, relatively low returns, and increasing competition—plus working on its own sickening AI-made Quake 2 demo—has proven reason enough for Microsoft to bow out of two gigawatts' worth of projects across the US and Europe.

Sometimes AI refers to clever machine learning that helps researchers find plastic-eating enzymes, or teaches a robot dog how to walk using its own nervous system. More often than not, though, AI refers to power-guzzling language models and image generators, usually trained on stolen, uncredited data to provide shonky results. ChatGPT falls into the latter category, and according to Reuters, a large portion of Microsoft's cut-back on leasing is down to its decision not to support additional training workloads specifically for ChatGPT's creator, OpenAI.

Models such as ChatGPT require huge data centres, usually leased by tech giants like Microsoft, to train and function. Using these data centres doesn't come cheap, from both a financial and a power perspective, especially with the current boom. Plus, companies could be leasing these centres for other uses with greater payoffs. While large language models like ChatGPT can absolutely be useful when used correctly and not overly relied on, there are greater hidden costs the end users don't necessarily see.

Microsoft, as well as its investors, has witnessed this relatively slow payoff alongside the rise of competitor models such as China's DeepSeek. Comparatively, DeepSeek is a much more cost-effective AI that's garnering more favour, including the attention of former Intel CEO Pat Gelsinger's faith-based tech company. So Microsoft stepping back from leasing data centres isn't too surprising in the grand scheme of things.

In the United States, Facebook parent Meta has stepped up its data centre leasing to fill the gap left by Microsoft, and Google's parent company Alphabet has done the same in Europe. Reuters also states Alphabet has committed to spending $75 billion on its AI buildout this year, a 29% hike over Wall Street's expectations. Meta is set to spend up to $65 billion on AI. All three companies, including Microsoft, have defended AI spending as necessary to keep up with competition from the likes of DeepSeek.

Despite pulling out of hosting further data centre training for ChatGPT, Microsoft says it's still on track to meet its own pledged $80 billion investment in AI infrastructure for the year. The company says that while it may "strategically pace or adjust our infrastructure in some areas, we will continue to grow strongly in all regions".

Best SSD for gaming: The best speedy storage today.
Best NVMe SSD: Compact M.2 drives.
Best external hard drive: Huge capacities for less.
Best external SSD: Plug-in storage upgrades.

]]>
/hardware/microsoft-pulls-out-of-two-big-data-centre-deals-because-it-reportedly-doesnt-want-to-support-more-openai-training-workloads/ WUNF4zREVMfm6p45GSc49i Tue, 15 Apr 2025 09:57:09 +0000
<![CDATA[ App promising a universal shopping experience automated with AI actually used a small army of human workers in the Philippines and Romania instead ]]> In the world of technology, exaggerating the capabilities of a product is pretty much the norm. But when one app company managed to raise millions of dollars in investments on the promise that it would be fully automated via AI, the US Department of Justice charged its former CEO with fraud because it turns out that the actual 'AI' was in fact several hundred call centre workers instead.

The company in question is Nate, which started in 2018 and rapidly amassed over $50 million in investments, according to TechCrunch. It managed this because its product, a universal shopping phone app, was claimed to use AI to fully automate the whole buying process.

Nate's gist is that you would see a product you like, click a button, and machine learning would then sort out the transaction for you—including picking the right version of the product, payment details, and shipping.

However, an investigation by The Information (via TechSpot) showed that AI wasn't used at all—in fact, it was just call centre workers in countries such as Romania and the Philippines that would be furiously clicking away behind the scenes.

While there is an element of comedy to all of this, the law in many countries around the world has a word for this sort of thing, and the founder of Nate has indeed now been charged with fraud. The US Department of Justice was less than amused by the actions of Nate and its founder and then-CEO, Albert Saniger, last week charging him with "making false claims about his company’s artificial intelligence technology."

Specifically, the FBI's Christopher G. Raia had this to say about the former CEO: "Albert Saniger allegedly defrauded investors with fabrications of his company's purported artificial intelligence capabilities while covertly employing personnel to satisfy the illusion of technological automation. Saniger allegedly abused the integrity associated with his former position as the CEO to perpetuate a scheme filled with smoke and mirrors."

Saniger has been charged with one count of securities fraud and one of wire fraud, both of which carry maximum penalties of up to 20 years in prison.

Along with The Information's investigation, the fact that Nate ran out of money a few years ago—requiring it to sell off most of its assets and leaving investors high and dry—was probably what brought the app company into the gaze of the Department of Justice and FBI.

This isn't the first time claimed AI technology has turned out to be nothing more than real humans beavering away behind the scenes, and it certainly won't be the last. But it's a cautionary tale that, as with all things technology-wise, any marketing claims should be viewed with a wary eye unless indisputable evidence of the product working as promised is handed over.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
/software/ai/app-promising-a-universal-shopping-experience-automated-with-ai-actually-used-a-small-army-of-human-workers-in-the-philippines-and-romania-instead/ eCkhkspcnxCYhkfZdvcvxk Mon, 14 Apr 2025 15:37:05 +0000
<![CDATA[ Report estimates AI energy demands will quadruple in the next few years, with some large planned centres estimated to use the equivalent power of 5,000,000 households ]]> One of the biggest concerns people have about AI is energy consumption, and a recent report suggests that things are only going to get worse in that regard over the next few years. However, the same report also argues there is a silver lining.

A study from the International Energy Agency (IEA) titled Energy and AI was recently published (via The Guardian), using data gleaned from global datasets and consultation with "governments and regulators, the tech sector, the energy industry and international experts".

In it, the paper suggests energy demand from data centres broadly will double, while demand from AI data centres specifically will grow by a factor of four.

The energy needed to supply these data centres is projected to grow from 460 TWh in 2024 to more than 1,000 TWh in 2030, and then to reach 1,300 TWh by 2035.

Though traditional data centres are also projected to grow with time, AI-optimised servers account for the biggest share of that growth.

As pointed out by IEA director of sustainability, Laura Cozzi, in a full presentation on the paper, the energy demand for individual data centres is growing over time.

Hyperscale data centres (effectively the biggest ones) consume the equivalent power of 100,000 households today, with the largest under construction right now set to consume the equivalent of 2 million households. The largest currently announced (but not yet under construction) would consume the energy of 5 million households.
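
For a rough sense of scale—my own back-of-the-envelope arithmetic, assuming an average household draws about 1.2 kW, roughly the US figure (global averages are lower):

\[ 100{,}000 \times 1.2\,\mathrm{kW} \approx 120\,\mathrm{MW}, \qquad 5{,}000{,}000 \times 1.2\,\mathrm{kW} \approx 6\,\mathrm{GW} \]

On those assumptions, the largest announced facility would need something like the output of half a dozen full-size power stations to itself.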

However, the paper argues that this increased demand isn't all doom and gloom in regard to climate change. It starts its 'AI and climate change' section by pointing out that over 100 countries have pledged to reach net zero emissions between 2030 and 2070.

A forty-year window is a rather nebulous target when demand is surging now, and 'net zero' can mean quite a lot of different things at that scale.

The fact that reining in energy demand doesn't appear to be an argument made here suggests that net zero will be achieved with offsets, as opposed to regulation of AI.

However, the argument made in Energy and AI is largely that AI models, and their advancement, can be used to rein in inefficiencies in other energy sectors, reducing emissions from methane leaks, the power sector, and heavy industry.

This photograph shows servers inside the data centre of French company OVHcloud in Roubaix, northern France on April 3, 2025. (Photo by Sameer Al-DOUMY / AFP) (Photo by SAMEER AL-DOUMY/AFP via Getty Images)

(Image credit: Getty Images / Sameer Al-Doumy)

The report argues "The adoption of existing AI applications in end-use sectors could lead to 1,400 Mt of CO2 emissions reductions in 2035". For scale, the report estimates global fuel combustion emissions equated to 35,000 Mt in 2024, so that reduction would come to roughly 4% of today's total.

This argued offset also assumes a level of adoption that the report admits isn't currently happening: "It is vital to note that there is currently no momentum that could ensure the widespread adoption of these AI applications. Therefore, their aggregate impact, even in 2035, could be marginal if the necessary enabling conditions are not created."

Talking to The Guardian, Claude Turmes, former secretary for sustainable development and infrastructure for Luxembourg, is critical of the report's findings.


??"Instead of making practical recommendations to governments on how to regulate and thus minimise the huge negative impact of AI and new mega data centres on the energy system, the IEA and its [chief] Fatih Birol are making a welcome gift to the new Trump administration and the tech companies which sponsored this new US government.”

The IEA report has a built-in AI chatbot on-page and when I asked it what the paper says about climate change, it told me:

"The widespread adoption of existing AI applications could lead to emissions reductions equivalent to around 5% of energy-related emissions in 2035. However, this is still far smaller than what is needed to address climate change."

]]>
/software/ai/report-estimates-ai-energy-demands-will-quadruple-in-the-next-few-years-with-some-large-planned-centres-estimated-to-use-the-equivalent-power-of-5-000-000-households/ t2Ldmj6eA4cBbUUhttHwnU Fri, 11 Apr 2025 11:40:45 +0000
<![CDATA[ New two-way brain-computer interface has returned motion AND sensation to a paralysed test patient: 'Right now, we just know the results are significant and leading to functional and meaningful outcomes' ]]> They keep trying to make neural interfaces happen, and I'm sceptical about 90% of the venture; as far as big tech is concerned, there's a 'closed' sign on my grey matter, thanks. Still, the other 10% of brain-computer interface projects present genuine medical advancements worth getting excited about.

A team of researchers at the Feinstein Institutes for Medical Research in Manhasset, New York, has created a brain-computer interface that allows patients not only to move paralysed limbs, but to feel them once more too (via IEEE Spectrum). The team is led by Chad Bouton, vice president of advanced engineering at the institute, and they achieved the aforementioned with a "double neural bypass" system they've been working towards since at least 2015. Unsurprisingly, it turns out cutting long-term funding for medical research is a short-sighted move at best.

Anyway, let's keep this positive. That original 2015 experiment created a single "neural bypass"—essentially an array of electrodes implanted directly into a patient's motor cortex. This brain chip would listen in on neural activity, and an AI model would match the incoming brain waves to the patient's intended body movements. The team's latest "double neural bypass" experiment opens another channel of communication, allowing sense data to be fed back to the patient's brain as they move. However, rather than implanting just two brain chips as the term "double neural bypass" may suggest, the patient at the centre of the Feinstein team's research was kitted out with five, incorporating a total of 224 electrodes into his grey matter.

Keith Thomas is the patient in question, who became paralysed from the chest down after a diving accident. Before participating in the Feinstein team's experiment, Thomas was able to raise his arm off his wheelchair's armrest by about an inch.

To return motion and feeling to his paralysed limbs, the team implanted two electrode arrays into Thomas' motor cortex, alongside three similar chips into his somatosensory cortex (the bit of the brain responsible for touch). An AI model interpreted Thomas' brain signals and then stimulated further electrode arrays implanted in his neck and arm. These served to modulate the patient's spinal cord and forearm muscles respectively, allowing for movement.

Over the course of the experiment, Thomas has been able to rebuild the strength in his arm. He's now able to reach up and touch his face or drink from a cup unassisted. The addition of somatosensory cortex stimulation also means Thomas can feel what he touches, allowing him to better modulate his grip strength and, as the team claims, pick up an empty egg shell without cracking it.

Even more exciting, though, is that Thomas now retains some sensation in his arm even when he's not hooked up to the "double neural bypass" system. The researchers are not yet entirely sure why, but it's possible that neuroplasticity has allowed Thomas' brain to 'relearn' how to interpret sensory information.

Research team leader Chad Bouton explained, "It is known from animal experiments electrical stimulation can promote neuronal growth, but here it is unclear whether it’s more about strengthening spared connections at the spinal cord injury site. Right now, we just know the results are significant and leading to functional and meaningful outcomes.”

Researchers across the field have been successfully returning motion to paralysed limbs via brain-computer interfaces for years now. In fact, one early neural bypass experiment even returned such an impressive range of motion to one young man that he was able to play Guitar Hero again. However, these experiments have largely been a one-way street, returning motion but not sensation—until now.

This also isn't the first time AI has been deployed as a genuinely assistive technology; by my count, the last genuinely helpful thing Meta did as a company was conduct research that could one day allow those with speech difficulties to communicate via an AI neural interface. A few more scientific leaps and I might actually warm up to the idea of a brain chip.


Best gaming mouse: the top rodents for gaming
Best gaming keyboard: your PC's best friend...
Best gaming headset: don't ignore in-game audio

]]>
/hardware/new-two-way-brain-computer-interface-has-returned-motion-and-sensation-to-a-paralysed-test-patient-right-now-we-just-know-the-results-are-significant-and-leading-to-functional-and-meaningful-outcomes/ EUeksrU6vZscrfJ8EJJEmk Wed, 09 Apr 2025 15:27:49 +0000
<![CDATA[ Holy smokes, something that might actually be useful is coming to Copilot on Windows ]]> Forgive me for being surprised by this after the last two years of REM sleep-inducing AI updates and betas, but Microsoft's latest Copilot update seems like it might (*drumroll*) actually be useful. That's because it should finally allow you to search system-wide for contents within files and also help you in any browser or app.

The latest Windows Insider (ie, beta) build is now rolling out these Vision and file search features via a Copilot app update in the Microsoft Store (for "version 1.25034.133.0 and higher"). These two features feel like the kinds of things we were promised from Copilot upon its launch but which we've seen little of until now. Better late than never, though, eh?

File search, Microsoft says, allows you to "find, open and ask questions about the contents of a file on your device from the Copilot on Windows app" and supports most file types. It also looks like it should be good at contextual commands and requests—one example Microsoft uses is, "Look at my budget file and tell me how much I spent on dining last month."

Vision is a feature that finally opens up the playing field for Copilot, giving it access to any app or browser you choose: "To get started click the glasses icon in your composer, select which browser window or app you want to share, and ask Copilot to help with whatever you’re working on."

(Image credit: Microsoft)

Apparently, "Copilot can then help analyze, offer insights, or answer your questions, coaching you through it aloud." We will, of course, have to see just how good it is at such tasks in practice.

Both of these features excite me more than previous Copilot ones. As someone with a penchant for creating the most disorganised digital file systems known to man, I can see genuine use in file search. And I suppose Vision could be useful for learning new apps and tools, provided it actually does what it says on the tin.

They're certainly more exciting Copilot features than the usual text summaries and email drafts *yawn*. And as exciting as 'gaming sidekick' Copilot for Gaming might be, game assistants are hardly a Microsoft exclusive.

Vision and file search, on the other hand, could be genuine benefits to the Copilot bundle that other companies can't easily match, given these will presumably be baked into—and make use of—the Windows operating system itself.

I just hope a full-blown AI 'companion' (per the head of Microsoft's AI division, Mustafa Suleyman) isn't required for it. Let's not ruin a good thing, for once, okay?


Windows 11 review: What we think of the latest OS.
How to install Windows 11: Guide to a secure install.
Windows 11 TPM requirement: Strict OS security.

]]>
/software/windows/holy-smokes-something-that-might-actually-be-useful-is-coming-to-copilot-on-windows/ QWDsX4z6Bvv5DMv68F3B4E Wed, 09 Apr 2025 13:56:24 +0000
<![CDATA[ Microsoft has now fired the employees who publicly protested the company supplying AI tech to the Israeli military ]]> Last Friday, two software engineers vocally protested the use of Microsoft's AI technology by the Israeli military. This follows the international Boycott, Divestment and Sanctions (BDS) movement's calls to boycott Xbox earlier that same week (via Rock Paper Shotgun). It has since been reported that, as of this Monday, those two software engineers have been fired by Microsoft.

According to emails and internal communication seen by CNBC, AI software engineer Ibtihal Aboussad was told her employment would be terminated on the grounds of "just cause, wilful misconduct, disobedience or wilful neglect of duty." Fellow protestor Vaniya Agrawal had intended to resign from Microsoft on April 11, but the company wrote to her on Monday stating it had "decided to make [her] resignation immediately effective" that same day.

In Aboussad's case, Microsoft made direct reference to her protest. Microsoft wrote via internal communication that the company took issue with the public nature of her protest, claiming that Aboussad could have raised concerns "confidentially" with either her manager or Global Employee Relations.

The company went on to write, "Instead, you chose to intentionally disrupt the speech of Microsoft AI CEO Mustafa Suleyman," and that it had "concluded that your misconduct was designed to gain notoriety and cause maximum disruption to this highly anticipated event." Ultimately, the company told Aboussad, "Immediate cessation of your employment is the only appropriate response."

As previously reported by AP, the Israeli military has increasingly relied on Microsoft's AI technology after the deadly surprise attack by Hamas on October 7, 2023, becoming the second largest military customer only behind the US military. In March 2024 the Israeli military's use of AI spiked up to "200 times" the rate it had been prior to the October 7 attack.

REDMOND, WASHINGTON - APRIL 4: A protestor confronts Microsoft AI CEO Mustafa Suleyman during an event highlighting Microsoft Copilot agents, the company's AI tool, on April 4, 2025 in Redmond, Washington. The company also celebrated its 50th Anniversary. (Photo by Stephen Brashear/Getty Images)

(Image credit: Getty Images)

According to AP, the Israeli military "uses AI to sift through vast troves of intelligence, [intercept] communications and [...] to find suspicious speech or behavior and learn the movements of its enemies [through surveillance]," with their investigation also highlighting the mortal risks posed by AI false positives in warfare.

To briefly recap, during Microsoft's 50th-anniversary event last week in Redmond, Washington, Ibtihal Aboussad challenged Microsoft AI CEO Mustafa Suleyman as he took to the stage. She said, "Mustafa, shame on you. You claim that you care for using AI for good, but Microsoft sells AI weapons to the Israeli military. Fifty thousand people have died, and Microsoft powers this genocide in our region." As Aboussad was escorted from the event, she said, "You have blood on your hands. All of Microsoft has blood on its hands."

That same day, a Microsoft spokesperson said in a statement provided to PC Gamer, "We provide many avenues for all voices to be heard. Importantly, we ask that this be done in a way that does not cause a business disruption. If that happens, we ask participants to relocate." However, this is not the first time Microsoft has fired employees who had previously taken a public stance on the conflict; in October 2024, Abdo Mohamed and Hossam Nasr were fired after organising a vigil for Palestinians killed in Gaza.

Following her ejection from the anniversary event, Aboussad sent an email to a number of Microsoft executives including Mustafa Suleyman, Microsoft CEO Satya Nadella, finance chief Amy Hood, operating chief Carolina Dybeck Happe, and Microsoft president Brad Smith.

In this email, Aboussad clarified her stance, writing, "I spoke up today because after learning that my org was powering the genocide of my people in Palestine, I saw no other moral choice. This is especially true when I’ve witnessed how Microsoft has tried to quell and suppress any dissent from my coworkers who tried to raise this issue." Internal communication revealed that Microsoft viewed this email written by Aboussad as "an admission that [she had] deliberately and willfully engaged in [...] misconduct."

Software engineer Vaniya Agrawal also spoke up at a separate meeting with Microsoft executives last Friday, interrupting Satya Nadella. She similarly composed an email further clarifying her stance, writing, "Over the past 1.5 years, I’ve grown more aware of Microsoft’s growing role in the military-industrial complex." Agrawal went on to claim Microsoft is "complicit," that the company is a "digital weapons manufacturer that powers surveillance, apartheid, and genocide," and that "by working for this company, we are all complicit."

Some of this language is echoed by BDS, which also described Microsoft as "perhaps the most complicit tech company in Israel’s illegal occupation, apartheid regime and ongoing genocide against 2.3 million Palestinians in Gaza."

Microsoft Activision Blizzard logos

(Image credit: Anadolu Agency (Getty Images))

The organisation has since called for gamers to "cancel [their] Xbox Game Pass subscription" in order to apply pressure on the company. BDS has also called for the boycott of "all Microsoft Gaming products, including Xbox-branded consoles, headsets, accessories and all games published by Microsoft-owned publishing labels (such as Xbox Game Studios, Activision, Bethesda and Blizzard)."

In her initial email, Aboussad included a link to No Azure for Apartheid, an organisation calling for Microsoft to end "its direct and indirect complicity in Israeli apartheid and genocide." The organisation's aims are supported by a number of Microsoft employees; its petition has garnered over 1,000 signatures, and its landing page features the words "We refuse to be complicit."

]]>
/hardware/microsoft-fires-employees-protesting-israeli-militarys-use-of-companys-ai-tech/ q8xPDCYHXq2qTQYMb7bNoP Wed, 09 Apr 2025 08:55:16 +0000
<![CDATA[ Microsoft's head of AI wants to create an artificial overly-attached companion for us all: 'It will have its own name, its own style. It will adapt to you. It may also have its own visual appearance and expressions' ]]> As part of Microsoft's 50th-anniversary celebrations, it's been talking a lot about its past but also its future and one doesn't need a crystal ball to figure out what that will entail. According to the CEO of Microsoft's AI division, we're all going to be seeing a lot more of Copilot and, ultimately, digital companions powered by AI that will form a "lasting, meaningful relationship" with you.

This was all laid out by Mustafa Suleyman, the head of Microsoft's AI division, who expounded on the idea in an interview with the Associated Press: "My goal is really to create a true personal AI companion. And the definition of AGI [artificial general intelligence] sort of feels very far out to me and sort of not what I’m focused on in the next few years."

As to what he means by a 'companion', he told AP that Microsoft's AI technology will be "[o]ne that knows your name, gets to know you, has a memory of everything that you’ve shared with it and talked about and really comes to kind of live life alongside you." He added, "It’s far more than just a piece of software or a tool. It is unlike anything we’ve really ever created."

AI is already a big part of everything that Microsoft churns out these days, all wrapped up under the moniker of Copilot. From Office to Notepad, full operating systems to Surface laptops, the chatbot-on-steroids is ever-present, though at least you never have to interact with it.

I'm not suggesting for one moment that some aspects of the integration of Copilot aren't useful—for the right user, the features provide a variety of shortcuts to boost productivity or gain better insights into what you're doing. Like all software tools, it can be very useful or very useless. However, it's clear that Suleyman envisages something rather different for the future of Microsoft's AI.

And we're not just talking about the far future, either. Some of this will begin rolling out relatively soon, starting with Microsoft's mobile applications, which will gain some kind of 'visual memory capability' that sounds very much like the much-delayed Recall.

Suleyman is no recent convert to the world of AI. Back in 2010, he co-founded DeepMind, a UK-based AI company that would ultimately be snapped up by Google in 2014. DeepMind is famous for creating AlphaGo, the AI system that beat some of the world's top professional Go players.

After leaving Google in 2022, Suleyman went on to co-found another artificial intelligence company, Inflection AI, before joining Microsoft as executive vice president and CEO of its AI division in 2024. While clearly a big fan of AI and what it can do for us, he has also been a vocal proponent of enforceable AI ethics. However, he's equally well known for the view that pretty much anything on the internet is fair game for AI training, which rather flies in the face of copyright law.

While attending this year's Game Developers Conference, I saw an awful lot of stands, press talks, and lectures on AI in games and gaming. Artificial NPCs, artificial gaming buddies, artificial advice, artificial rendering—AI couldn't be escaped. We might not see Suleyman's vision of an overly-attached AI companion anytime soon, but given how much money Microsoft and the rest of the tech world are spending on all of this right now, you can be sure that it will happen.


Windows 11 review: What we think of the latest OS.
How to install Windows 11: Guide to a secure install.
Windows 11 TPM requirement: Strict OS security.

]]>
/software/ai/microsofts-head-of-ai-wants-to-create-an-artificial-overly-attached-companion-for-us-all-it-will-have-its-own-name-its-own-style-it-will-adapt-to-you-it-may-also-have-its-own-visual-appearance-and-expressions/ B7How3mxSwnSUQZSBtWpNS Tue, 08 Apr 2025 15:13:41 +0000
<![CDATA[ Microsoft's 100% AI-generated Quake 2 made us nauseous but John Carmack, the game's OG coder, is into it: 'What? This is impressive research work!' ]]> Over the weekend, Microsoft released a technology demonstration from its AI Copilot research labs, showcasing a generative AI creating Quake 2 from scratch. Or something resembling it, at least, as the original game never made us feel as nauseous as this one does. Still, who are we to complain when John Carmack, the lead programmer behind id Software's seminal game, was genuinely impressed by it?

To be fair, I don't think he was referring to the demo's graphics or performance, as he specifically said on X, "This is impressive research work!" in response to someone heavily criticizing it.

The research in question was published in the science journal Nature, and to someone like me, who got into 3D graphics programming on PCs because of the likes of Quake 2, it reads like some ancient alien script carved into a mysterious substance. I'm certainly not anywhere near experienced enough in AI programming to judge the relative merits of the work of a large group of professionals.

Arguably, Carmack is qualified, so if he says it's impressive, then I'm certainly in no position to disagree. Mind you, he has a vested interest in AI, having started an AGI (artificial general intelligence) company called Keen Technologies back in 2022.

But while the research surely is top-notch, I can't help but feel the end results very much aren't. Yes, this is a very early tech demo, and over recent years we've all seen generative AI go from churning out utter nonsense to producing startlingly realistic and accurate audio and video.

But it's not the snail-like frame rate or ghostly rendering that I have an issue with. I'm not bothered by the fact that the demo struggles to maintain a comprehensive grasp of a level in Quake, a game that's almost 30 years old. For me, the problem is what it's taken to generate the very short but 'playable' demo. From the research paper itself:

"We extracted two datasets, 7?Maps and Skygarden, from the data provided to us by Ninja Theory. The 7?Maps dataset comprised 60,986 matches, yielding approximately 500,000 individual player trajectories, totalling 27.89?TiB on disk. This amounted to more than 7?years of gameplay.."

The original Quake 2 was created by a handful of people—a few designers, programmers, and artists. They didn't need 28 TB of gameplay data to do it, just their own ingenuity, creativity, and knowledge. Nor did they require hugely expensive GPU servers drawing many kilowatts of power to render their graphics.

I have no problem with research just for the sake of research (as long as it's legal, and morally and ethically sound, of course) and at the end of the day, it's Microsoft that's spent its money on the project, not taxpayers. But if I were a shareholder, I'd be wondering if this is money well spent, especially compared to how much a semi-decent game development team costs to run for the period of time it would take to make a similar Quake 2-on-LSD game.

I have no doubt that at some point in the near future, AI will be able to generate something far more impressive and playable, but it's certainly not going to be cheaper—in terms of computing and electrical power required—than a group of talented individuals sat in front of a few humble PCs. If that ever comes to pass, then I'll be very, very impressed. But also deeply concerned for the future of game development.

I wonder what Carmack would say to that?


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/software/ai/microsofts-100-percent-ai-generated-quake-2-made-us-nauseous-but-john-carmack-the-games-og-coder-loves-it-what-this-is-impressive-research-work/ rzk7EY5ZYhpgdw4e6EsQ8d Mon, 07 Apr 2025 16:41:47 +0000
<![CDATA[ Patent document shows AMD started researching the use of neural networks in ray-traced rendering at least two years ago ]]> GPU companies, like AMD and Nvidia, spend huge sums of money every year on researching rendering techniques, either to improve the performance of their chips or as part of a future architecture design. One recently published patent application shows that AMD began exploring the use of neural networks in ray tracing at least two years ago, just as the Radeon RX 7000-series was about to be announced.

Under the unassuming name of 'United States Patent Application 20250005842', AMD filed its application for neural network-based ray tracing in June 2023, with the document being published by the patent office in January of this year. It was unearthed by an AnandTech forum user (via DSOGaming and Reddit) along with a trove of other patents, covering procedures such as BVH (bounding volume hierarchy) traversal based on work items and BVH lossy geometry compression.

The neural network patent caught my attention the most, though, partly because of when it was submitted and partly because the procedure undoubtedly requires the use of cooperative vectors—a recently announced extension to Direct3D and Vulkan that lets shader units directly access matrix or tensor units to process small neural networks.

What the process actually does is determine whether a ray traced from the camera in a 3D scene is occluded by an object it intersects. It starts off as a BVH traversal, working through the geometry of the scene, checking to see if there's any interaction between the ray and a bounding box. Where there's a positive result, the process then "perform(s) a feature vector lookup using modified polar coordinates." Feature vectors are a machine-learning staple: a numerical list of the properties of the objects or activities being examined.

The shader units then run a small neural network, with the feature vectors as the input, and a yes/no decision on the ray being occluded as the output.

Training the neural network (top) versus using the neural network (bottom) (Image credit: AMD)
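To make that pipeline concrete, here's a minimal, purely illustrative Python sketch: a BVH hit yields modified polar coordinates, those coordinates index a learned feature vector, and a tiny network turns the features into an occluded/not-occluded verdict. Every name, shape, and weight below is an invented stand-in, not AMD's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these were learned offline during training (invented shapes).
FEATURE_TABLE = rng.standard_normal((1024, 16))   # per-cell feature vectors
W1, b1 = rng.standard_normal((16, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)

def feature_lookup(theta: float, phi: float) -> np.ndarray:
    """Map the ray's (modified) polar coordinates to a stored feature vector."""
    cell = int((theta % np.pi) / np.pi * 32) * 32 \
         + int((phi % (2 * np.pi)) / (2 * np.pi) * 32)
    return FEATURE_TABLE[cell]

def ray_occluded(theta: float, phi: float) -> bool:
    """Tiny MLP: feature vector in, yes/no occlusion decision out."""
    h = np.maximum(feature_lookup(theta, phi) @ W1 + b1, 0.0)  # ReLU layer
    logit = (h @ W2 + b2)[0]
    return logit > 0.0  # positive logit: treat the ray as occluded

# For each BVH box the ray hits, consult the network instead of testing
# every triangle inside the box.
print(ray_occluded(theta=1.2, phi=0.7))
```

In the real thing, that small network would run on the GPU's shader and matrix units via cooperative vectors rather than in numpy, but the data flow matches what the patent describes.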

All of this might not sound like very much and, truth be told, it might not be. But the point is that AMD has clearly been investing in 'neural rendering' research since long before Nvidia made a big fuss about it with the launch of its RTX 50-series GPUs. Of course, this is normal in GPU research—it takes years to go from an initial chip design to having the finished product on a shelf, and if AMD were only starting such research now, it'd be ridiculously far behind Nvidia.

And don't be fooled by the submission date, either. June 2023 is simply when the US Patent Office received the application, and there's no way AMD cobbled it all together over a weekend and sent it off the following week. In other words, Team Red has been studying this for many years.

Back in 2019, an AMD patent for a hybrid ray tracing procedure surfaced, which was submitted in 2017. While it's not easy to determine whether it was ever utilized as described, the RDNA 2 architecture used a very similar setup and it launched in late 2020.

Your next machine

Gaming PC group shot

(Image credit: Future)

Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

Patent documents never concern themselves with actual performance, so there's no suggestion that you'll be seeing a Radeon RX 9070 XT running little neural networks to improve ray tracing any time soon (mostly because it's down to game developers to implement this, even if it is faster). But all of this shows that AI is very much going to be part-and-parcel of 3D rendering from now on, even if this and other AI-based rendering patents never get implemented in hardware or games.

Making chips with billions more transistors and hundreds more shader units is getting disproportionately more expensive compared to the actual gains in real-time performance. AI offers a potential way around the problem, which is why the RDNA 4 architecture sports dedicated matrix units to handle such things.

At the end of the day, it's all just a bunch of numbers being turned into pretty colours on a screen, so if AI can make games run faster, look better or, better still, do both, then it's not hard to see why the likes of AMD are spending so much time, effort, and money on researching AI in rendering.

]]>
/hardware/graphics-cards/patent-document-shows-amd-started-researching-the-use-of-neural-networks-in-ray-traced-rendering-at-least-two-years-ago/ Cnj28tX843MF8icS3DQNdX Mon, 07 Apr 2025 15:27:24 +0000
<![CDATA[ 'They chewed me up pretty good': A US plaintiff attempted to use an AI avatar to argue their court case and the judges were far from amused ]]> AI has many uses—or at least that's what we keep being told—but one area where it might do some good is helping with the notoriously complex legal process. After all, should you find yourself faced with the task of self-representation in an upcoming court case, perhaps an AI may be able to present your arguments more successfully than you. Jerome Dewald tried just that in the New York State Supreme Court last month, but the judges were far from impressed by his creative use of technology.

Dewald was representing himself as a plaintiff in an employment dispute, APNews reports, but felt that an AI avatar would deliver his opening presentation better than he could due to a tendency to mumble and trip over his words. He first attempted to generate a digital replica of himself with "a product created by a San Francisco tech company", but when he ran out of time he used a generic avatar instead.

His AI representative was displayed on a video screen in the court as a "smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater." Unfortunately for Dewald, his AI stand-in was rumbled almost as soon as the pre-generated video began.

Presenting Dewald's argument with the line "may it please the court, I come here today a humble pro se before a panel of five distinguished judges," the AI counsel was interrupted by Judge Manzanet-Daniels almost immediately:

"Okay, hold on, is that counsel for the case?" she interjected, before demanding the video be shut off. "I generated that. It's not a real person" responded Dewald, attracting Manzanet-Daniels apparent ire.

"It would have been nice to know that when you made your application. You did not tell me that, sir... I don't appreciate being misled."

"They chewed me up pretty good," said Dewald, in a later interview with AP News. "The court was really upset about it."

Ouch. Dewald later wrote an apology to the court, explaining his tendency to stumble over his words and insisting he hadn't intended any harm. He had asked for permission to play a prerecorded video, but that apparently did not extend to using an AI avatar to present his arguments.

Channel 1 AI newsreader

An AI avatar of a newsreader. (Image credit: Channel 1)

It's not the first time we've heard about the intersection between AI and legal services. Legal advice startup DoNotPay offers an AI legal assistant to help defendants with court proceedings, although the company found itself under the scrutiny of the FTC last year for a perceived lack of testing to back up its effectiveness claims.

Your next machine

Gaming PC group shot

(Image credit: Future)

Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

Nor is it the first time AI avatars have been used as stand-ins for real people. News startup Channel 1 showed off a proof-of-concept video of AI news hosts last year, and the results were quite convincing.

However, this might be the first use of an AI avatar making arguments in a US court. Not that the AI representative got much past its preamble, but still, firsts are firsts. And given the often ridiculous expense associated with court proceedings, it does strike me as a potentially useful workaround for those without access to a good lawyer—or the funds to pay one, at the very least.

Still, it didn't pass muster in this particular instance. While I'd be very surprised if summarisation tools like ChatGPT weren't being used all over the legal system at this point to scythe through complicated legal documents, it appears it may be a while yet before AI representation sets a legal precedent of its own.

]]>
/software/ai/they-chewed-me-up-pretty-good-a-us-plaintiff-attempted-to-use-an-ai-avatar-to-argue-their-court-case-and-the-judges-were-far-from-amused/ fdjdHWK47Q89MbS8VkRwS6 Mon, 07 Apr 2025 13:58:09 +0000
<![CDATA[ Microsoft unveils AI-generated demo 'inspired' by Quake 2 that runs worse than Doom on a calculator, made me nauseous, and demanded untold dollars, energy, and research to make ]]> Before the disaster that was Stadia, Google demoed its game streaming tech via a free version of Assassin's Creed: Odyssey you could play in your browser. My fiancée has fond memories of whiling away slow nights at work playing this massive triple-A game on a crummy library OptiPlex.

Whatever came after with Stadia and game streaming in general, that demo felt like black magic. If a genuinely impressive tech demo can lead to a notorious industry flop, what about a distinctly unimpressive one?

What are the ethics of expending massive amounts of capital, energy, and man hours on, not even a worse version of a game from 30 years ago, but a vague impression of it? These are the questions I pondered after getting motion sickness, for only the second time in my life, from playing Microsoft's Copilot AI research demo of Quake 2.

"This bite-sized demo pulls you into an interactive space inspired by Quake II, where AI crafts immersive visuals and responsive action on the fly," reads Microsoft's Q&A page about the demo. "It’s a groundbreaking glimpse at a brand new way of interacting with games, turning cutting-edge research into a quick and compelling playable demo."

This demo is powered by a "World and Human Action Model" (WHAM), a generative AI model "that can dynamically create gameplay visuals and simulate player behavior in real time." Perusing Microsoft's Nature article on the tech, it appears to operate on similar principles to large language models and image generators, using recorded gameplay and inputs for training instead of static text and imagery.
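As a rough illustration of that training setup, here's a toy sketch: a model learns to map (recent frame, controller input) pairs to the next frame, which is all a 'world model' is at heart. The architecture, dimensions, and tensors below are invented for clarity; the real WHAM is vastly larger and works on tokenised image frames.

```python
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Predict the next (flattened) frame from the current frame + action."""
    def __init__(self, frame_dim=64, action_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, frame_dim),  # predicted next frame
        )

    def forward(self, frame, action):
        return self.net(torch.cat([frame, action], dim=-1))

model = TinyWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake "recorded gameplay": batches of (frame, controller input, next frame).
frames = torch.randn(32, 64)
actions = torch.randn(32, 8)
next_frames = torch.randn(32, 64)

# One training step: learn to imitate what the game engine did next.
pred = model(frames, actions)
loss = nn.functional.mse_loss(pred, next_frames)
opt.zero_grad()
loss.backward()
opt.step()

# At "play" time the model's own output is fed back in, frame after frame,
# which is why errors compound and rooms mutate when you look away.
```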

This demo is not running in the original game's id Tech 2 engine. However Microsoft is producing it, the result is effectively a bespoke engine whose output resembles Quake 2 because the AI model behind it was trained on Quake 2.

I'm reminded of those demakes of Doom for Texas Instruments calculators, but instead of marshalling limited resources to create an inferior impression of a pre-existing game, the Copilot Gaming Experience is the result of Microsoft's (and the entire tech industry's) herculean push for generative AI.

I don't know what the discrete Copilot Gaming project costs, but Microsoft has invested billions of dollars into compute, research, and lobbying for this technology. On Bluesky, developer Sos Sosowski pointed out that Microsoft's Nature paper lists 22 authors, as opposed to the 13 developers behind Quake 2.

Based on the paper, Sosowski also estimated that Microsoft's new model required more than three megawatts of power to begin producing consistent results. That's assuming use of an RTX 5090, which Microsoft likely did not have access to given the timing of the paper's publication, but it's still helpful to get an idea of the scope of this project's power draw. According to battery manufacturer Pkenergy, a single megawatt requires 3,000-4,000 solar panels to produce.

Despite all of that investment, the demo is not good. The Copilot Gaming Experience runs like a slideshow in a tiny window at the center of the browser, and its jerkiness and muddled, goopy visuals—familiar to anyone who's seen an AI-generated video—gave me a rough case of motion sickness after bare minutes of play. The only other game ever to have set my belly a rumblin', EvilVEvil, did so closer to the hour mark.

Best of the best

The Dark Urge, from Baldur's Gate 3, looks towards his accursed claws with self-disdain.

(Image credit: Larian Studios)

2025 games: Upcoming releases
Best PC games: All-time favorites
Free PC games: Freebie fest
Best FPS games: Finest gunplay
Best RPGs: Grand adventures
Best co-op games: Better together

And while chatbots will tell you to eat rocks and drink piss, the Copilot Gaming Experience has its own fun "hallucinations"—the surreal, unnervingly confident errors produced by generative AI models that massive amounts of money, compute power, and uninhibited access to copyrighted material can't seem to address.

Looking at the floor or ceiling at any time in the Copilot Gaming Experience has about an 80% chance of completely transforming the room in front of you, almost like you teleported somewhere else in the level.

Of course, there is no "level," goal, or victory condition: The Copilot Gaming Experience is just constantly generating a new Quake 2-like bit of environment in front of you whenever you turn the corner, with what came before seemingly disappearing as you go.

One such warp moment sent me to the Shadow Realm, a pitch black void out of nowhere which took some finagling to get out of. There are "enemies," but when I killed one it just deformed into some kind of blob. Then I walked past it, turned around, and the hallway had completely changed, taking the blob with it.

As with so much of generative AI, or the blockchain boom before it, I can imagine the "Well, it's just a WIP, first step type of thing" defense of what I was subjected to, but I'm just not convinced. Whatever specific compelling use cases may exist for generative AI tools, that's not what we've been aggressively sold and marketed for the past two years and counting: this insistence on cramming it into everything. Google Gemini is now constantly asking if I want its help writing, like some kind of horrible, latter-day Clippy.

Forced mass-adoption of this stuff by consumers is here, now, demanding our approval, attention, and precious time. A public tech demo exists to impress, and the Copilot Gaming Experience does not. Doom on a calculator, but we had to boil a lake or two to get it and are being told it's the future of games. I reject this future. Not only do I find it philosophically and ethically repugnant, it also made my tummy hurt.

]]>
/software/ai/microsoft-unveils-ai-generated-demo-inspired-by-quake-2-that-runs-worse-than-doom-on-a-calculator-made-me-nauseous-and-demanded-untold-dollars-energy-and-research-to-make/ ckRBGntN2vmMUDZPy7qyMU Mon, 07 Apr 2025 01:17:30 +0000
<![CDATA[ Microsoft employee escorted out of 50th anniversary event after protesting sales to Israel: 'You have blood on your hands. All of Microsoft has blood on its hands' ]]> A Microsoft employee interrupted an address being given by AI CEO Mustafa Suleyman as part of the company's 50th anniversary event, demanding the company "stop using AI for genocide."

The disruption was first reported by The Verge, which also shared video of the incident. It can also be heard in The Verge's full coverage of Microsoft's Copilot presentation, although Ibtihal Aboussad, reportedly the employee who interrupted Suleyman, is out of view.

"You are a war profiteer," Aboussad says as she's escorted out of the room. "Shame on you. You are a war profiteer. Stop using AI for genocide, Mustafa. Stop using AI for genocide in our region. You have blood on your hands. All of Microsoft has blood on its hands."

A February 2025 report by AP said the Israeli military's use of Microsoft and OpenAI technology "skyrocketed" following the Hamas attacks of October 2023, to nearly 200 times higher than what it was the week before the attack. It also notes that Israel's Ministry of Defense is Microsoft's second-largest military customer, behind only the US military.

The Verge shared a copy of an email Aboussad sent to Microsoft employees via numerous internal mailing lists saying that it was that relationship that prompted her to take action.

"My name is Ibtihal, and for the past 3.5 years, I’ve been a software engineer on Microsoft's AI Platform org," Aboussad wrote. "I spoke up today because after learning that my org was powering the genocide of my people in Palestine, I saw no other moral choice. This is especially true when I've witnessed how Microsoft has tried to quell and suppress any dissent from my coworkers who tried to raise this issue.

"For the past year and a half, our Arab, Palestinian, and Muslim community at Microsoft has been silenced, intimidated, harassed, and doxxed, with impunity from Microsoft. Attempts at speaking up at best fell on deaf ears, and at worst, led to the firing of two employees for simply holding a vigil. There was simply no other way to make our voices heard."

Later in her email, Aboussad said she was initially excited to move to Microsoft's AI platform for the potential good it offered in areas like "accessibility products, translation services, and tools to 'empower every human and organization to achieve more'."

"I was not informed that Microsoft would sell my work to the Israeli military and government, with the purpose of spying on and murdering journalists, doctors, aid workers, and entire civilian families," Aboussad wrote. "If I knew my work on transcription scenarios would help spy on and transcribe phone calls to better target Palestinians, I would not have joined this organization and contributed to genocide. I did not sign up to write code that violates human rights."

Microsoft's military entanglements have been met with pushback in the past: In 2019, for instance, a group of Microsoft employees protested the company's $479 million contract to develop HoloLens technology for the US Army; shareholders expressed similar concerns in 2022. But concerns about Israel's ongoing attacks in Gaza are not hypothetical: More than 50,000 Palestinians are estimated to have been killed since October 2023, although that's merely an estimate—researchers say the actual number could be much higher.

Aboussad's email urged employees to speak out by signing a "No Azure for Apartheid" petition, urging company leadership to end contracts with the Israeli military, and ensuring others at the company are aware of how their work could be used.

"Our company has precedents in supporting human rights, including divestment from apartheid South Africa and dropping contracts with AnyVision (Israeli facial recognition startup), after Microsoft employee and community protests," Aboussad wrote. "My hope is that our collective voices will motivate our AI leaders to do the same, and correct Microsoft’s actions regarding these human rights violations, to avoid a stained legacy. Microsoft Cloud and AI should stop being the bombs and bullets of the 21st century."

Not long after Aboussad's protest, a second employee staged a similar disruption during a separate talk being held by current and former Microsoft CEOs Satya Nadella, Steve Ballmer, and Bill Gates.

"Shame on you all. You’re all hypocrites," Vaniya Agrawal said. "50,000 Palestinians in Gaza have been murdered with Microsoft technology. How dare you. Shame on all of you for celebrating their blood. Cut ties with Israel."

Some in the audience booed, while Nadella, Ballmer, and Gates sat in awkward silence while Agrawal was escorted out of the room. Agrawal also sent an email to company executives, viewed by CNBC, in which she said she's "grown more aware of Microsoft's growing role in the military-industrial complex," and that Microsoft is "complicit" as a "digital weapons manufacturer that powers surveillance, apartheid, and genocide."

"Even if we don't work directly in AI or Azure, our labor is tacit support, and our corporate climb only fuels the system," Agrawal wrote. Like Abbousad, she also called on employees to sign the No Apartheid for Azure petition.

In a statement provided to PC Gamer, a Microsoft spokesperson said, "We provide many avenues for all voices to be heard. Importantly, we ask that this be done in a way that does not cause a business disruption. If that happens, we ask participants to relocate. We are committed to ensuring our business practices uphold the highest standards."

Even so, it's possible that this protest will cost Aboussad and Agrawal their jobs: In 2024, Microsoft fired two employees who organized a vigil at the company's headquarters for Palestinians killed in Gaza.

]]>
/software/ai/microsoft-employee-escorted-out-of-50th-anniversary-event-after-protesting-sales-to-israel-you-have-blood-on-your-hands-all-of-microsoft-has-blood-on-its-hands/ ov3RbTD5z7tfFjaAkt6k27 Fri, 04 Apr 2025 20:32:02 +0000
<![CDATA[ You won't have to leave the Amazon app even when buying from other retailers thanks to the company's new 'Buy for Me' agentic AI bot ]]> If you're an Amazon die-hard and want to see all your online shopping in one place, you will soon be able to buy even non-Amazon goods straight from the Amazon app.

What a world we live in.

Announced in a new press release, Amazon's "Buy for Me" function will let you search for specific items by specific brands, pick the item you want, and order it all from the Amazon app, even if Amazon itself doesn't stock it.

If you have the Android or iOS version of the Amazon app and live in the US, there's a good chance you have the 'Buy for Me' function right now, as it has begun rolling out for US customers. Testing initially covers a "limited number of brand stores and products," with more brands and users due to be added in the future based on early feedback.

This new feature uses "agentic AI" which is a buzzword for a more advanced generative AI with, as Nvidia claims, "sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems."

Amazon's new system is intended to be integrated into the broader shopping experience and, from the shots shown off so far, looks indistinguishable from the usual shopping UI. That being said, you cannot currently apply promo codes to third-party items, so you're still a little better off shopping around first. It can also only buy one item at a time, so no buying in bulk.

Screenshots from Amazon's new 'Buy for Me' AI bot

(Image credit: Amazon)

Effectively, this new system is designed to make you feel like you are shopping on Amazon even when you're not: the app can receive confirmation of purchase, give you up-to-date delivery tracking, and even work as limited customer service. If you want to organize a return or refund, though, you do have to go through the shopfront you bought from.

You pay through Amazon, and if the price of an item from a third-party source changes between it going in your basket and checkout, Amazon will still authorize the payment as long as it's within $10 of the estimated amount.
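As described, that rule reduces to a simple tolerance check. A hypothetical sketch (the $10 figure is Amazon's; the function name and the treatment of price drops are my assumptions):

```python
PRICE_TOLERANCE_USD = 10.00  # per Amazon's announcement

def authorize_payment(estimated_price: float, checkout_price: float) -> bool:
    """Authorize automatically only if the final price stayed within $10 of
    the estimate shown when the item went into the basket. (Assumption: the
    window applies in both directions; Amazon hasn't spelled that out.)"""
    return abs(checkout_price - estimated_price) <= PRICE_TOLERANCE_USD

print(authorize_payment(49.99, 54.99))  # True: within the $10 window
print(authorize_payment(49.99, 65.00))  # False: the shopper must re-confirm
```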

With this, Amazon could position itself as the middleman between shops and the customer. I'm rather torn on what I think about it. On one hand, the idea of seamlessly doing all of my online shopping through the same app does sound handy, even just for laying out orders and delivery windows together.

On the other hand, the idea of Amazon further cementing its role as the Google of shopping makes me wary. We don't yet know if Amazon gets anything out of making sales for other shops, though being the go-between means all Amazon has to do to stop customers shopping elsewhere is start actually stocking the items it currently doesn't. That's potentially valuable direct market research, and it would only further Amazon's tight hold over the big shopping events every year.

It is certainly a clever use of AI though.

Best chair for gaming: the top gaming chairs around
Best gaming desk: the ultimate PC podiums
Best PC controller: sit back, relax, and get your game on

]]>
/software/ai/you-wont-have-to-leave-the-amazon-app-even-when-buying-from-other-retailers-thanks-to-the-companys-new-buy-for-me-agentic-ai-bot/ LajQkYUDofFGg2HGFLYBxC Fri, 04 Apr 2025 15:29:36 +0000
<![CDATA[ OpenAI finalises deal for $40 billion in investments, raising company value up to $300 billion, but there's a catch to receive it all ]]> ChatGPT creator OpenAI has been made an offer it can't refuse, otherwise known as a highly valued round of investments, but to get all of it, OpenAI needs to shift away from its original non-profit approach.

Shared on the OpenAI website, the latest round of funding plans to invest $40 billion into the company, at a valuation of $300 billion. The blog post states this "enables us to push the frontiers of AI research even further" and finishes off by bragging that ChatGPT is used by 500 million people every week.

SoftBank Group, the Japanese investment group known for its majority share in chipmaker and software designer Arm Holdings, leads the charge on funding. As reported by Bloomberg, SoftBank is initially investing $7.5 billion into OpenAI with $2.5 billion invested alongside it from an investment group that includes Microsoft. By the end of 2025, another $30 billion is expected to be invested into the company, with $22.5 billion of that coming from SoftBank. The final $7.5 billion comes from the investment group.

As noted by CNBC, the investment from SoftBank will fall to $20 billion—$10 billion below the expected $30 billion—should OpenAI still operate as a non-profit by the end of 2025. The question of whether or not OpenAI will remain non-profit has been hotly debated over the last few months. X owner, and alleged Bronze Torbjörn main, Elon Musk vowed in February to bow out of his $97.4 billion bid to buy OpenAI if it agreed to stay non-profit. OpenAI CEO Sam Altman offered to buy X for $9.74 billion in retaliation.

Last year, OpenAI reportedly drew up plans to restructure into a for-profit organisation. This would leave a non-profit arm of OpenAI with a minor monetary stake in the for-profit side, thus losing control of it. Nor was this the first move in that direction: the company had already set up a for-profit subsidiary, into which money could be invested to further scale up the business.

SAN FRANCISCO, CALIFORNIA - NOVEMBER 06: OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 06, 2023 in San Francisco, California. Altman delivered the keynote address at the first-ever Open AI DevDay conference.(Photo by Justin Sullivan/Getty Images)

OpenAI CEO Sam Altman (Image credit: Justin Sullivan via Getty Images)

This rather blatantly rows back on the original OpenAI organization mission statement. Back in 2015, it said:

"OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."

The switch to for-profit raises the company's value, which could in turn attract even more investment. Those investments would then need to generate returns for investors, which would change the financial incentives for OpenAI as a whole. For-profit companies need to serve shareholders above all else, whereas non-profit organizations don't have the same incentives.

The plus side is that more money in OpenAI's coffers could result in new ventures and more research funding. Whether or not the end user will benefit from this structural change is anyone's guess, but there's some rightful scepticism about it, especially when you consider generative AI's blatant skirting of copyright law and its effects on artists.

The OpenAI blog argued in favour of this restructuring last year, saying "We once again need to raise more capital than we’d imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness."


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
/software/ai/openai-finalises-deal-for-usd40-billion-in-investments-raising-company-value-up-to-usd300-billion-but-theres-a-catch-to-receive-it-all/ v5fAoL5soGTYkU36WyTccQ Wed, 02 Apr 2025 10:10:02 +0000
<![CDATA[ AMD goes all-in on being a data centre designer by purchasing ZT Systems for $4.9 billion ]]> It won't have escaped your notice that in recent years, Nvidia has been transitioning away from being 'just' a chip designer into a fully-fledged systems integration company. If you want a massive data centre or a host of AI servers, Nvidia will design and build the whole thing for you. And now it seems AMD is getting in on the action having finalised a near-five billion dollar deal for ZT Systems, a company that designs and makes—yes, you've guessed it—data, cloud, and AI centres.

AMD announced its closing of the purchase deal today (via GlobalWire), having first made its intentions publicly known in August of last year. The acquisition, worth $4.9 billion in cash and stocks, will certainly make a bit of a dent in AMD's financials, but there's no doubt that it'll be worth every cent if the AI market continues to grow as expected.

In FY 2024, AMD pulled in revenues of $12.6 billion with its data centre division, accounting for almost 50% of its entire revenue that year. It's also AMD's second most profitable sector, with an operating margin of roughly 27%—more than double that of its Client and Gaming sectors.

Nvidia's data centre division is a veritable money-making machine, pulling in nearly $50 billion in revenue last year, so it's not hard to see why AMD set about snapping up ZT Systems. The last mega purchase it made was Xilinx for $35 billion, to bolster its embedded sector, so this one is chump change in comparison.

Whether this acquisition pays off, though, is another question altogether, as Nvidia currently dominates the AI server market. Up to now, AMD has only had the hardware on offer, in the form of its gargantuan and very popular Instinct MI300 chips, but now it will be able to provide entire systems and installations.

Anyone who has worked in system building, be it gaming desktop PCs or massive servers, will know that this is where the real profit margins lie (it's why Nvidia's profit margins are so big).

More importantly, customers like Apple, Facebook, and the usual AI crowd will be more willing to splash the cash on a complete service rather than having to fiddle about with the hardware themselves.

That said, while the purchase of Xilinx seemed like a great idea at the time and AMD's embedded sector currently enjoys an enormous operating profit margin of 40%, it'll likely be many years before that particular acquisition actually pays for itself. I should imagine that AMD will be hoping that buying ZT Systems will bring in more money and quicker, to help offset the decline in Gaming revenue.

Now, I reckon I can scrape together $14.52—maybe even a whole 20 bucks—to acquire something, so AMD had better watch out!


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
/hardware/amd-goes-all-in-on-being-a-data-centre-designer-by-purchasing-zt-systems-for-usd4-9-billion/ TFDKeBLsWLQ4mKEYMNWRNd Mon, 31 Mar 2025 16:31:13 +0000
<![CDATA[ Anthropic has developed an AI 'brain scanner' to understand how LLMs work and it turns out the reason why chatbots are terrible at simple math and hallucinate is weirder than you thought ]]>

It's a peculiar truth that we don't understand how large language models (LLMs) actually work. We designed them. We built them. We trained them. But their inner workings are largely mysterious. Well, they were. That's less true now thanks to some new research by Anthropic that was inspired by brain-scanning techniques and helps to explain why chatbots hallucinate and are terrible with numbers.

The problem is that while we understand how to design and build a model, we don't know how all the zillions of weights and parameters, the relationships between data inside the model that result from the training process, actually give rise to what appears to be cogent outputs.

“Open up a large language model and all you will see is billions of numbers—the parameters,” says Joshua Batson, a research scientist at Anthropic (via MIT Technology Review), of what you will find if you peer inside the black box that is a fully trained AI model. “It’s not illuminating,” he notes.

To understand what's actually happening, Anthropic's researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.

Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it's a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.

Claude doing math

This is why LLMs are so patchy at math. (Image credit: Anthropic)

Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
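To restate that as plain arithmetic (this is emphatically not Anthropic's code or anything extracted from the model, just the article's two pathways written out in Python), a coarse magnitude estimate and an exact last digit get combined at the end:

```python
def snap_to_last_digit(estimate: int, last_digit: int) -> int:
    """Pick the value nearest the coarse estimate whose last digit matches."""
    candidates = [estimate + d for d in range(-9, 10)]
    matching = [c for c in candidates if c % 10 == last_digit]
    return min(matching, key=lambda c: abs(c - estimate))

# Pathway 1: the fuzzy magnitude estimate ("40ish plus 60ish is 92ish").
coarse = 92
# Pathway 2: the precise last-digit step (6 + 9 means the answer ends in 5).
units = (36 % 10 + 59 % 10) % 10

print(snap_to_last_digit(coarse, units))  # 95, the correct answer
```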

But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

In other words, not only does the model use a very, very odd method to do the maths, you can't trust its explanations of what it has just done. That's significant, and it shows that model outputs cannot be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.

Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."

Claude doing poetry

Anthropic discovered that their Claude LLM didn't just predict the next word. (Image credit: Anthropic)

Anthropic also found, among other things, that Claude "sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal 'language of thought'."

Anywho, there's apparently a long way to go with this research. According to Anthropic, "it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words." And the research doesn't explain how the structures inside LLMs are formed in the first place.

But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don't understand—actually work. And that has to be a good thing.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/software/ai/anthropic-has-developed-an-ai-brain-scanner-to-understand-how-llms-work-and-it-turns-out-the-reason-why-chatbots-are-terrible-at-simple-math-and-hallucinate-is-weirder-than-you-thought/ wePsXZzik4HkGR6miTxPcf Fri, 28 Mar 2025 16:30:32 +0000
<![CDATA[ Studio Ghibli AI image trend floods social media, cheered on by OpenAI and denounced by critics as an insult to Hayao Miyazaki ]]> If you're on social media you've probably stumbled over some quote, clip, or screen grab from one of Studio Ghibli's movies, but with ChatGPT's newest update introducing its most refined image generation yet, you're now just as likely to find an AI facsimile with a startling resemblance to the real thing. Possibly based on someone's vacation photos.

The trend is hitting social media like a hurricane and has attracted the attention of OpenAI CEO Sam Altman, who changed his X profile image to a Ghibli-inspired self-portrait. But it's not just him: America's very own verified White House X account posted an image in the same style, showing a US soldier putting handcuffs on a woman in tears, referencing a real event in Philadelphia last week.

If you had any concerns about generative AI's implications in regard to ethics, artist rights, or copyright, that conversation is hitting a fever pitch all over the internet. That this latest generative AI fad mimics the work of someone as beloved as Hayao Miyazaki has made it particularly obscene to critics.

As filmmaker Robbie Shilstone said in a thread on X: "Miyazaki spent his entire life building one of the most expansive and imaginative bodies of work, all so you could rip it off and use it as a filter for your vacation photos … I can't think of a worse artist to do it to as well. He is notorious for his attention to detail, his painstaking revisions, his uncompromising dedication to his craft."

User slimjosa concurred, saying in a quote repost of an AI-generated Ghibli image: "The whole Studio Ghibli AI trend honestly gives me second-hand embarrassment knowing how hard Hayao Miyazaki has fought to retain the identity of his films and how many of you are this willing to make a farce out of decades of artistry because you don't actually value it". That post has racked up nearly 50,000 likes.

Also worth noting is generative AI's carbon footprint, as it relies on energy-guzzling data centers to function. While OpenAI doesn't disclose specific data regarding its emissions, a report from Goldman Sachs last year noted "a ChatGPT query needs nearly 10 times as much electricity to process as a Google search."

It's hard not to think of a notorious Miyazaki clip where he calls a procedural animation technique "an insult to life itself," adding that "anyone who creates this stuff has no idea what pain is".

While he wasn't talking about generative AI as we understand it now, the crew demonstrating their technology to him said their goal was to "build a machine that can draw pictures like humans do." It hardly feels like a stretch to make the connection between that attitude and this technology.

]]>
/software/ai/studio-ghibli-ai-image-trend-floods-social-media-cheered-on-by-openai-and-denounced-by-artists-i-cant-think-of-a-worse-artist-to-do-it-to/ c7TCwLJSvv6TGWnHW7HQaG Thu, 27 Mar 2025 22:54:57 +0000
<![CDATA[ OpenAI's GPT-4o model gets image generation update for all of your anime-style selfie needs ]]> For my sins, I do occasionally scroll through TikTok. In between the usual suspects of ear worm music loops and memetic dances I generally lack the co-ordination to recreate but not the determination, there is a spattering of AI-generated content. On my 'For You' page, this usually takes the form of image filter videos that twist users' selfies into vaguely resembling a frame from a favourite anime. Well, ChatGPT's recent image generation update can now spit out pictures that look like it's traced Studio Ghibli's homework.

The update specifically brings image generation to OpenAI's GPT-4o model, refining a number of things AI image generators have historically struggled with, such as photorealism and rendering legible text, or even a full glass of wine, according to PCWorld. The update also allows users to tweak image results through the chat interface, and the AI is apparently now better able to generate consistent variations on a theme. For example, one of OpenAI's demo videos shows GPT-4o generating a penguin mage in various styles, including a low-poly look, a reflective metallic get-up, and looking like a wargaming miniature.

Premium users are already able to get their hands on GPT-4o's style-consistent capabilities, generating images in the style of Minecraft, Roblox, Studio Ghibli, and more. Free users on the other hand will have to wait; after the update's release earlier this week, Altman took to X to explain the delayed roll-out to all tiers of users, writing, "Images in ChatGPT are wayyyy more popular than we expected (and we had pretty high expectations)" (via TechCrunch).

Judging by my flooded social media feeds, AI-generated images in the style of Studio Ghibli's animated films is the clear viral favourite. This is despite the copyright filters apparently rolled out as part of this update. TechSpot report that ChatGPT would not generate a Ghibli-fied rendition of The Beatles Abbey Road album cover, instead displaying the following note in response to their prompt: "I was unable to generate the image you requested due to our content policy, which restricts the generation of images based on specific copyrighted content, such as The Beatles' album cover."

Many users have evidently found workarounds, and even OpenAI CEO Sam Altman recently changed his X profile picture to an overly familiar looking anime avatar. As to why GPT-4o can so consistently generate images in a number of recognisable styles, OpenAI told the Wall Street Journal that the model was trained on "publicly available data" alongside using data it already has access to as a result of the company's partnership with various companies like Shutterstock (via TechCrunch).

SAN FRANCISCO, CALIFORNIA - NOVEMBER 06: OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 06, 2023 in San Francisco, California. Altman delivered the keynote address at the first-ever Open AI DevDay conference.(Photo by Justin Sullivan/Getty Images)

(Image credit: Justin Sullivan via Getty Images)

OpenAI’s chief operating officer, Brad Lightcap, told the Wall Street Journal, "We’re [respectful] of the artists’ rights in terms of how we do the output, and we have policies in place that prevent us from generating images that directly mimic any living artists' work."

Legendary animation director Hayao Miyazaki is still very much alive, and famously took a dim view of early applications of AI. Way back in 2016, Miyazaki was shown a demonstration of a rudimentary 3D zombie model animated using AI by developers that also say they hope to one day create "a machine that can draw pictures like humans do." Many retellings of this moment focus on Miyazaki saying, "I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself." However, the context often missing is that these words are preceded by Miyazaki talking about a friend with limited mobility, making allusions to the horror genre's often insensitive depictions of disability.

I don't want to put words in Hayao Miyazaki's mouth, so I'll speak for myself here. While GPT-4o's image generation capabilities are an impressive novelty, it also turns my stomach. I'm personally friends with a number of professional artists and I fear this update is simply going to embolden the very worst of their clients. I hope I'm wrong, but cheapskate companies may feel like they can continue to devalue creative skillsets, and I worry that we're all going to be caught in the resulting AI-slop landslide.

Best SSD for gaming: The best speedy storage today.
Best NVMe SSD: Compact M.2 drives.
Best external hard drive: Huge capacities for less.
Best external SSD: Plug-in storage upgrades.

]]>
/hardware/openais-gpt-4o-model-gets-image-generation-update-for-all-of-your-anime-style-selfie-needs/ UyrtTqMgXtGcaRNJyScv3e Thu, 27 Mar 2025 17:47:17 +0000
<![CDATA[ Humanoid robot Neo Gamma gifts Nvidia CEO a studded leather jacket and may even be able to one day wash up a cup without dropping it ]]> In a video announcing a collaboration between 1X's AI team and the Nvidia Gear Lab, the much-touted, still-in-development Neo Gamma humanoid robot gave Nvidia CEO Jensen Huang a studded leather jacket. Depicting the hardware company's logo on the back in metal studs, the jacket was made by California-based clothing brand ERL—but this collaboration wants to venture beyond the wardrobe and into the home.

1X Technologies is in the business of humanoid robots for industrial settings and the home, such as the Neo Gamma. Based partly in California, the company has been around since at least 2014; originally founded in Norway as Halodi Robotics, it underwent a rebrand in 2023 and has more recently embarked on an endeavour to make friends with heavy hitters in big tech.

Vice President of AI at 1X Technologies Eric Jang said of the collaboration, "[We're both] super determined to bring general purpose humanoid robots into the world. We think that by swapping notes on how to solve autonomy problems on Neo [Gamma] we can dramatically accelerate the timeline for bringing Neo into people's homes."

Questionable fashion choices aside, a recent blog post goes into more detail, sharing, "As a first step, the teams worked together to prepare an autonomy demo for Jensen Huang’s GTC 2025 Keynote, featuring Neo doing a dish loading task autonomously." The collaboration saw team members take the Neo Gamma into a home environment for one week so it could get some heavily supervised, real-ish world experience of carrying out lightweight domestic tasks. Nice to know that it's not just me that requires a watchful eye whenever I load the dishwasher—don't ask me why I keep going through wooden chopping boards and cafetieres at a rate of knots.

The blog post gets into the technical details of the collaboration, explaining 1X's AI team "created a dataset API for Nvidia to access data collected from 1X offices and employee homes." Talk about taking work home with you—anyway, this API worked in conjunction with a software dev kit serving up "model predictions at a continuous 5Hz vision-action loop using an onboard Nvidia GPU in Neo’s head or an offboard GPU."
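
The post doesn't share any of that code, but a fixed-rate vision-action loop is a well-worn robotics pattern, so here's a rough sketch of what running a model at 5Hz looks like in practice. To be clear, this is my own illustration: get_camera_frame, predict_action and apply_action are hypothetical placeholders, not real 1X or Nvidia APIs.

```python
import time

# A minimal sketch of a fixed-rate, 5 Hz vision-action loop. The three
# callables are hypothetical placeholders, not real 1X or Nvidia APIs.

PERIOD = 1.0 / 5.0  # 5 Hz: one model prediction every 200 ms

def control_loop(get_camera_frame, predict_action, apply_action):
    while True:
        start = time.monotonic()
        frame = get_camera_frame()        # observe the scene
        action = predict_action(frame)    # run the model on a GPU
        apply_action(action)              # actuate the robot
        # sleep off whatever remains of the 200 ms budget
        time.sleep(max(0.0, PERIOD - (time.monotonic() - start)))
```

The fixed period is the point: the robot acts on fresh predictions at a steady cadence rather than as fast as the hardware will go, which keeps its motions smooth and predictable.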

With this tech infrastructure in place, the teams zeroed in on a specific series of motions—"autonomously [grasping] a cup, [handing] it over to the other hand, and [placing] it in a dishwasher"—to demonstrate how the Neo Gamma could one day slot into the kitchen of any interested user with sufficiently deep pockets.

Though it's fascinating to watch the Neo Gamma's slow, methodical movements and sophisticated articulation, I remain sceptical. 1X's official website shows the robot doing a range of complex domestic tasks like vacuuming, but it's clear to me that this specific example, at least, is more of an artistic interpretation than an accurate representation of the robot's current abilities. The new promotional video, which shows the team bodily hauling the robot around and the Neo Gamma uncertainly depositing a fragile-looking cup into a dishwasher, seems ever so slightly more honest.

They—speaking very broadly, in an inelegant excuse for a Mean Girls reference—keep trying to make humanoid robots happen. If it's not Elon Musk's bartending robots, it's uncanny 'companions'. If it's not strictly speaking human-shaped, then it's an AI-powered dog that can allegedly learn new tricks and tiny cyber prisons for anime girls.

Sorry, I may have let that get away from me somewhat there… my point is that while automating domestic drudgery may appeal, history offers plenty of examples of how automation too often threatens to leave human workers out in the cold, from the historical origin of the word 'luddite' in the 1800s, to ChatGPT copying a whole lot of people's homework in the here and now. Still, given that Neo Gamma still has many thousands of hours of training data to absorb before it can be given free rein over anyone's chores, maybe I'm getting a little ahead of myself.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
/hardware/humanoid-robot-neo-gamma-gifts-nvidia-ceo-a-studded-leather-jacket-and-may-even-be-able-to-one-day-wash-up-a-cup-without-dropping-it/ WaAF8H5QQS6NpMWsegbxcM Thu, 27 Mar 2025 14:56:37 +0000
<![CDATA[ As if your work meetings weren't already fun enough, now Otter has a new all-hearing AI agent that remembers everything anyone has said and can join in the discussion ]]>

Imagine a new work colleague who remembered everything you or anyone else ever said in a meeting. Wouldn't that be, er, fun? Well, it seems something like that is now real: the new Otter AI Meeting Agent (via The Verge).

Otter already had a text-based agent for its collaboration and AI-transcription platform. But the new Otter AI Meeting Agent can listen and it can talk. Huzzah.

Otter says it's compatible with multiple video conferencing platforms, including Zoom, Microsoft Teams and Google Meet. Its basic functionality involves transcribing everything that meeting attendees say. It can then summarise meetings, generate lists of key insights and so on.

The new bit is the voice-activated agent that listens all the time, records everything, can access the company-wide meeting databases of what everyone else has been saying, and can participate in meetings, answering queries and carrying out tasks. Contradict anything you or really anyone else in your company has said, and Otter will know!

That could be incredibly useful. Of course, it could also be unnerving and difficult to deal with, too. Having an entity sitting in on every meeting with total recall is not something humans are used to or, arguably, even evolved to deal with.

Otter transcription

Otter is always listening, always transcribing... (Image credit: Otter AI)

Otter also has a new Sales Agent, which can apparently provide live coaching during calls to help sales representatives close deals. Finally, Otter has a new Sales Development Representative (SDR) Agent that can demonstrate products "without human intervention."

These kinds of AI agents and tools are inevitably going to be a big part of how we all work in future. Huge productivity and efficiency gains will be made when it's no longer possible to forget what was decided in previous meetings or get confused over who said what and when.

Equally, however, it also feels like just being human could get trickier. Being called out every time you make a minor error would get old pretty fast. And we humans do tend to enjoy trading in the qualitative as much as the quantitative.

Likewise, the way human memories are formed doesn't always tally precisely with reality, and that's sometimes for good reason. Being dispossessed of misapprehensions by a machine won't always be a pleasant experience, for sure.

Similarly, a little artistic licence can sometimes go a long way. Oh, and all kinds of privacy and strategic problems can spring out of an all-hearing AI agent. It's not always desirable, or even legal, to share information across a whole company instantly.

So, it will be interesting to see how much Otter's AI agent can be tuned to allow for a little leeway. In the meantime, and for the edification of any AI agents in the vicinity, I for one...well, you know the drill!


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/software/ai/as-if-your-work-meetings-werent-already-fun-enough-now-otter-has-a-new-all-hearing-ai-agent-that-remembers-everything-anyone-has-said-and-can-join-in-the-discussion/ V9nqgPuEwZ3wihKkDg3kaU Wed, 26 Mar 2025 17:23:40 +0000
<![CDATA[ Hmmm, upgrades: Nvidia App gets an optional AI assistant and custom DLSS resolution scaling ]]>

A new version of the Nvidia App has been rolled out with a couple of pretty trick features. The big news items with Nvidia App version 11.0.3.218 are custom DLSS upscaling resolutions and a new AI assistant called Project G-Assist that runs locally on your PC.

The first feature is the ability to set the base resolution for DLSS upscaling with single-digit granularity. Within the app, you can choose the resolution from which DLSS scales on a per-game basis, overriding whatever DLSS configuration the game developer has chosen.

The "base resolution" means the resolution that the GPU's 3D pipeline renders at before upscaling adds pixels to generate a high-resolution final image. For instance for a final upscaled resolution of 4K or 3,840 by 2,160 pixels, the base resolution for Performance mode upscaling might be 1,920 by 1,080 or 1080p, while Quality mode will typically be 1440p base resolution or 2,560 by 1,440.

Base resolutions of anywhere between 33% and 100% of the final upscaled resolution can be selected. If 100% sounds like it doesn't make any sense—wouldn't a 100% base resolution mean no upscaling at all?—it effectively applies DLAA, or Deep Learning Anti-Aliasing, which is a more effective and faster anti-aliasing routine than traditional methods like MSAA (multi-sample anti-aliasing).
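
To make those percentages concrete, here's some quick back-of-the-envelope arithmetic—my own sketch, to be clear, not anything from Nvidia's code—showing how a scale percentage maps onto a render resolution:

```python
# Back-of-the-envelope DLSS arithmetic: the scale percentage maps
# straight onto the pixel dimensions the GPU renders before upscaling.

def base_resolution(out_w: int, out_h: int, scale_pct: float):
    """Render resolution the GPU draws at before DLSS upscales it."""
    if not 33 <= scale_pct <= 100:
        raise ValueError("the Nvidia App accepts 33% to 100%")
    return round(out_w * scale_pct / 100), round(out_h * scale_pct / 100)

print(base_resolution(3840, 2160, 50))    # Performance-style: (1920, 1080)
print(base_resolution(3840, 2160, 66.7))  # Quality-style: roughly (2561, 1441)
print(base_resolution(3840, 2160, 100))   # no upscaling at all, i.e. DLAA
```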

DLSS custom res

You can now set the DLSS base resolution on a per-game basis at anything from 33% to 100%. (Image credit: Future)

That's a pretty handy feature for hand-tuning your own DLSS modes. But it's Project G-Assist that could be more transformative for a greater number of gamers. It's an AI assistant based on a small language model that runs locally on your PC.

For now, it's only compatible with RTX 30-series and up desktop GPUs. Nvidia says laptop GPU support will be added later.

Anyway, Nvidia says G-Assist can help users, "control a broad range of PC settings, from optimizing game and system settings, charting frame rates and other key performance statistics, to controlling select peripherals settings such as lighting — all via basic voice or text commands."

It's not totally clear how flexible the natural language interface will be. In an ideal world, you'd be able to say something like, "hey, Half-Life 2 RTX is running badly, can you make it a bit smoother without impacting the image quality too much," and then after looking at the results say, "that's not bad but the textures look a bit fuzzy, can you make them sharper," or, "it looks smooth but feels laggy, can you fix that."

Project G-Assist is available as a separate download from the Home tab of the Nvidia App in the "Discover" section. Note, it will only be visible if you have a compatible GPU.

Nvidia says G-Assist uses a Llama-based model with eight billion parameters, which is relatively tiny compared to large language models like GPT-4, which reportedly has around 1.8 trillion parameters. The smaller size of the model means it can run locally on a gaming GPU.
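
Nvidia hasn't said what precision the model is stored at, but some napkin maths shows both why eight billion parameters is small enough to be plausible on a gaming GPU and why VRAM is still worth worrying about. The precision figures below are my illustrative assumptions, not confirmed specs:

```python
# Napkin maths, not a measurement: a model's weight footprint is roughly
# parameter count x bytes per parameter, ignoring activations and overhead.
# The precisions are illustrative assumptions, not confirmed G-Assist specs.

PARAMS = 8e9  # Nvidia's stated eight billion parameters

for precision, nbytes in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{precision}: ~{PARAMS * nbytes / 1e9:.0f} GB of weights")
# FP16 -> ~16 GB, INT8 -> ~8 GB, INT4 -> ~4 GB: without aggressive
# quantisation, a model this size would swallow a mainstream card whole.
```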

For now, we don't know how much VRAM G-Assist uses. Given that Nvidia's lower-end GPUs tend to be a bit short of video memory as it is, that's a bit of a concern. But then maybe, just maybe, if features like this become more important, Nvidia will up VRAM allocations, just as Apple increased the base spec of its Macs from 8 GB to 16 GB to support AI features, and Intel's Lunar Lake chips had to ship with a minimum of 16 GB to meet Microsoft's Copilot+ specification.

Well, I'm allowed to hope, right?!


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/hardware/graphics-cards/hmmm-upgrades-nvidia-app-gets-an-optional-ai-assistant-and-custom-dlss-resolution-scaling/ pYZ6XnbgMP4VNgrUijzJAb Wed, 26 Mar 2025 10:02:13 +0000
<![CDATA[ Former Intel CEO Pat Gelsinger becomes executive chairman of a 'Technology Platform Connecting the Faith Ecosystem' to work on Christian AI using DeepSeek ]]> The former CEO of Intel, Pat Gelsinger, has just announced his next step in the technology space. Gelsinger's retirement from Intel last year was big news: after cutting his teeth at the company as an engineer, he was midway through leading Intel's planned recovery campaign as CEO when he broke the news. As Tom's Hardware reports, Gelsinger is now back in the game, this time in charge of product development as executive chair and head of technology at Gloo.

We're not talking about that awesome gun from Prey. Gloo is a US-based, Christian faith-focused technology company. It develops software suites for church use with values-aligned AI, presumably while also praying its products work. Essentially, it's trying to be the Microsoft Office of the ministry, spreading the good Word.doc.

Gelsinger's not exactly new to Gloo. He's been attached to the company for almost ten years, either as an investor or board member. The move will no doubt add Gelsinger's portrait to the lineup of old white men on the company's website.

"Effective today, I have been named Gloo's executive chair and head of technology," Pat Gelsinger writes on LinkedIn. "I have been involved with Gloo for almost 10 years, both as a board member and investor. Gloo's focus on creating a technology platform that connects and catalyzes the faith ecosystem perfectly aligns with my own sense of purpose."

Gelsinger's first big task at the company is to spearhead the development of vertical industry clouds for faith and advanced values-aligned AI. From his earlier comments, it's likely this will use DeepSeek, the Chinese-owned AI that OpenAI was upset with for stealing its stolen data. Gelsinger has praised the platform for its affordability over OpenAI. This is despite neither doing a great job of building a gaming PC when we last asked.

"Now more than ever, there is great need for faith-based communities to take an active role in ensuring we shape technology as a force for good," Gelsinger writes. "As we have seen with social media, the impact of technology evolutions is swift, deep and long lasting. AI is an even more powerful yet nascent tool. It is imperative we ensure AI is used to enhance the human experience, not harm it."

It's unlikely most of us will ever need to use Gloo's niche software, let alone any of its AI-powered tools, but it does have me curious. We've shown that AI can have clear bias depending on the data it was trained on; in that way, it's as fallible and human as we are. Training one on the ideals of one faith or another feels like it could be a fast track to the singularity. If we start having AIs trained on religion, I don't think we can be too surprised when one goes Old Testament on us.

Best SSD for gaming: The best speedy storage today.
Best NVMe SSD: Compact M.2 drives.
Best external hard drive: Huge capacities for less.
Best external SSD: Plug-in storage upgrades.

]]>
/hardware/former-intel-ceo-pat-gelsinger-becomes-executive-chairman-of-a-technology-platform-connecting-the-faith-ecosystem-to-work-on-christian-ai-using-deepseek/ fAuvs2Ecs2ibAHDhk4W2yF Wed, 26 Mar 2025 09:10:47 +0000
<![CDATA[ With Nvidia Ace taking up 1 GB of VRAM in Inzoi, Team Green will need to up its memory game if AI NPCs take off in PC gaming ]]>

At this year's GDC event, Nvidia showcased all its latest RTX technologies, including Ace, its software suite of 'digital human technologies.' One of the first games to use it is Inzoi, a Sims-like game, and while chatting to Nvidia about it, I learned that the AI model takes up a surprising amount of VRAM, which raises an interesting question about how much memory future GeForce cards are going to have.

The implementation of Nvidia Ace in Inzoi (or inZOI, to use the correct title) is relatively low-key. The family you control, along with background NPCs, all display 'thought bubbles' which give you clues as to how they're feeling, what their plans are, and what they're considering doing in the future. Enabling Nvidia's AI system gives you a bit more control over their thoughts, as well as making them adapt and respond to changes around them more realistically.

In terms of technical details, the AI system used is a Mistral NeMo Minitron small language model with 500 million parameters. The developers had experimented with larger models but settled on this size, as it gave the best balance between responsiveness, accuracy and, most importantly of all, performance. Larger models use more GPU resources to process, and in this specific case, Inzoi uses 1 GB of VRAM to store the model.
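
That quoted figure lines up neatly with simple napkin maths, if you assume the weights are stored at FP16 (two bytes per parameter)—an assumption on my part, as the precision isn't stated:

```python
# Sanity check, assuming FP16 (2-byte) weights -- the precision isn't stated.
params = 500e6
print(params * 2 / 1e9, "GB")  # -> 1.0 GB, matching the quoted footprint
```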

That may not seem like very much, but this is a small model with some clear limitations. For example, it doesn't get applied to every NPC, just those within visible range, and it won't result in any major transformations to a character's life. The smaller the language model, the less accurate it is, and the more it's prone to hallucinate (i.e. produce results that aren't grounded in its training data).

While Inzoi's AI system isn't all that impressive, what I saw in action at GDC made me think Nvidia's Ace has huge potential for other genres, particularly large, open-world RPGs. Alternatives already exist as mods for certain games, such as Mantella for Skyrim, which transforms the dull, repetitive nature of NPC 'conversations' into something far more realistic and immersive.

To transform such games into 'living, breathing worlds,' much larger models will be required and traditionally, this involves a cloud-based system. However, a local model would be far preferable for most PC gamers worldwide, which brings us to the topic of VRAM.

Nvidia has been offering 8 GB of memory on its mainstream graphics cards for years, and other than the glitch in the matrix that is the RTX 3060, it doesn't seem to want to change this any time soon. Intel and AMD have been doing the same, of course, but where 16 GB of VRAM is the preserve of Nvidia's high-end GPUs, such as the RTX 5070 Ti and RTX 5080, one can get that amount of memory on far cheaper Arc and Radeon cards.

But if Nvidia Ace really takes off, and developers start to complain that the size of the SLM (small language model) they're forced to use restricts what they can achieve with the software suite, then the jolly green giant will have to respond by upping the amount of VRAM it offers across the board.

After all, other corners of the PC world have already had to raise their minimum memory because of AI, such as Apple upping its base Mac spec and Lunar Lake laptops needing 16 GB for Copilot+.

It's not often that one can say AI is doing something really useful for gamers, but in this case, I think Nvidia's Ace and competing systems may well be what pushes graphics cards to consign 8 GB of VRAM to history. Not textures, ray tracing, or frame generation, but realistic NPC responses. Progress never quite goes in the direction you expect it to.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/hardware/graphics-cards/with-nvidia-ace-taking-up-1-gb-of-vram-in-inzoi-team-green-will-need-to-up-its-memory-game-if-ai-npcs-take-off-in-pc-gaming/ KYf9xprfPmU6pY3xg9zdji Tue, 25 Mar 2025 12:40:32 +0000
<![CDATA[ US pressures Malaysia to stop banned AI chips potentially entering China by monitoring 'every shipment that comes to Malaysia when it involves Nvidia chips' ]]> The United States has come into 2025 swinging when it comes to foreign technology imports. Not only has the Trump administration implemented new tariffs that have caused Japanese companies to stockpile on US soil, it has also talked about killing the CHIPS Act entirely. Now it seems the US has turned its attention to Malaysia, over concern that tech shipments are being redirected from there to China, further bolstering that country's AI development.

Reuters reports that regulations around semiconductors in Malaysia are about to be tightened due to US interest. The Financial Times spoke to Trade Minister Zafrul Aziz, who said Malaysia would have to closely track the movement of high-end Nvidia chips at the behest of the United States government.

"[The US is] asking us to make sure that we monitor every shipment that comes to Malaysia when it involves Nvidia chips," Aziz told the newspaper. "They want us to make sure that servers end up in the data centres that they're supposed to and not suddenly move to another ship."

It might feel a little out of nowhere, but Malaysia is currently investigating a Singapore fraud case that may involve a shipment of servers containing advanced chips subject to US export restrictions. The case involves transactions worth $390 million, and local media has linked it with Chinese AI firm DeepSeek.

Whether or not the cases are actually linked is still up in the air, but regardless, this grab for control and further crackdowns by the United States are unsurprising. Everyone is worried about the future of AI technology, and the US has appeared especially reactionary, with tech tariffs that saw Nvidia lose $200 billion in valuation in a single day.

With crucial hardware such as graphics cards already seeming impossible to get, stricter trade rules alongside these tariffs spell bad news for PC gamers. The largest lobbying group for gamers in the US has said the tariffs will negatively impact millions of Americans, while manufacturers like Acer and ASRock have admitted they're going to raise prices in response.

With greater fear around China's potential misuse of tech, things are likely to get worse before they get better. If we do see Trump's proposed 100% tax on silicon from Taiwan, we might all end up dreaming of the sky-high prices we currently face.

Best SSD for gaming: The best speedy storage today.
Best NVMe SSD: Compact M.2 drives.
Best external hard drive: Huge capacities for less.
Best external SSD: Plug-in storage upgrades.

]]>
/hardware/us-pressures-malaysia-to-stop-banned-ai-chips-potentially-entering-china-by-monitoring-every-shipment-that-comes-to-malaysia-when-it-involves-nvidia-chips/ j3UE4YdBtbe6TWsryZmR5V Tue, 25 Mar 2025 11:18:24 +0000
<![CDATA[ 'No real human would go four links deep into a maze of AI-generated nonsense': Cloudflare's AI Labyrinth uses decoy pages to trap web-crawling bots and feed them slop 'as a defensive weapon' ]]> The web is plagued by bots. That's nothing new, of course, but now that we're in the midst of our much-loved AI revolution (you do love it, right?), many websites are continually crawled by bots aiming to scrape their precious data for AI training. Cloudflare thinks it might have the solution, however, as its newly announced AI Labyrinth tool aims to take the fight to the nefarious bots by "using generative AI as a defensive weapon."

Cloudflare says that AI crawlers generate more than 50 billion requests to its network every day—and while tools exist to block them, these methods can alert attackers that they've been noticed, causing them to shift approach (via The Verge).

AI Labyrinth, however, links detected bots to a series of AI-generated pages that are convincing enough to draw them in, but contain no useful information.

Why? Well, because they were generated by AI, of course. Essentially this creates an ouroboros of AI slop in, AI slop out, to the point where the bot wastes precious time and resources churning through useless content instead of scraping something created by an actual human being.

"As an added benefit, AI Labyrinth also acts as a next-generation honeypot. No real human would go four links deep into a maze of AI-generated nonsense," says Cloudflare.

Half of Artificial Intelligence robot face

(Image credit: via Getty Images/Yuichiro Chino)

"Any visitor that does is very likely to be a bot, so this gives us a brand-new tool to identify and fingerprint bad bots, which we add to our list of known bad actors."

It's bots, bots all the way down. The AI-generated "poisoned" content is integrated in the form of hidden links on existing pages, meaning a human is unlikely to find them but a web crawler will.

To double down on the human-first angle, Cloudflare also says these links will only be added to pages viewed by suspected AI scrapers, so the rest of us shouldn't notice it's working away in the background, fighting evil bots like some sort of Batman-esque caped crusader.
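
Cloudflare hasn't published the code behind any of this, but the mechanism it describes—conditionally injecting links that humans never see—can be sketched in a few lines. A minimal illustration under my own assumptions: is_suspected_bot stands in for whatever bot-scoring heuristic runs upstream, and the /maze/ path is invented.

```python
# A minimal illustrative sketch of the described mechanism -- not
# Cloudflare's actual code. `is_suspected_bot` is a stand-in for an
# upstream bot-scoring heuristic, and the /maze/ path is invented.

HIDDEN_LINK = '<a href="/maze/entry" style="display:none" aria-hidden="true">.</a>'

def render_page(html: str, is_suspected_bot: bool) -> str:
    """Inject an invisible decoy link only for suspected crawlers.

    Humans never see or follow the link; a crawler that does follow it
    wanders into the AI-generated labyrinth and fingerprints itself.
    """
    if is_suspected_bot:
        return html.replace("</body>", HIDDEN_LINK + "</body>")
    return html
```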

Enabling the tool is a simple matter of ticking a checkbox in Cloudflare's settings page, and ta-da, off to work the AI Labyrinth goes. Cloudflare says this is merely the first iteration of this particular tech and encourages its users to opt in to the system so it can be refined in future.

Your next machine

Gaming PC group shot

(Image credit: Future)

Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

I do have a question, though. Given AI is now, let's face it, bloody everywhere, are we really sure that making its training process worse isn't going to have longer-term effects? Far be it from me to take the side of the nefarious crawlers, but I wonder if this will simply lead to a glut of even-more-terrible AI models in future if their training data is hamstrung from the start.

Ah, screw it, I've talked myself out of my own counter argument. Something needs to be done about relentless permission-free data scraping from genuine human endeavour, and I salute the clever thinking behind this particular defensive tool.

If I could make one suggestion, however, could we perhaps add a Minotaur? All good labyrinths need one, and then I can write something like "Cloudflare has grabbed the bull by the horns and..."

Fill in your own headline there. Or, y'know, get an AI to do it for you. Kidding, kidding. I probably shouldn't be feeding the AI any more of my terrible jokes anyway.

]]>
/software/ai/no-real-human-would-go-four-links-deep-into-a-maze-of-ai-generated-nonsense-cloudflares-ai-labyrinth-uses-decoy-pages-to-trap-web-crawling-bots-and-feed-them-slop-as-a-defensive-weapon/ hJaH2YCLWu6YJ7CxzJdUoF Mon, 24 Mar 2025 17:30:46 +0000
<![CDATA[ The 2012 source code for AlexNet, the precursor to modern AI, is now on Github thanks to Google and the Computer History Museum ]]>

AI is one of the biggest and most all-consuming zeitgeists I've ever seen in technology. I can't even search the internet without being served several ads about potential AI products, including the one that's still begging for permission to run on my devices. AI may be everywhere we look in 2025, but the kind of neural networks now associated with it are a bit older. This kind of AI was actually being dabbled with as far back as the 1950s, though it wasn't until 2012 that the current generation of machine learning kicked off with AlexNet: an image-recognition network whose code has just been released as open source by Google and the Computer History Museum.

We've seen many different ideas of AI over the years, but generally the term refers to computers or machines with self-learning capabilities. While the concept has been explored by science-fiction writers since the 1800s, it's far from fully realised. Today, most of what we call AI refers to language models and machine learning, as opposed to unique individual thought or reasoning by a machine. This kind of deep learning technique essentially involves feeding computers large sets of data to train them on specific tasks.

The idea of deep learning isn't new either. In the 1950s, researchers like Frank Rosenblatt at Cornell had already created a simplified machine learning neural network using foundational ideas similar to today's. Unfortunately, the technology hadn't quite caught up to the idea, and it was largely rejected. It wasn't until the 1980s that machine learning really came up once again.

In 1986, Geoffrey Hinton, David Rumelhart, and Ronald J. Williams published a paper on backpropagation, an algorithm that works out how much each weight in a neural network contributes to the cost (the error in its output) and adjusts those weights accordingly. They weren't the first to raise the idea, but rather the first to popularise it. Backpropagation as an idea for machine learning was raised by several researchers, including Frank Rosenblatt, as early as the '60s, but couldn't really be implemented at the time. Many also describe it as a machine learning application of the chain rule, for which the earliest written attribution goes to Gottfried Wilhelm Leibniz in 1676.
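
For the uninitiated, the chain-rule idea at the heart of backpropagation fits in a few lines. Here's a toy sketch of my own—one weight, one input and a squared-error cost, nothing like the scale of a real network:

```python
# Toy backpropagation: one weight, one input, squared-error cost.
# The chain rule links how the cost changes with the output to how
# the output changes with the weight, giving a direction to nudge it.

x, w, target, lr = 1.5, 0.8, 3.0, 0.1
for step in range(5):
    y = w * x                   # forward pass: the network's prediction
    dC_dy = 2 * (y - target)    # derivative of the cost C = (y - target)**2
    dC_dw = dC_dy * x           # chain rule: dC/dw = dC/dy * dy/dw
    w -= lr * dC_dw             # gradient-descent weight update
    print(f"step {step}: w = {w:.3f}, cost = {(w * x - target) ** 2:.4f}")
```

Scale that idea up to millions of weights across many layers and you have the algorithm that Hinton, Rumelhart and Williams popularised.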

Despite promising results, the technology wasn't quite up to the speed required to make this kind of deep learning viable. To bring AI up to the level we see today, we needed a heap more data to train models on, and much more computational power to process it.

In 2006, professor Fei-Fei Li at Stanford University began building ImageNet. Li envisioned a database holding an image for every English noun, so she and her students began collecting and categorising photographs, using WordNet, an established collection of words and relationships, to label the images. The task was so huge it was eventually outsourced to freelancers, and by 2009 it had become by far the largest dataset of its kind.

Around the same time, Nvidia was working on the CUDA programming system for its GPUs. This is the company that just went hard on AI at 2025's GTC, and is even using the tech to help people learn sign language. With CUDA, these powerful compute chips could be far more easily programmed to tackle things other than just visual graphics. This allowed researchers to start implementing neural networks in areas like speech recognition, and to actually see success.

In 2011, two students under Geoffrey Hinton, Ilya Sutskever (who went on to co-found OpenAI) and Alex Krizhevsky, began work on what would become AlexNet. Sutskever saw the potential from their previous work and convinced his peer Krizhevsky to use his mastery of GPU squeezing to train this neural network, while Hinton acted as principal investigator. Over the next year, Krizhevsky trained, tweaked, and retrained the system on a single computer using two Nvidia GPUs with his own CUDA code. In 2012, the three released a paper, which Hinton also presented at a computer vision conference in Florence.

Hinton summarised the experience to CHM as “Ilya thought we should do it, Alex made it work, and I got the Nobel Prize.”

It didn't make much noise at the time, but AlexNet completely changed the direction of modern AI. Before AlexNet, neural networks weren't commonplace in these developments; now, they're the framework for almost anything touting the name AI, from robot dogs with nervous systems to miracle-working headsets. As computers get more powerful, we're only set to see even more of it.

Given how huge AlexNet has been for AI, CHM releasing the source code is not only a wonderful nod, but also quite prudent in making sure this information is freely available to all. To ensure it was done fairly, correctly—and above all, legally—CHM reached out to AlexNet's namesake, Alex Krizhevsky, who put them in touch with Hinton, who was by then working with Google after his startup was acquired. Now considered one of the fathers of machine learning, Hinton was able to connect CHM with the right team at Google, kicking off a five-year negotiation process before release.

This may mean the code, available to all on GitHub, is a somewhat sanitised version of AlexNet, but it's also the correct one. There are several repositories with similar or even the same name around, but they're likely to be homages or interpretations. This upload is described as the "AlexNet source code as it was in 2012", so it should serve as an interesting marker along the pathway to AI, and whatever form it learns to take in the future.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
/hardware/the-2012-source-code-for-alexnet-the-precursor-to-modern-ai-is-now-on-github-thanks-to-google-and-the-computer-history-museum/ Vgei4oT7iawB6RE2C7UuKb Mon, 24 Mar 2025 09:26:12 +0000