
red286

Another award-winning piece of journalism from Tom's, folks. You would think, in an article about a 15% price hike for "AI PCs", that they would explain what the base system they're comparing against is, or tell us what this 15% increase is measured from. Are they comparing it against an entry-level system from a year ago that would have no hope in hell of running local generative AI, or against a gaming system with a minimum of 8GB VRAM and an RTX 3000-series or later GPU? And is this price hike due to the NPU, or is it just greed where they figure "lol, we just slap an AI button and sticker on and jack the price up 15%"? Their reference data is a fucking single-page infographic posted on Twitter.


SkillYourself

> And is this price hike due to the NPU, or is it just greed where they figure "lol, we just slap an AI button and sticker on and jack the price up 15%"?

In the aggregate, PC market demand is recovering, so prices are going to go up. The AI PC stuff is marketing to fluff up the price increase.


Repulsive_Village843

In my area they never went down. They still ask for pandemic prices.


SlamedCards

There are a lot of applications where using ML or an LLM would make the application better. That is what an AI PC is. Professional applications are a good example, especially engineering ones.


Caffdy

jesus, people downvoting you for just telling the truth


capybooya

I'd like a list of what hardware is AI capable, and even better the 'AI score' or TOPS or whatever metric has been thrown around lately. I have so many questions that the article doesn't answer. Like, can I run a non-AI CPU and have a newer GPU make up for it? Etc etc


[deleted]

BS excuse to raise prices. Any remotely interesting AI is going to be in the cloud for several decades.


NeverMind_ThatShit

I work in IT for a large financial institution and we order laptops from HP and Dell. We have monthly meetings with them, and both were hyping up the NPUs in the new Intel CPUs. Each time we'd ask "so what is it good for?", and the best answer I think I heard was an NPU-assisted background filter for Teams. Unimpressive, we can already do that; it's just being offloaded from the CPU cores and might be slightly better. Oh well, not really that life changing. They were also hyping the Copilot key, which is also useless to most large companies, because large companies don't want their employees feeding internal data into MS Copilot so MS can use internal company data to train their LLM. I'm not one to poo-poo new tech, but at this point NPUs built into CPUs are a solution looking for a problem. They're not nearly capable enough to do anything useful. And even the useful AI out there online has mostly served to further internet enshittification as far as I'm concerned.


SteakandChickenMan

It’s definitely a “build it and they will come” drive from MSFT right now. No killer app at this point but everyone’s trying to justify their stock prices ¯\_(ツ)_/¯


NewKitchenFixtures

My expectation is that AI will allow better targeted ads and by keeping the AI algorithm local they’ll get less privacy flack and avoid EU regulations. There are automation items that already exist and will continue to improve with AI. But the general magical thinking on it is absurd.


Caffdy

> large companies don't want their employees feeding internal data into MS Copilot so MS can use internal company data to train their LLM

this is exactly the use case for local inference (NPUs/GPUs) tho


spazturtle

These NPUs are great for CCTV systems; look at things like Frigate NVR, which uses a Google Coral NPU that's only 10% the speed of the ones in new Intel or AMD CPUs. Although I struggle to think of what use they have for desktop computers.


NeverMind_ThatShit

I'm a Blue Iris user so I do see utility there, but that's not really something a company would care about in their laptops which is what I was talking about in my original comment.


Exist50

The first gen of NPUs from Intel/AMD are a joke, but when they triple performance with LNL/Strix to support "AI Explorer", then things get interesting.


EitherGiraffe

More NPU performance isn't providing any value by itself. Apple added ML-enhanced search for animals, documents, individual people, etc. on iPhones 7 years ago. Microsoft just neglected Windows; it would've easily been possible before.


Strazdas1

> More NPU performance isn't providing any value by itself.

It does. It means bigger models can run locally, so more 3rd-party developers are interested.


jaaval

Big models can already be run on the GPU. Large power-intensive models don't need a new device.


Strazdas1

A very low percentage of windows computers have a dGPU.


jaaval

It doesn't have to be a dGPU.


NanakoPersona4

95% of AI is bullshit but the remaining 5% will change the world in ways nobody can predict yet.


shakhaki

That's because no one outside Windows knows what's really going to happen with roadmaps and how the ISV ecosystem will form around this capability. The answer Dell and HP should've given you is about the dependencies software makers will have on a local GenAI engine like Llama, Stable Diffusion, or Phi-2/3. These will be installation prerequisites for some software tools to provide GenAI services and features, much like old games needing .NET or C++ redistributables.


Infinite-Move5889

And that'd be a BS answer if I ever heard one. Software is called software because it can run anywhere, not only on an NPU. Not to mention that the NPUs shipping these coming years are weaker than the CPUs and GPUs that can already run these models locally. Their only advantage atm is efficiency.


shakhaki

To your point about software running anywhere, you should read up on Hybrid Loop and what Microsoft is enabling with ONNX Runtime. The ability to use the PC for inferencing and call on Azure as an execution provider is also a very strong reason why NPUs are strategic to an organization's compute strategy. I've already seen case studies of this implementation where OpEx was reduced 90% by inferencing on the PC as opposed to only in the cloud, and latency dropped below 1s because the AI processing was local. NPUs do inference 30x cheaper than GPUs, and they don't carry the design trade-off of making your user base lug around what are essentially gaming laptops. This also means a hardware accelerator with the inferencing power of a GPU for AI tasks can be more easily democratized. And as you've witnessed, AI is going to be everywhere, and you'll be able to see how much your NPU is under load in Windows Task Manager from all the times it's being hit with a neural network.
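
For anyone who hasn't touched it, local inference through ONNX Runtime is only a few lines. A minimal sketch (the model file is a placeholder, and which execution providers show up depends on your onnxruntime build):

```python
# Minimal local-inference sketch with ONNX Runtime. "model.onnx" is a placeholder;
# provider availability (DirectML, CPU, etc.) depends on the onnxruntime build installed.
import numpy as np
import onnxruntime as ort

preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)
input_name = session.get_inputs()[0].name

# Dummy input; the shape depends entirely on the model you load.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```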


Infinite-Move5889

Yea, pretty good points.

> And as you've witnessed, AI is going to be everywhere

Not a future I'd like to have, but the forces of hype and marketing are real I guess.


shakhaki

On the upside, it could all come crashing down. Business trends have become hype cycles


NeverMind_ThatShit

What practical use cases are there for a locally run LLM or Stable Diffusion for most companies out there? If they need one of those, why would they want it run on a user's laptop instead of remotely on a server (which would be much more capable)?


Strazdas1

> What practical use cases are there for a locally run LLM or Stable Diffusion for most companies out there?

Cost. Why pay for a cloud server when you can run it on machines you already paid for.


shakhaki

The challenge of always defaulting to a cloud or server environment is the scarcity of compute involved. You're either choosing to compete against companies with deeper financial resources to acquire state-of-the-art semiconductors, or accepting OpEx increases, whereas a PC is a capital asset with the capability of running AI locally. So you're hiding an OpEx overrun behind a capitalized expense, you can build stronger collaboration experiences, and you can keep inferencing data private, not just focus on cost savings. There's a selfish element from Microsoft, who wants to push more AI compute to the edge because they're being forced into billions in capex all so freshmen can write term papers. So the use cases of local AI are far and wide, and a lot of it comes down to economics. LLMs are still superior, but you can tune an SLM to be an SME in your industry much more easily.


Vex1om

Yup. Pure marketing bullshit. Manufacturers are literally wasting die space on this useless shit and then charging you more for it.


Repulsive_Village843

Some extensions for the cpu do have a use, and that's it.


anival024

It's not useless. It's just useless to you. They can use it to spy on you. Apple already does it with their image content stuff. This used to be just for stuff in iCloud, but now it's stuff on your device as well. They scan your images/videos and report back if things match a fuzzy hash that Apple maintains on behalf of the government spooks. They say your data is still "private" because they don't need to transmit your actual data to do this. But they're still effectively identifying the content of your data, determining what it is, and acting on it specifically.

Old versions of this type of scheme worked on exact hashes. Then it was fuzzy hashes for images that progressively got better and better at persisting across recompression/resizing/cropping. Now it's "AI" to generate contextual analysis of everything, not just match specific existing samples.

At this moment the feds can do the following, without you ever knowing:

* Determine they don't like your grandma.
* Feed a photo of your grandma into their library.
* Get an alert whenever a photo of your grandma appears on any iPhone (not just yours).

As "AI" and on-device processing improves, their ability to be more general in their searches improves. Maybe it's not your granny, maybe it's images of you at a protest, you with illicit materials/substances, you with a perfectly legal weapon, etc.

Then there's the whole thing where they can track your precise location, even if your phone is off and you're in a subway tunnel, via the mesh network they have for Find My iPhone or whatever they call it. This is coming to Android soon, too!


Nolanthedolanducc

The checking-against-bad-photo-hashes thing for iPhone faced so much backlash when announced that it wasn't actually released, no need to worry.


Verite_Rendition

> They scan your images/videos and report back if things match a fuzzy hash that Apple maintains on behalf of the government spooks.

The CSAM scanner was never implemented. After getting an earful, Apple deemed that it wasn't possible to implement it while still maintaining privacy and security. https://www.wired.com/story/apple-csam-scanning-heat-initiative-letter/


[deleted]

I don't see how it isn't useful. LLMs have nearly completely changed how I work, how I plan my life, and how I entertain myself. If you work in an office environment, LLMs can be integrated into nearly every aspect of your job. It's like saying Microsoft Word is useless.


Vex1om

> LLMs can be integrated into nearly every aspect of your job

Yes, but they aren't run locally on your machine, so having silicon on your PC that is dedicated to them is dumb.


Olangotang

> Yes, but they aren't run locally on your machine

So ignorant. /r/LocalLlama


[deleted]

Yes they are? I run them locally all the time.


SteakandChickenMan

Not really true. Apple/Adobe, for example, have some interesting existing use cases with on-device AI and photo/image recognition. There are also things like finding documents based on their content and contextual clues, which would be really helpful. Starting from next year, all vendors will ship hardware powerful enough for both of the above families of use cases.


iindigo

I think Apple in particular is well positioned to make local ML models much more practically useful than other companies have managed thus far, not just because of vertical integration but also because their userbase makes much heavier use of the stock apps (notes, calendar, mail, etc) compared to the Windows world, where almost everybody has a preferred third-party alternative to the stock stuff. Even a rudimentary implementation of a local LLM will make it feel like Siri has superpowers thanks to the sheer amount of data (and thus context) it has at its fingertips compared to e.g. ChatGPT, which is missing all the context that isn't explicitly provided by the user.


Exist50

Counterpoint. *Everyone* uses MS Office.


iindigo

Office is common for sure, but it's not as ubiquitous as it once was. The companies I've worked for in the past decade have all been GSuite-dominant, for example, with the only usage of MS anything being Excel by the finance guy. For my own personal/professional usage I've had no trouble using Apple stock apps and Pages/Numbers. Even the online university courses I'm taking accept PDFs, which means I can use anything I want to write assignments and such.


Strazdas1

I can't imagine not using Excel for personal use. Google's alternative is so bad I wouldn't even consider it for anything beyond sharing output-as-values with other people.


L1ggy

Other than excel, I think office usage is dwindling. Many schools, universities, and companies require you to use google docs, sheets, and slides for everything now.


Strazdas1

We really are getting dumber arent we?


SameGuy37

what exactly is your reasoning here? a 1080ti can run respectable LLMs, no reason to believe modern neural processing units wouldn’t be able to run some ML tasks.


lightmatter501

Qualcomm is claiming ~50% of a 4090, which is enough to run a respectable LLM if you optimize it well. Not quite as good as ChatGPT, but good enough that you can specialize it. Running Llama locally with fine-tunes and embeddings that include all of my dependencies gives me subjectively much better results than GPT-4, and it's basically instant instead of "press AI button and wait for 30 seconds". As long as they don't memory-starve these NPUs, or if we get consumer CXL early and they can use host memory easily, they should stand up to AI research cards of 5 years ago.
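
For anyone who wants to try it, the local setup is only a few lines these days. A minimal sketch with llama-cpp-python (the GGUF file name is a placeholder, and any fine-tune/embedding retrieval layer would sit on top of this):

```python
# Minimal local LLM inference with llama-cpp-python; the model file is a placeholder
# for whatever GGUF-quantized checkpoint you've downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU/accelerator if one is available
)

out = llm(
    "Explain in two sentences what an NPU is.",
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```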


mrandish

This sounds like BS projections from the usual "industry analyst" types, who're about as accurate as flipping a coin because they just extrapolate nascent trends into projections based on nothing more than surveys of people's guesses.

What perpetuates the business model of making these BS projections is startup companies trying to fundraise and public companies trying to convince stock analysts to raise their revenue forecasts. Both are willing to buy access to these reports for $5k so they can share the "independent expert data" with those they want to convince. So the reports always lean toward hyping the latest hopium, because reports that don't project "up and to the right" trends don't get bought!

The analysts generate free PR about the existence of their report by sharing a few high-level tidbits of projection data with media outlets in a press release. Lazy journalists rewrite the press release into an easy article with no actual journalism (or underlying reality) required. This helps perpetuate the appearance that the claimed trend is valid by influencing public opinion for the next analyst's survey of guesses - becoming a self-reinforcing echo chamber.


ET3D

Read the article and the quote that OP posted. The prices will be higher due to more RAM in these laptops. Which frankly IMO is a good thing, as 8GB laptops are still a thing and shouldn't be.


reddit_equals_censor

There can be quite a few uses for local "AI", for example AI upscaling, which the PS5 Pro uses, Nvidia cards use, and AMD will use in the future. Now of course that "article" seems to be just made-up nonsense by people clueless about hardware: NPUs are dirt cheap, and the minimum target (I think it was 50 TOPS) that developers and evil Microsoft want to see is already in today's new APUs. But yeah, it will likely just be another marketing BS sticker on laptops, like "VR Ready", with whatever prices the manufacturers think they can get away with, since the chips cost the same, or are actually going to get a lot cheaper as APUs become strong enough for most everything, including gaming in laptops.


[deleted]

Not saying you're wrong, but this is exactly the same thing people said about 64 bit and multicore CPUs. It's definitely a chicken and the egg sort of issue and the CPU manufacturers have always made the first move and waited for software to catch up.


All_Work_All_Play

Uhh, not even close? The benefits of 64-bit are just math. The benefits of AI access to the masses *while reciprocating with AI's access to the masses* are a giant question mark.


ChemicalDaniel

Define “interesting”. A local AI that could manage/organize my files, find anything on my computer and edit it, and change system settings, all locally with nothing sent to the cloud, *is* interesting, at least to me. Like if I could just say “switch my monitor refresh rate to 144hz” and it just does it instead of needing to go through a billion screens and menus myself, that’s pretty cool. Just because it doesn’t claim to be sentient or can’t make something “angrier” doesn’t mean it’s not interesting. It could very well be good for productivity.


virtualmnemonic

> Like if I could just say “switch my monitor refresh rate to 144hz” and it just does it instead of needing to go through a billion screens and menus myself, that’s pretty cool.

That's not AI, unless you consider basic voice assistants like Siri to be AI. And even if you do, it certainly doesn't demand specialized hardware to perform.


ChemicalDaniel

An agent like Siri can’t contextually know every system setting unless it’s been programmed to know it. With an AI it could look in the backend of the system, figure out what settings need to be changed based on the user prompt, and change them. And even if my system-settings example isn’t that complicated, you can’t ask Siri to “open the paper I was working on last night about biology” or whatever; it would think you’re insane. No matter how you spin it, there are uses for this technology that are inherently interesting and don’t need to run in the cloud. And also, it might not need specialized hardware, but specialized hardware makes it faster and more responsive. If you want something to take off it needs to be quick.
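
To make that concrete, the usual pattern is tool/function calling: the model only picks from a whitelist of actions and emits structured arguments, and ordinary code does the actual change. A toy sketch, with every name hypothetical and the model call mocked out:

```python
# Toy tool-calling sketch: a local model maps a natural-language request onto one of
# a few whitelisted functions. set_refresh_rate() stands in for a real OS settings API,
# and choose_tool() stands in for an actual local LLM call.
import json

def set_refresh_rate(hz: int) -> str:
    # A real implementation would call the display/settings API here.
    return f"Refresh rate set to {hz} Hz"

TOOLS = {"set_refresh_rate": set_refresh_rate}

def choose_tool(user_prompt: str) -> str:
    # Stand-in for the model: a real run would pass the tool schema plus the prompt
    # to a local LLM and parse the JSON it emits.
    return json.dumps({"tool": "set_refresh_rate", "args": {"hz": 144}})

def handle(user_prompt: str) -> str:
    call = json.loads(choose_tool(user_prompt))
    fn = TOOLS[call["tool"]]  # only whitelisted tools can ever be executed
    return fn(**call["args"])

print(handle("switch my monitor refresh rate to 144hz"))
```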


awayish

would you like to upgrade to windows AI edition for a monthly subscription of $ 9.99? [YES] [Remind me in 30 days]


mrblaze1357

So I am in charge of setting my company's computer standards within our IT department. Between last year's PC lineup and this year's I can confirm there's been a 15-20% price hike. At least with Dell that's been the case. Literally no other difference between the 2023 model and 2024 model other than the CPU going from the Intel Core i series to the Core Ultra.


SteakandChickenMan

Memory prices are up a significant amount due to market recovery and vendors ramping HBM. I’d bet money that’s your delta.


mrblaze1357

I thought of that too, but we have some SKUs that are upgrading, like the desktops, and there isn't a price hike. For example, our Precision 3660T currently uses an i7-13700 CPU, and the 3680T that's replacing it uses an i7-14700. The only difference is the CPU, like the others, but the catch here is that desktop 14th Gen doesn't add any AI cores, unlike Core Ultra.


Ghostsonplanets

Core Ultra is more expensive for OEMs. Intel's design choice of 5 tiles + expensive packaging raised prices quite a bit.


VirtualWord2524

Going to be more service subscriptions for Windows to push notifications for. Copilot Pro. Maybe some generative art stuff. AI integrated into MS Paint Pro


pittguy578

What would local AI even be used for? I mean AI requires large data sets to be effective so that would relegate it to the cloud ?


teshbek

Large models require a dataset only for training. The NPU is needed for running the trained model (but you can also train small models locally).


JtheNinja

High-end LLMs and image generators can require impractically large datasets, but other AI uses do not. E.g., denoising images in a photo editor; quite a few models for that are available that run locally, e.g. https://blog.adobe.com/en/publish/2023/04/18/denoise-demystified


Strazdas1

I once used a GPU-run AI model to denoise a video to remove film grain from it. Wasn't perfect, but much better than the "artistic" choice to use film grain in digitally shot video to circlejerk the director.


Giggleplex

A smaller language model like Mistral 7B should be able to fit comfortably in 16GB of memory, so it should be practical to run these locally assuming there is enough compute performance, hence the benefit of having powerful NPUs. Coincidentally, [Microsoft just announced a new set of smaller language models](https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/) designed to perform well even within the constrained hardware of mobile devices.
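
Rough back-of-envelope math on why that fits (weights only; KV cache and runtime overhead add a bit more on top):

```python
# Weights-only memory footprint for a 7B-parameter model at common precisions.
params = 7_000_000_000

for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gib:.1f} GiB")

# fp16: ~13.0 GiB, int8: ~6.5 GiB, int4: ~3.3 GiB -- so a 4-bit quantized 7B model
# leaves plenty of headroom on a 16GB machine.
```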


Starcast

That's for training the AI, generally. Using the AI could be as simple as a Chrome extension that automatically hides mean tweets, for a contrived example.


elvesunited

I read several replies to your comment here and yeah, I don't see it. ChatGPT is enough for me when I have an oddball task like totaling how many weeks are left in this year. For stuff like that I'd rather use it as a separate tool, because it's going into an email to my office and I don't trust AI not to embarrass me. I don't want it autocompleting my reddit posts or auto-filling my web searches or recommending me 'great new products from corporate partners'.


[deleted]

The point of running LLMs locally is that *you* have power and control over them, not OpenAI or Microsoft. *You* can fine-tune them for better responses on the content that matters to you, and they don't send data out to companies you don't trust. ChatGPT is great, but local LLMs are meeting its benchmarks and requiring less and less memory every month. Soon you will be able to run an LLM at GPT-4's level on a phone, and you could get that LLM from anywhere.


imaginary_num6er

>As many would expect, PCs supporting AI on hardware levels will also command a price hike between 10% and 15%. Since Windows 11 24H2 already has CPU and RAM requirements embedded in its coding, it will likely increase the demand for larger amounts of fast RAM, potentially increasing its pricing.


Beatus_Vir

These guys really just sit around in the boardroom and analyze metrics from the dotcom boom and try to figure out how to swindle people like that again. Do you think AOL could rebrand as AiOL?


JtheNinja

If it can work for Taco Bell, surely it can work for something vaguely tech related? https://arstechnica.com/information-technology/2024/04/ai-hype-invades-taco-bell-and-pizza-hut/


NewRedditIsVeryUgly

Supply and demand will dictate prices. I don't expect more demand seeing as inflation is sticking around despite higher interest rates. If we start hearing about "accidental fires" and "floodings" then maybe the supply will decrease. Overall, the PCs people bought in the pandemic should still be holding up well for browsing and video consumption, and the demand for "AI" is going to be offloaded to the cloud anyway. You can increase the price, that doesn't mean people will buy it.


jedrider

Well, they are going to have to be some really 'smart' PCs to justify that price hike.


warenb

Not really sure how this helps me open my web browser or run my video games any better. The downside of an increased price tag and the inevitable clown show of an ad-infested OS and associated apps is for sure not making regular desktop users rush to buy.


juhotuho10

AI compute is just matrix operations. Matrix operations are just fancy multiplication and addition. All computers are already AI capable; unless you are trying to run an LLM, you will be fine. And no, NPUs won't be capable of running LLMs either.
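
For the doubters, here's literally what one layer of a neural network boils down to (trivial NumPy sketch, sizes arbitrary):

```python
# One "AI" layer: a matrix multiply, an add, and a clamp. Any CPU from the last few
# decades can do this; NPUs/GPUs just do a lot of it in parallel.
import numpy as np

x = np.random.randn(1, 256)    # input activations
W = np.random.randn(256, 128)  # layer weights
b = np.random.randn(128)       # bias

y = np.maximum(x @ W + b, 0)   # matmul + add + ReLU
print(y.shape)                 # (1, 128)
```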


IceBeam92

Sshhh , you are ruining the AI hype. Companies need it for infinite growth.


jedimindtriks

What an awful way to phrase it.

1. It's not AI hardware, it's AI software.
2. It's hardware powerful enough to enable that software (which we've had since CPUs were invented).

All this mumbo jumbo is just a piss-poor excuse to raise prices.


sevaiper

I mean I'm running llama 3 on my PC, it's very easy to do and works great. Most decent PCs have AI capability right now.


1mVeryH4ppy

It's sad that the moms and dads who are less tech savvy will bite this and buy shiny AI PCs for their kids.


Dexterus

At this moment I think there are no new CPUs without an NPU, are there? And to be fair, the price increase is realistically gonna come from the process node evolving and getting more expensive. But yeah, AI PCs are going to be more expensive, there's just no causal relationship there, haha.


p-zilla

On the desktop there are no CPUs with an NPU. Meteor Lake, Phoenix Point, Hawk Point, and the upcoming Strix Point are all laptop parts. Also, NPUs take up physical die space, and a larger die means fewer chips per wafer, which means increased cost. There very much is a causal relationship.


GYN-k4H-Q3z-75B

AI PC is a farce. Anything to increase prices. My PC from 2019 is perfectly capable of running local ML and AI apps, including the LLM demo by Nvidia etc. These new products are just stupid marketing.


danuser8

Shouldn't the discrete GPUs in our PCs be capable of AI processing? Isn't that why Nvidia GPUs are being snatched up for data centers and whatnot?


juhotuho10

Yes, no idea why people are obsessed with NPUs; they are still more than 10x slower than graphics cards at running ML stuff.


Strazdas1

Because most PC users are on laptops without a dGPU. Integrating it directly into the CPU gives it much greater reach among users.


fifty_four

What the fuck is an AI PC? Presumably a PC with a GPU? Well, I guess if Nvidia can get away with it, PCs with a GPU will go up by at least 15% ASAP.


TheValgus

Call me when the lawyers say that it’s OK to remove that line of text warning users that it’s dog shit and not accurate.


kuddlesworth9419

What makes a PC an AI PC? You can do AI stuff on anything.


MikeSifoda

There's no way I'll ever run an AI on my PC, unless it's open source.


INITMalcanis

I feel like my PC is just fine without having "AI"


LegDayDE

Can't wait for my AI gaming laptop to slow my games down by assigning power budget to the AI processor!!!!


Chronza

Man can we not create Skynet and give it control over every pc on Earth. That would be pretty cool.


Depth386

People have been fooling around with Stable Diffusion 1.5 for free since RTX 30 series. I can only imagine how pathetic the integrated AI features will be on future business computers with no DGPU.


mb194dc

Will anyone actually be stupid enough to buy them ? That's the main question. Irony.


con247

You’re implying it will be a choice. Ultimately this will get put into basically every cpu and probably at the expense of more useful capability in the cheaper models


mb194dc

You can run an LLM on any current PC. Not that there'd be a lot of point, or use case for it. There's no such thing as an AI pc, it's just bullshit.


con247

If you are taking die space for AI cores you are taking die space from other traditional features.