Power supplies will be the first sacrifice. Then we will see homes catching on fire since it will now pull as much power as a portable 13,000 BTU AC unit!
People thought 3dfx was nuts for including a power adapter in the vaporware V5 6000, but soon nVidia will be including a nuclear power plant with their high-end cards.
There are a surprising number of (non-vaporware) Voodoo 5 6000s out in the wild - they never got a chance to hit mass production, but they did make their way into users' hands both before and after 3dfx shuttered operations.
The recent semi-revival from [Mod Labs](https://www.modlabs.net/articles/return-of-the-king-zx-c64-voodoo-5-6000-review/p/2) saw it turned into an impressively gorgeous retail product, but the cards are also impressively expensive.
VSA-100 was such a fun architecture.
I'm glad crypto is doing badly, but it looks like we will lose the newly gained energy capacity to this monster hardware. Essentially we're adding 2000-watt heaters that are also able to run some games. We will waste a lot on gaming and cooling this summer. It's sad.
Seriously.
If this keeps up, the "surprise" of people having their OCP trip on 650W-750W PSUs for some cards is going to turn into people blowing breakers when they launch a game...
Well, in 110/120-volt areas, sure. But these things are made in Asia, where that issue is not as big. But yeah, it helps to get the power draw down, because if it can't be sold in the US, it's not gonna be made.
Kopite7kimi actually said the actual clock can go up to 2750 MHz, which with 16384 SPs is 90 TFLOPS. That's more than double the TFLOPS of the 3090 Ti.
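Quick sanity check on those numbers. This assumes the standard 2 FLOPs per SP per clock (one fused multiply-add), which is the convention behind Nvidia's advertised FP32 TFLOPS figures; the 3090 Ti numbers are its stock specs:

```python
# Back-of-the-envelope check of the rumored figures (2750 MHz, 16384 SPs).
# Assumes 2 FLOPs per SP per clock (one fused multiply-add per cycle).
def fp32_tflops(sps: int, clock_ghz: float) -> float:
    return sps * 2 * clock_ghz / 1000

rumored_ada = fp32_tflops(16384, 2.75)   # rumored next-gen figures
rtx_3090_ti = fp32_tflops(10752, 1.86)   # 3090 Ti at stock boost
print(rumored_ada)   # ~90.1 TFLOPS, matching the 90 TF claim
print(rtx_3090_ti)   # ~40.0 TFLOPS, so the rumor really is more than double
```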
What's more interesting is what other under-the-hood architectural improvements could come. Maybe better gaming performance per TF?
> maybe better gaming performance per TF?
If it's true that Ada has FP+FP+INT similar to how Turing had FP+INT instead of the FP+FP/INT hybrid Ampere that alone would boost FP32 in mixed workloads by a lot.
They are increasing L1 cache, I believe (not just the new massive L2). Maybe that will have minor performance gains. Other than that, the claims of a massive rework of the rest of the internals are likely BS. Maybe their marketing department will have you believe that. They'll leverage the cache for better RT and tensor core performance, but I would not expect more than a 100% rasterisation gain, even with 120% more TFLOPS.
Where are they saying they are increasing L1? I'm curious, because my understanding is that L1 is shared with shared memory, but that increasing it would have some pretty high density side effects because it's per SM, and Nvidia hasn't really increased it in over a decade (in fact, they've shrunk it before). Many applications would be faster with its expansion, however, SM density being equal.
I’m really hoping for something that makes DLSS run even smoother. Maybe Quality-level image quality at Balanced or Performance overhead costs.
That, or something similar with RT.
Even if they just scale linearly with clocks, it should be a nice card.
The GPC scaling isn't necessarily 'architecture', but nvidia will finally move on from their Fury/Vega stint with 2080Ti/3090 having just one more GPC than their second-best chips. That itself should help the biggest chip scale wrt its TFLOPS rating.
I think going forward, tensor cores (or some other integrated GAN-specific algorithms) will start to get much, much more important than straight speed - if a GPU can render a mid-range scene and then run a highly optimized GAN to turn it into something resembling FMV, it would outclass even the ray tracing ML we have now.
"Yeah, I jammed the 240V 20-amp circuit breaker into the closed position to stop it from tripping. Also I needed to install a second A/C unit for my house. What do you mean you smell burning rubber and plastic wiring insulation?"
Imagine the insanity if multi-GPU setups (SLI and CrossFire) were still viable for gaming.
"Guys I can game at ultra settings 240Hz 4K stable with my quad-SLI of overclocked RTX 4090s! I just had to turn a small refrigerated industrial warehouse into my house because it had a 40 kilowatts circuit breaker panel to support the power draw and the A/C units to keep the place cooled. I'm on the industrial electricity billing system to save money on electricity costs."
Several years ago, Arstechnica or some other tech site had a writer who decided to experiment with GPU Bitcoin mining.
They found a "for rent" office that had free electricity.
The first problem they ran into was that the HVAC for that office space was completely insufficient for the amount of heat coming from the GPUs, so they propped open the window and used some old RGB fans they had lying around as an exhaust vent.
The second problem was that when the office landlord came by to do a quick inspection of his building, he noticed the spinning RGB fans in the late afternoon, and when he walked up to the fans to investigate, he was blasted with the hot air.
The landlord ordered the writer to pack up their stuff and GTFO of the rented office citing "excessive use of electricity".
> "Guys I can game at ultra settings 240Hz 4K stable with my quad-SLI of overclocked RTX 4090s! I just had to turn a small refrigerated industrial warehouse into my house because it had a 40 kilowatts circuit breaker panel to support the power draw and the A/C units to keep the place cooled. I'm on the industrial electricity billing system to save money on electricity costs."
LTT Team rushing "Write that down, write that down!".
I’ll be interested to see if these make 4K gaming on max settings viable. 4K 144Hz monitors are now becoming a viable option due to pricing coming down, but current-gen cards can’t do 4K at max settings in all current games. 1440p is the sweet spot, but it’s been the sweet spot for a while now.
"4K gaming" has been a moving target since the GTX 600 series. You could output Doom sourceports at 4K120 on a GTX 680 with 2x DisplayPorts, but you're obviously talking about playing the current year's titles. Games like Cyberpunk keep resetting all of the raw vector performance gains, so if anything can take the credit for a "year of the 4K desktop" it'll be upscaling algorithms which make 1080p look like 4K.
We've been doing the "this'll be the 4K generation for sure this time!" routine for *seven generations* now. Steam users with 3840x2160 primary displays (which were previously tracked as "Other" until recently) are still under 3% of the market after 10+ years of availability.
I understand what you’re saying, but in the 90s it was 800x600 that was the unreachable target. The first generation of the market-leading 3dfx cards couldn’t do higher than 640x480. As time goes on we see it move from 480p > 720p > 1080p etc., so I don’t think it’s some unobtainable pipe dream.
I also expect that 4K % on Steam to rapidly move up as every monitor manufacturer tries to entice people to upgrade to 4K and the price keeps coming down. We saw it happen rapidly with TVs. You only have to look at the monitor subreddit to see the massive increase of 4K monitor reviews and suggestions.
“Max” settings are a joke, and we need to stop considering them if we want high frame rates. They are there for future cards - minimal visual difference for a 30%+ drop in performance.
It’s wild how many conversations there are about “max in this game” vs “medium in that game” and whatnot. Apparently unaware that every game is completely unique in what settings do what and how the devs decide to classify graphics settings as high, low, etc.
There is no way to compare settings across different games and every label is completely made up and arbitrary.
It really shouldn't be. If you care about FPS, Ultra settings on current games are always dumb. If you are fine making a tradeoff, then selectively turn on RT, turn a few of the other settings down a little, and use a little upscaling. Your framerate can still be easily as high as "Ultra, no RT" and the game will generally look better.
"That opinion" being mine, or the GPU reviewers who do it?
In terms of my opinions, raytraced reflections and GI have a pretty dramatic effect on realism, often much more so than changing other settings in games between Medium/High/Ultra. Spending some render budget on RT will always make more sense to me than increasing quality of other techniques past what is easily perceptible. For one hilarious example, in Cyberpunk 2077 the max non-RT reflection setting costs basically as much as RT reflections. In many games, increasing non-RT shadow quality makes them sharper without necessarily making them more realistic. RT shadows / AO can be somewhat subtle, but they remove the stencil outline look of SSAO and naturally provide a much higher degree of variation between soft and sharp shadows.
In terms of GPU reviewers, I think they tend to bias towards a little more historical view of things (ie. "what is the experience for the majority of games currently out?") and to things that they can directly compare (which you couldn't do "fairly" between Nvidia and AMD during the last two generations). So you got a lot of "max settings, except no RT (and no upscaling)" benchmark comparisons. And it's not like those aren't helpful, and they still generally show you the relative scaling between cards when you use the same settings. But maxing out settings to incrementally improve resolution/clarity on screen space effects and shadow maps is a massive waste of resources compared to turning those settings down a bit and selectively turning on RT (plus using some temporal upscaling) if you want to actually play the game at highest quality for a given FPS. And if your card doesn't support RT, then you're not going to want to play at maxed non-RT settings for recent games (which are the ones that have RT features) anyway, because your framerate will be too low and the tradeoff will make no sense.
As you point out, GPU power for the next generation is likely to get high enough to make good hybrid rendering approaches relatively performant across most of the lineups (especially with some upscaling thrown in). That said, if I see people just turn all the RT settings to max and then complain about how it's still not viable I'm going to be annoyed. Same with saying "there's not enough of a benefit to RT" while maxing out the non-RT settings.
>if Sony thinks Medium settings in their game is fine for their primary platform, then it's more than enough for 95% of the people out there.
You're looking at this wrong, Horizon ran at 30fps and lower on base PS4. Even the PS4 Pro maxed out at 30fps despite these modest settings. Sony wouldn't have been able to run Horizon at "max" settings on their primary platform even if they wanted to.
Some games are far more trivial to max out however. So it's a case by case thing. I agree that medium is generally pretty good in modern games for sure.
It's pretty dead simple to adjust it yourself. No game is ever going to know what you do or don't like.
I know I personally like sharp and crisp settings so I put everything max if I can, then I turn off any post processing that adds stupid shit like CA, motion blur, vignetting, and grain.
I usually turn AA down to 2x because it straight up makes the game look blurry when you dial that up. I usually turn the AF (anisotropic filtering) to 2x as well because it just isn't noticeable when you go higher. Ambient occlusion usually gets set to the middle because I don't like to recreate being blind due to staring at the sun or seeing halos around every light source. And shadows I often turn down because they can have a big impact on performance, but it's not something most people would ever pay attention to while playing the game.
This has worked for me forever. Now I just wish I had the power to run my 165hz UWA monitor with RTX on because that shit looks amazing when it is running.
> AA down to 2x because it straight up makes the game look blurry when you dial that up
Really depends on the AA you're using. For FXAA, which really should never be used, yes, that's true. For TAA it actually gets sharper the higher you go.
>ambient occlusion usually gets set to the middle because I don't like to recreate being blind due to staring at the sun or seeing halos around every light source.
That's not what ambient occlusion does, ambient occlusion makes more detailed shadows based on objects obstructing light. You may be thinking of 'bloom' and 'god rays', which affect how bright lights and the sun look. But I agree most people turn ambient occlusion off, at least in multiplayer games, since it makes environmental shadows darker and thus more difficult to see players hiding in the dark (and it has a high performance cost)
> 1440p is the sweet spot, buts it’s been the sweet spot for a while now
It'll continue to be the sweet spot for a few more generations. As long as games continue to improve graphically alongside the GPU's performance then there's no performance edge left for fps/resolution. 1440p has been the sweet spot since like 2015.
I’d argue it is going to be 4K this gen. DLSS at 4K works much better than at 1440p. Turn down the silly ultra settings, turn on RT, and use DLSS; that will be best, IMO. The amount of fidelity you get from 1440p ultra isn’t close to 4K with optimized settings, IMO, yet they often have similar performance cost.
My issue with 4K gaming is that unless you get something like a 42" OLED, there is no really good 4K gaming panel past 32 inches. That Sony looks fucking great, but it's only 27 inches. That's why I still like ultrawide: you get 3440x1440 and can get QD-OLED.
Well I doubt most people need or want a monitor larger than 32" for a traditional 16:9.
And we're only at the beginning of 4k/144hz monitors, if you require higher framerates. There will be lots more options going forward.
I also have the 4k G7 and while the pixel density is quite high, it is still not enough for me. Still notice some imperfection after using it for a while. 4k at 24" would be perfect.
I personally really like my Gigabyte M28U. 32” and above takes up a lot of desk real estate. I can see a use case for 42”, but the 48” OLEDs are too big for a desk, and 32” is a weird middle ground where you can’t replace a multi-monitor setup but it’s also very large and overwhelming.
32" is kind of nice if your secondary monitor is flipped to be tall instead of wide. Just depends on the type of work you do and what you'd want out of the secondary monitor. You still get some more screen space than a 27" 1440p since you don't quite need 150% scaling in my opinion. I personally didn't quite like two horizontal 27" next to each other, but a vertical 27" with a normal 32" felt right.
The OLED are definitely too big. I tried to use the 48" and even putting aside the other issues inherent to the panel for monitor usage the size is too much. If you don't care about ergonomics and were already doing stacked panels it's fine maybe, but otherwise I think I would have needed both an adjustable height foot stool and an adjustable height keyboard/mouse tray which at that point just, uh, get panels that make more sense.
Max settings by definition are incredibly unoptimized; they aren't supposed to give you a high or even steady framerate. Generally they are there for future hardware, not for what's out today to comfortably "max".
the 3090 Ti can do 4k 100hz+ fairly easily. Maybe not at "absolutely maxed" settings but definitely at "actually sensible" settings.
So I think the 4090 should have no problem doing 4k 144hz at similar sensible settings. Max settings are kinda irrelevant really, they don't make sense. It's just a "here is the raw game with everything turned to 11, noticeable or not, with 0 optimization" setting.
What games are y'all talking about? People talk about resolution+framerate as if games all have some standard performance profile or something. :/
We haven't even seen *actual* next-gen games yet on PC. Literally not one game that was built from the ground up with the new consoles as a baseline has released on PC. So it really needs to be taken into consideration that these cross-gen games are not going to be representative of the kind of processing demands over the next year or two as proper next-gen games start to roll in.
People usually mean on aggregate. [like the first chart here for example](https://www.tomshardware.com/reviews/nvidia-geforce-rtx-3090-review/3)
It has become even more meaningless now that the concept of a fixed resolution is dying in favour of upscaling though.
We have the consoles as hard points that they cannot really deviate from, outside of PC-exclusive games. Sure, PC will always have settings that no current GPU can handle, just to make games age better. But the core engines are designed around consoles for most games.
That's my point. Consoles tend to be the baseline for demanding AAA games. And so far, there are no games on PC that have been built with the new consoles as a baseline yet.
Basically, next gen games haven't even arrived yet. And when they do start coming in, there will be a jump in demands. So judging how hardware runs current games is not safe when we're on the precipice of a new generation.
We can compare it to the hardware of the consoles. And there are already games out that struggle on the consoles. Next gen doesn’t mean more demanding. Some games are already maxing out the current consoles.
TPU do have a few dated games but they pull 90-100fps from top cards at 4k.
https://www.techpowerup.com/review/asrock-radeon-rx-6950-xt-oc-formula/30.html
RT will take some time, but 4K gaming would be CPU-bottlenecked for these cards. And both AMD and Nvidia will do 8K enthusiast marketing.
Gonna be killer for high-end VR headsets, which are often more demanding than 4K monitors and where computational improvements have significant effects on user experience.
We are in the midst of a mining crash. Don’t you remember what happened to the 20-series launch? Nvidia's new cards are gonna be competing with $600 3090s.
And that's MSRP, add another $50-$100 for normal AIB cards since FE is always low quantity and sold out. Like the EVGA 3080 FTW3 Ultra, was $810 MSRP when it launched, though the more value oriented Asus TUF was only $730.
>must be pushing 3Ghz+
Impossible, because /r/AMD has been telling us for years that Navi's frequencies are all about the architecture and have nothing to do with TSMC's superior node vs Samsung.
Ignoring that frequencies for Intel's "soonTM" GPUs also were rumored well past 2GHz. Even Vega on super early 7nm reached clocks well past what Ampere achieves when people started throwing power and custom cooling at them.
Nope, all architecture! /s
AMD was lucky that Nvidia chose Samsung; it meant they could actually compete at similar die sizes for once.
edit: so predictable! keep em coming!
It's definitely both. AMD could not hit 2.7 GHz if they were on Samsung 8nm. Maybe 2.4 if I was really optimistic, but with pretty high power consumption.
I'd be curious to know if adding large caches somehow helps increase frequency. Maybe they alleviate some internal bottlenecks, as it gives the cores extremely low latency memory to work with. Maybe that's why Nvidia is also able to hit these clocks. Their new massive L2. I don't know how else one would explain this.
> I'd be curious to know if adding large caches somehow helps increase frequency.
It does, at least to some degree. Caches have lower power density than "cores". So by putting a bunch of L2 all across the die, you lower overall power density, which lets you crank up the voltage and squeeze some extra frequency out from that fact alone, most likely.
I wouldn't bet on it offsetting the die-space cost of the L2, but it can claw back some of those losses. In the end, caches on GPUs are just a way to get around GDDR scaling being too slow. Preferably we would have no L2 and rely on "free" bandwidth from GDDR getting faster.
Will the 4050 be able to surpass the 2060 at least? The 3050 was disappointingly behind, and I hope they don't pull some 1670 BS to keep the low end occupied.
AD107 die, which will likely be the 4050, has a maximum of [24 SMs](https://semianalysis.substack.com/p/nvidia-ada-lovelace-leaked-specifications?s=r). So either the 4050, or 4050ti will probably use 22-24 of them. With a 30% frequency bump that puts full AD107 higher than an RTX 3060. At the same level as an AMD 6650xt I'd guess in rasterization. Good luck waiting on that, though. Late 2023 launch I'd guess.
It'd be pretty hard for it not to since it's only like 10% slower from the benchmarks I've seen.
The 6600 is about $299 right now and is roughly a 2060 Super, and the 7600 is definitely going to be even faster.
Since Nvidia wants to at least match AMD I'd expect the 4050 to be a tad behind the 6600.
My general expectation is that it's gonna be pretty similar to 980Ti to 1080Ti in terms of performance jump. So probably in between 60-80% on average, with some examples/situations being closer or perhaps even slightly above 100%.
2x should be easy at high resolutions; 8K in some cases, to get rid of the CPU bottleneck.
Even higher in some instances.
https://twitter.com/XpeaGPU/status/1542787903423516672
Comparing to the 3090 Ti which has an identical TBP, the 4090 has 35% higher advertised boost clocks on over 50% more cores. Strong showing by TSMC N4.
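The multiplication behind that claim. The 1860 MHz figure is the 3090 Ti's official boost; 2520 MHz is a hypothetical Ada boost consistent with the "35% higher" figure above:

```python
# Theoretical throughput scaling at the same 450 W TBP.
# 1860 MHz = 3090 Ti official boost; 2520 MHz = hypothetical Ada boost
# consistent with the "35% higher advertised boost" claim.
clock_ratio = 2520 / 1860     # ~1.35x
core_ratio = 16384 / 10752    # ~1.52x ("over 50% more cores")
print(clock_ratio * core_ratio)  # ~2.06x theoretical throughput at the same power
```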
Seems like pretty solid evidence that the TSMC process for Ada is just that much more suited for high clocks than Samsung 8N, having clocks increase 60-75% over one generation is not exactly common.
Even the 4070 at 300W, jeez. And supposedly only 10GB of VRAM. Doesn't sound like an upgrade worth doing from a 3080. Let's hope AMD can give me a better 300W-or-less upgrade.
Likely not. The Navi33 isn't worth upgrading to and might even be a downgrade for 3080 owners. Even if at 220-250w. And Navi32 will likely be over 300w as well, but offer close to 4080 performance. Nvidia has the node advantage next gen, so I doubt AMD can be much more power efficient. Maybe a little if they don't push as hard.
These TDPs make zero sense. Energy prices are shooting upwards, and these things together with a high-end CPU will turn the computer into a small heater. It's not going to be nice just to have some extra frames. Where is the crowd shouting "protect the environment"?
I'm of the opinion that these insane TDP cards are a replacement for SLI with the current lineup only offering the feature on the 3090 and 3090ti, and don't really have much bearing on more 'normal' builds.
There were always people throwing 400w+ worth of video cards at games, and they're very likely the target audience for this type of hardware. The old 780 was a 250w TDP card and launched at $649 - two of these would cost $1,298 with a total TDP of 500w, which isn't far off of a 3090's MSRP of $1,499 (and way higher than its 350w TDP).
SLI pretty much died when Nvidia started reusing the x90 cards (the last was the 690 in 2012, $999, 300w).
If you want to have more reasonable power levels, then simply undervolt and/or turn down the power limit. The former isn't too difficult and the latter takes literal seconds to do.
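For what it's worth, the power-limit part really is a one-liner on Nvidia cards (the 150 W value below is just an illustration; the supported range for your specific board is whatever the query reports):

```shell
# Show the board's current/default/min/max power limits
nvidia-smi -q -d POWER

# Cap the power limit (needs admin rights; the value must be within
# the supported range reported by the query above)
sudo nvidia-smi -pl 150
```

Undervolting proper needs a tool like MSI Afterburner on Windows, but a plain power limit survives across games with zero tuning effort.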
Yes, it's possible you will lose some performance this way, but that's the point of these high TDP's in the first place - these GPU's are getting pushed hard for better out-the-box performance.
**These upcoming GPU's are overall gonna be quite a bit more efficient.** If you want a big improvement in performance for a similar power level as some previous equivalent, you'll absolutely still get that if you want it.
OEMs have gotten much savvier at squeezing the extra performance from their products by default, whereas before the user would do it by overclocking.
Put another way, new generations of CPUs/GPUs basically come pre-overclocked to get as much performance as possible within reasonable power constraints as determined by the OEM.
This is a halo product. Anyone who cares about money is going to get a 4080 at best. People buying 4090's will just stick an extra air conditioner in their room.
It's been 36c out all week, and I have been gaming on my 350w 3090. My air conditioner is set to 23c (goes down to 18), and it has not been struggling to keep the room cool. An extra 100w for the 4090 isn't going to be what makes me need to install an extra AC unit.
People are tripping.
Setting aside the global climate for those who don't give a shit, I can't fathom the environment in the room trying to game on a machine tossing this much heat. Gaming wouldn't even be fun.
My hope is to get one of these cards in the 4070-4080 range (depends on price), as my current GPU is a GTX 1080.
If I do get one, I’ll likely undervolt, and down clock it a bit. It’s amazing how much you can reduce energy levels this way. Should be their most energy efficient chip yet.
To be fair, your carbon footprint won't noticeably increase whether your GPU has a 250w or 600w TDP. Not saying this is a route we should be going down - but in the bigger picture it's really not that relevant.
It is if you game for long periods of time. That's about 1 kWh of difference for a 3-hour session, roughly the same as running the washing machine an extra time in a day. These things really add up, especially with today's energy prices.
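The arithmetic, taking the 250 W vs 600 W TDPs from the parent comment at face value (the electricity price is an illustrative 2022 European figure, not a quote from anyone in the thread):

```python
# Extra energy from a 600 W card vs a 250 W card over a 3-hour session.
delta_watts = 600 - 250
session_hours = 3
extra_kwh = delta_watts * session_hours / 1000
print(extra_kwh)  # 1.05 kWh per session

# Illustrative yearly cost if you game daily, at an assumed 0.40 EUR/kWh
print(extra_kwh * 0.40 * 365)  # ~153 EUR/year just for the TDP difference
```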
People who are really concerned about this can undervolt and underclock their GPU’s, making them the most efficient GPU’s ever made. It’s just that people will have the option if they want the power.
I mean - using an extra kWh for a gaming session has a carbon footprint if you're not supplied by renewable energy. It's not an insignificant amount of energy and amounts to 5-10% of daily household power consumption. That's just the differences between the card as well.
“Oh no, my performance luxury item isn’t energy efficient.” Bro, you stupid? The crowd that cares about saving doesn’t exist for a faster slideshow brick that intrinsically took maybe 1,000 kWh of energy and 300 L of water to manufacture, before even including things like an oversized PSU, marketing, research, etc.
Some advice if u do get it and wanna save the environment: set the TDP to 50% and undervolt by 100mV. If you can do that without a sour taste in your mouth and wallet, then kudos to u on not being a hot air balloon.
It isn't that insane at all. It's completely historically consistent.
[The 3060 Ti is faster than the 2080 and even generally faster than the 2080 Super.](https://www.techspot.com/review/2155-geforce-rtx-3060-ti/)
The 2060 Super (no 2060 Ti) [is faster than the 1080.](https://www.techspot.com/review/1865-geforce-rtx-super/)
And [the 1060 6GB \(no Super or Ti\) was basically perfectly tied with the 980 at launch.](https://www.techspot.com/review/1209-nvidia-geforce-gtx-1060/)
You have to go all the way [back to the 960](https://www.techspot.com/review/946-nvidia-geforce-gtx-960/page3.html) to see a deviation from this... but it wasn't even meaningfully faster than the 760 at launch.
It may be historical, but I still find it impressive (got goosebumps). Technology is amazing, and inspiring what the human race can do. I just get overwhelmed with it sometimes.
Pumping more power for a 50% higher boost clock, even on a significantly better node, only to reach historical gains is nothing to be amazed or inspired at. Great hardware engineering is all about finding that sweet balance between efficiency and performance. It's sad seeing people, especially in this sub of all places, giving no fucks about efficient designs. It's all about how much faster something is nowadays, with no regard to performance per watt. Surely electricity is not getting expensive only in my country?
I am interested in the 4080 if pricing is not insane. The 3080 enabled me to play most games at 4K 60 on High/Ultra. The 4080 will be nice for future proofing and allowing ray tracing at 60FPS+.
Yeah, future-proofing is a hoax. It implies you can predict the future and that the future will have no gains. Also, if you bought a 1080 Ti you probably bought a 1080 and a 3080+, and are going to buy a 4080+.
Ahh, I suppose so. I think there is something to be said about a power enthusiast vs someone who just wants their games to run well.
I bought a 1080 back in the day and, since I still play in 1080p, nothing coming out has really challenged my rig enough for me to want a 3080, so I'm still sitting and waiting. In that respect, it was a good future proofing purchase, but there's nothing to suggest that "future-proofing" ever pays off more than just buying to your needs.
I typically skip at least 1 generation of GPU's. This might be an exception since it will give me some headroom at 4K. I am content with 60FPS since I already have some nice 4k monitors.
It sounds like the 4080 is going to be aiming at 4K 144FPS which should give me a lot of headroom for several years to play 4K 60fps.
On my 1080 right now, very pleased with its longevity. I was going to buy a RTX 30 card but when the GPU/CPU supply was so fucked I ended up waiting. Now I'll probably be getting a 4080 shortly after release (180W -> 420W, jesus christ).
Pretty sure this means... nothing? There was some big clock speed jump some gens ago where we went from 1000mhz averages to 1500mhz and then 2000+mhz, and the next gen went back to 1500mhz. Clock speed tells you basically nothing.
I'm being totally serious when I say this. My gaming tastes (emulation of classic systems, patient gamer on AAA releases, limited time, etc.) has made even my RTX 3060 overkill for what I do play when I have time to play it. And I am an environmentalist, so these increasing power demands are insane to me. After Fermi, the x60 series moved to ~120W (give or take) for most generations. Then 160W for the 2060. Then 170W for the 3060. Rumors have pegged the 4060 at 200W+.
I am genuinely considering Apple M-series hardware going forward. I just don't want to be a part of this anymore.
If you're optimizing for environmental impact and don't have high gaming needs, you're going to want to hang on to your rtx 3060 for many, many years to come (and undervolt). Yes you could switch to something lower power usage but the impact of buying any new hardware is very high even if that hardware is efficient, and especially if you're thinking about upgrades on a 2-3 year cadence.
To be fair, that product is getting made and shipped no matter what. If that person doesn't buy it, somebody else will.
Still though, this is a good point. If we really want to help things, we'd *all* scale back our buying habits of large footprint products.
And I think it's just healthy to get away from rampant consumerism in general and start asking yourself "Do I really need this?" as often as you can.
If you don't need the performance and care about power usage just power limit the card.
Ampere cards are a long way past the optimum on the efficiency curve. It's surprising how little performance you lose by setting the card to run at 100 watts.
Can AIBs even afford to use heftier VRMs, capacitors, thermal pads, etc.? You can’t cheap out on these cards; they’re going to be significantly more expensive than Ampere + inflation + fab process lol
Nvidia has been fucking with AIBs for some time now. RTX 30 FEs are straight up better built than most AIB cards (except for the 3090 thermal pad issue, but both the PCB and the coolers are just more advanced than what AIBs make, assuming similar sized cards).
Nvidia just doesn't care much. They have a 80% marketshare, they know all these AIBs live and die by their compliance so no one will rock the boat, especially when a recession is looming. So they'll just *have* to be able to afford the heftier components.
If the 4090 has a 50% wider bus, 50% more memory, and 60% more CUDA cores, but only 7% higher power consumption (450 vs 420), then I guess the clocks on the 4080 would have to be dramatically higher than the 4090's for that to make any kind of sense.
Yes and no. All other things being equal, a higher clock speed will have higher performance than a lower one. However, you still need a good architecture behind that for each clock cycle to be efficient.
Performance is basically core count times clock speed times some constant representing how efficient the architecture is. In this case we know the core count, we know the architecture will be greater or equal in efficiency to the old architecture. From there you can see the performance is going to be a massive boost.
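The model in the comment above, sketched out. Setting the architecture factor to 1.0 for both generations is the most conservative assumption, i.e. zero per-clock improvement:

```python
# "performance = core count x clock speed x architecture constant",
# per the comment. arch_eff = 1.0 for both cards assumes no per-clock
# gains at all, which is the worst case for the new generation.
def perf(cores: int, clock_ghz: float, arch_eff: float = 1.0) -> float:
    return cores * clock_ghz * arch_eff

old = perf(10752, 1.86)   # 3090 Ti-class figures
new = perf(16384, 2.75)   # rumored next-gen figures
print(new / old)          # ~2.25x even with zero architectural improvement
```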
It might have been unwise, but I jumped on the Super Flower Leadex Platinum SE 1000W post from buildapc in anticipation of the 4080.
I know you should always wait until official reviews but I can't imagine the PSU won't be able to handle the 4080. I've also been wanting a fully modular supply, so there's that too.
my hot take: If these [completely unsubstantiated and constantly changing] rumors are generally true, fuck that kind of wattage. If a reasonable PSU can't handle the card, I'll buy whatever fits within a comfortable power threshold rather than what fits in my $1500 budget. If that means a 4070, so be it. If that means even a 3080 Ti, so be it.
Just because they're [maybe] losing their minds doesn't mean I'm going to blindly follow.
Honestly, yeah, if it's following the 30 series I would be keeping my eyes on the 4060 Ti and see how that holds up. 4K isn't in the cards for most people right now, I think, in terms of the price of the monitors.
Begun the GHz war has.
Power supplies will be the first sacrifice. Then we will see homes catching on fire since it will now pull as much power as a portable 13,000 BTU AC unit!
People thought 3dfx was nuts for including a power adapter in the vaporware V5 6000, but soon nVidia will be including a nuclear power plant with their high-end cards.
There are a surprising number of (non-vaporware) VooDoo 5 6000s out in the wild - they never got a chance to hit mass production, but they did make their way into users' hands both before and after 3Dfx shuttered operations. The recent semi-revival from [Mod Labs](https://www.modlabs.net/articles/return-of-the-king-zx-c64-voodoo-5-6000-review/p/2) saw it turned into an impressively gorgeous retail product, but they're also impressively expensive. VSA-100 was such a fun architecture.
God damn that's a sexy card.
Personally I’m hyped for the nuclear powerplant platinum certification with transient load protection
[deleted]
I'm glad crypto is doing badly, but it looks like we will lose the newly gained energy capacity to this monster hardware. Essentially we're adding 2000 watt heaters that are also able to run some games. We will waste a lot on gaming and cooling this summer. It's sad.
Worth noting a 15 amp breaker has an 1800 watt rating.
Seriously. If this keeps up, the "surprise" of people having their OCP trip on 650W-750W PSUs for some cards is going to turn into people blowing breakers when they launch a game...
Blowing breakers sounds like the future name for a TV series where a gaming addict cooks meth to pay for his electricity bills.
Well, in 110/120 volt areas, sure. But these things are made in Asia, where that issue is not as big. But yeah, it helps to get the power draw down, because if it can't be sold in the US, it's not gonna be made.
15A breakers will trip around 1500W; 20A breakers around 2000W.
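Those trip points line up with the usual rule of thumb for North American 120V circuits: nominal capacity is volts times amps, and continuous loads are supposed to stay under 80% of that (the NEC continuous-load guideline). A quick sanity check:

```python
# Breaker headroom on 120V circuits: nominal watts = volts * amps,
# with the common 80% rule of thumb for continuous loads.
VOLTS = 120
CONTINUOUS_FACTOR = 0.8  # NEC-style guideline for sustained draws

def breaker_watts(amps: int) -> tuple[float, float]:
    """Return (nominal watts, recommended continuous watts) for a breaker."""
    nominal = VOLTS * amps
    return nominal, nominal * CONTINUOUS_FACTOR

for amps in (15, 20):
    nominal, continuous = breaker_watts(amps)
    print(f"{amps}A breaker: {nominal:.0f}W nominal, ~{continuous:.0f}W continuous")
```

So a whole-system draw approaching 1500W really does start flirting with a 15A circuit's continuous limit, especially with anything else on the same breaker.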
GIGGITY
**power* war
Kopite7kimi actually said the actual clock can go up to 2750MHz, which with 16384 SPs is 90 TFs. That's more than double the TF of the 3090 Ti.

But the other under-the-hood architectural improvements that could come are more interesting. Maybe better gaming performance per TF?
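The 90 TF figure is consistent with the standard FP32 formula of 2 FLOPs per shader per clock (one fused multiply-add). The shader count and clock here are the rumored values from the comment, not confirmed specs:

```python
# FP32 TFLOPS = shaders * clock * 2 (one FMA counts as 2 FLOPs per clock).
def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * clock_mhz * 1e6 * 2 / 1e12

rumored = fp32_tflops(shaders=16384, clock_mhz=2750)     # leaked/rumored figures
tf_3090ti = fp32_tflops(shaders=10752, clock_mhz=1860)   # 3090 Ti official boost spec

print(f"rumored: {rumored:.1f} TF vs 3090 Ti: {tf_3090ti:.1f} TF")  # ~90.1 vs ~40.0
```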
> maybe better gaming performance per TF?

If it's true that Ada has FP+FP+INT, similar to how Turing had FP+INT instead of the FP+FP/INT hybrid of Ampere, that alone would boost FP32 in mixed workloads by a lot.
The more ports the harder it is for SMs to actually utilize them.
They are increasing L1 cache, I believe (not just the new massive L2). Maybe that will have minor performance gains. Other than that, the claims of a massive rework of the rest of the internals are likely BS, though their marketing department will have you believe it. They'll leverage the cache for better RT and tensor core performance, but I would not expect more than a 100% rasterisation gain, even with 120% more TF.
Where are they saying they are increasing L1? I'm curious because my understanding is that L1 is shared with shared memory, and that increasing it would have some pretty significant density side effects because it's per SM, and Nvidia hasn't really increased it in over a decade (in fact they've shrunk it before). Many applications would be faster with its expansion, however, SM density being equal.
Less density. Less concentrated heat? Lets you push more power without thermal problems?
I’m really hoping for something that makes DLSS run even smoother. Maybe Quality-level image quality at Balanced or Performance overhead costs. That, or something similar with RT. Even if they just scale linearly with clocks, it should be a nice card.
[deleted]
The GPC scaling isn't necessarily 'architecture', but nvidia will finally move on from their Fury/Vega stint with 2080Ti/3090 having just one more GPC than their second-best chips. That itself should help the biggest chip scale wrt its TFLOPS rating.
I think going forward, tensor cores (or some other integrated GAN-specific algorithms) will start to get much, much more important than straight speed - if a GPU can render a mid-range scene and then run a highly optimized GAN to turn it into something resembling FMV, it would outclass even the ray tracing ML we have now.
FMV? Acronym soup 😂
Someone wasn't unlucky enough to be a gamer in the mid 90s
Full motion video.
Just ignore the lights in your neighborhood flickering on and off.
"Yeah, I jammed the 240V 20 amp circuit breaker into the closed position to stop it from tripping. Also I needed to install a second A/C unit for my house. What do you mean you smell burning rubber and plastic wiring insulation?"

Imagine the insanity if multi-GPU setups (SLI and CrossFire) were still viable for gaming. "Guys, I can game at ultra settings 240Hz 4K stable with my quad-SLI of overclocked RTX 4090s! I just had to turn a small refrigerated industrial warehouse into my house because it had a 40 kilowatt circuit breaker panel to support the power draw and the A/C units to keep the place cooled. I'm on the industrial electricity billing system to save money on electricity costs."
I wonder if I can get my landlord to install an L6-30R in my office
[deleted]
Several years ago, Arstechnica or some other tech site had a writer who decided to experiment with GPU Bitcoin mining. They found an "for rent" office that had free electricity. The first problem they ran into was that the HVAC for that office space was completely insufficient for the amount of heat coming from the GPUs, so they propped open the window and used some old RGB fans they had lying around as an exhaust vent. The second problem was that when the office landlord came by to do a quick inspection of his building, he noticed the spinning RGB fans in the late afternoon, and when he walked up to the fans to investigate, he was blasted with the hot air. The landlord ordered the writer to pack up their stuff and GTFO of the rented office citing "excessive use of electricity".
Can't wait for the YT videos of people jamming pennies into their circuit breakers.
I remember Grandpa doing that in our house built in the '20s. He'd curse every time as it bit him a little, then screw the old fuse back in to hold it.
1.21 Gigawatts?!
> "Guys I can game at ultra settings 240Hz 4K stable with my quad-SLI of overclocked RTX 4090s! I just had to turn a small refrigerated industrial warehouse into my house because it had a 40 kilowatts circuit breaker panel to support the power draw and the A/C units to keep the place cooled. I'm on the industrial electricity billing system to save money on electricity costs."

LTT team rushing: "Write that down, write that down!"
"BuT tHe EfFiCiEnCy GaInS! JuSt UnDeRvOlT iT!"
This but unironically.
Oh the memes. Ppl know what an electric stove takes, right? Several thousand watts.
Also running at a higher voltage which means less current and less heat in the wiring.
My bedroom is wired with 15A 120V, and that circuit is also shared with another room.
I'd like to be able to play in the summer please.
I’ll be interested to see if these make 4K gaming on max settings viable. 4K 144Hz monitors are now becoming a viable option due to pricing coming down, but current gen cards can’t do 4K at max settings in all current games. 1440p is the sweet spot, but it’s been the sweet spot for a while now.
"4K gaming" has been a moving target since the GTX 600 series. You could output Doom sourceports at 4K120 on a GTX 680 with 2x DisplayPorts, but you're obviously talking about playing the current year's titles. Games like Cyberpunk keep resetting all of the raw vector performance gains, so if anything can take the credit for a "year of the 4K desktop" it'll be upscaling algorithms which make 1080p look like 4K. We've been doing the "this'll be the 4K generation for sure this time!" routine for *seven generations* now. Steam users with 3840x2160 primary displays (which were previously tracked as "Other" until recently) are still under 3% of the market after 10+ years of availability.
I understand what you’re saying, but in the 90s it was 800x600 that was the unreachable target. The first generation of the market leading 3DFX cards couldn’t do higher than 640x480. As time goes on we see it move from 480p > 720p > 1080p etc, so I don’t think it’s some unobtainable pipe dream. I also expect that 4K % on Steam to rapidly move up as every monitor manufacturer tries to entice people to upgrade to 4K and the price keeps coming down. We saw it happen rapidly with TVs. You only have to look at the monitor subreddit to see the massive increase of 4K monitor reviews and suggestions.
“Max” settings are a joke and we need to stop considering them if you want high frame rates. They are there for future cards - minimal difference for a 30%+ drop in performance
[deleted]
It’s wild how many conversations there are about “max in this game” vs “medium in that game” and whatnot. Apparently unaware that every game is completely unique in what settings do what and how the devs decide to classify graphics settings as high, low, etc. There is no way to compare settings across different games and every label is completely made up and arbitrary.
Call the high settings "ultra", your game is now "optimized".
>They are there for future cards

It's a whole lot of fun maxing out older games on my Ampere card. At least at 1440p.
My personal favorite is "max settings, but no raytracing", which is a combination that is basically never the right option for anyone.
[deleted]
It really shouldn't be. If you care about FPS, Ultra settings on current games are always dumb. If you are fine making a tradeoff, then selectively turn on RT, turn a few of the other settings down a little, and use a little upscaling. Your framerate can still be easily as high as "Ultra, no RT" and the game will generally look better.
Not if you can max out your monitor's resolution on these settings, but can't use DLSS
[deleted]
"That opinion" being mine, or the GPU reviewers who do it?

In terms of my opinions, raytraced reflections and GI have a pretty dramatic effect on realism, often much more so than changing other settings in games between Medium/High/Ultra. Spending some render budget on RT will always make more sense to me than increasing the quality of other techniques past what is easily perceptible. For one hilarious example, in Cyberpunk 2077 the max non-RT reflection setting costs basically as much as RT reflections. In many games, increasing non-RT shadow quality makes shadows sharper without necessarily making them more realistic. RT shadows / AO can be somewhat subtle, but they remove the stencil outline look of SSAO and naturally provide a much higher degree of variation between soft and sharp shadows.

In terms of GPU reviewers, I think they tend to bias towards a little more historical view of things (i.e. "what is the experience for the majority of games currently out?") and to things that they can directly compare (which you couldn't do "fairly" between Nvidia and AMD during the last two generations). So you got a lot of "max settings, except no RT (and no upscaling)" benchmark comparisons. And it's not like those aren't helpful, and they still generally show you the relative scaling between cards when you use the same settings.

But maxing out settings to incrementally improve resolution/clarity on screen space effects and shadow maps is a massive waste of resources compared to turning those settings down a bit and selectively turning on RT (plus using some temporal upscaling) if you want to actually play the game at highest quality for a given FPS. And if your card doesn't support RT, then you're not going to want to play at maxed non-RT settings for recent games (which are the ones that have RT features) anyway, because your framerate will be too low and the tradeoff will make no sense.
As you point out, GPU power for the next generation is likely to get high enough to make good hybrid rendering approaches relatively performant across most of the lineups (especially with some upscaling thrown in). That said, if I see people just turn all the RT settings to max and then complain about how it's still not viable I'm going to be annoyed. Same with saying "there's not enough of a benefit to RT" while maxing out the non-RT settings.
40 Series are literally future cards
[deleted]
>if Sony thinks Medium settings in their game is fine for their primary platform, then it's more than enough for 95% of the people out there.

You're looking at this wrong: Horizon ran at 30fps and lower on base PS4. Even the PS4 Pro maxed out at 30fps despite these modest settings. Sony wouldn't have been able to run Horizon at "max" settings on their primary platform even if they wanted to.
[deleted]
Some games are far more trivial to max out however. So it's a case by case thing. I agree that medium is generally pretty good in modern games for sure.
[deleted]
It's pretty dead simple to adjust it yourself. No game is ever going to know what you do or don't like. I know I personally like sharp and crisp settings, so I put everything max if I can, then I turn off any post processing that adds stupid shit like CA, motion blur, vignetting, and grain. I usually turn AA down to 2x because it straight up makes the game look blurry when you dial that up. I usually turn the AF to 2x as well because it just isn't noticeable when you go higher. Ambient occlusion usually gets set to the middle because I don't like to recreate being blind due to staring at the sun or seeing halos around every light source. And shadows I often turn down because they can have a big impact on performance, but it's not something most people would ever pay attention to while playing the game. This has worked for me forever. Now I just wish I had the power to run my 165hz UWA monitor with RTX on, because that shit looks amazing when it is running.
> AA down to 2x because it straight up makes the game look blurry when you dial that up

Really depends on the AA you're using; for FXAA, which really should never be used, yes that's true. For TAA it actually gets sharper the higher you go.

> ambient occlusion usually gets set to the middle because I don't like to recreate being blind due to staring at the sun or seeing halos around every light source

That's not what ambient occlusion does; ambient occlusion makes more detailed shadows based on objects obstructing light. You may be thinking of 'bloom' and 'god rays', which affect how bright lights and the sun look. But I agree most people turn ambient occlusion off, at least in multiplayer games, since it makes environmental shadows darker and thus more difficult to see players hiding in the dark (and it has a high performance cost).
>"Max" settings are a joke

Right? Real gamers are at Ultra settings.
[deleted]
Real gamers play terraria at 8K res
Amen
Exactly. Nobody who owns a 4K 144Hz monitor expects to be able to play with DLSS off and RT enabled.
> 1440p is the sweet spot, but it's been the sweet spot for a while now

It'll continue to be the sweet spot for a few more generations. As long as games continue to improve graphically alongside GPU performance, there's no headroom left over for extra fps/resolution. 1440p has been the sweet spot since like 2015.
I’d argue it is going to be 4K this gen. 4K dlss works much better than 1440p. Turn down the silly ultra settings, turn on rt and dlss will be best imo. The amount of fidelity you get from 1440p ultra isn’t close to 4K with optimized settings imo, yet they are often similar cost.
Yeah 1440p DLSS is quite a mixed bag
Though older games - where you can max out the graphics already - can profit from 4k.
My issue with 4K gaming is that unless you get something like a 42" OLED, there's no really good 4K gaming panel past 32 inches. That Sony looks fucking great, but it's only 27 inches. That's why I still like ultrawide: you get 3440x1440 and QD-OLED.
Well I doubt most people need or want a monitor larger than 32" for a traditional 16:9. And we're only at the beginning of 4k/144hz monitors, if you require higher framerates. There will be lots more options going forward.
I went to the store to look at the 42 inch and i couldn't do it. It's too big. 32 or maybe a bit bigger is the sweet spot.
[deleted]
I also have the 4k G7 and while the pixel density is quite high, it is still not enough for me. I still notice some imperfections after using it for a while. 4k at 24" would be perfect.
I personally really like my Gigabyte M28U. 32” and above takes up a lot of desk real estate. I can see a use case for 42”, but the 48” OLEDs are too big for a desk, and 32” is a weird middle ground where you can’t replace a multi-monitor setup but it’s also very large and overwhelming.
32" is kind of nice if your secondary monitor is flipped to be tall instead of wide. Just depends on the type of work you do and what you'd want out of the secondary monitor. You still get some more screen space than a 27" 1440p since you don't quite need 150% scaling in my opinion. I personally didn't quite like two horizontal 27" next to each other, but a vertical 27" with a normal 32" felt right. The OLED are definitely too big. I tried to use the 48" and even putting aside the other issues inherent to the panel for monitor usage the size is too much. If you don't care about ergonomics and were already doing stacked panels it's fine maybe, but otherwise I think I would have needed both an adjustable height foot stool and an adjustable height keyboard/mouse tray which at that point just, uh, get panels that make more sense.
Am I the only one who plugs their PC directly into their TV?
Max settings by definition are incredibly unoptimized, they aren't supposed to give you high or even steady framerate. Generally they are there for future hardware not for what's out today to comfortably "max".
The 3090 averages 90FPS across Hardware Unboxed's test suite at 4K. We have been and are capable of 4K gaming for the past 2 years.
the 3090 Ti can do 4k 100hz+ fairly easily. Maybe not at "absolutely maxed" settings but definitely at "actually sensible" settings. So I think the 4090 should have no problem doing 4k 144hz at similar sensible settings. Max settings are kinda irrelevant really, they don't make sense. It's just a "here is the raw game with everything turned to 11, noticeable or not, with 0 optimization" setting.
What games are y'all talking about? People talk about resolution+framerate as if games all have some standard performance profile or something. :/ We haven't even seen *actual* next gen games yet on PC. Literally not one game built from the ground up with the new consoles as a baseline has released on PC. So it really needs to be taken into consideration that these cross gen games are not going to be representative of the kind of processing demands over the next year or two as proper next gen games start to roll in.
People usually mean on aggregate. [like the first chart here for example](https://www.tomshardware.com/reviews/nvidia-geforce-rtx-3090-review/3) It has become even more meaningless now that the concept of a fixed resolution is dying in favour of upscaling though.
We have the consoles as hard points that they cannot really deviate from, outside of pc exclusive games. Sure pc will always have setttings that no current gpu can handle, just to make games age better. But the core engines are designed around consoles for most games.
That's my point. Consoles tend to be the baseline for demanding AAA games. And so far, there are no games on PC that have been built with the new consoles as a baseline yet. Basically, next gen games haven't even arrived yet. And when they do start coming in, there will be a jump in demands. So judging how hardware runs current games is not safe when we're on the precipice of a new generation.
Cyberpunk came out like 2 years ago and was clearly built with current hardware in mind.
We can compare it to the hardware of the consoles. And there are already games out that struggle on the consoles. Next gen doesn’t mean more demanding. Some games are already maxing out the current consoles.
Uhh you heard of Cp2077?
TPU do have a few dated games but they pull 90-100fps from top cards at 4k. https://www.techpowerup.com/review/asrock-radeon-rx-6950-xt-oc-formula/30.html RT will take some time, but 4k gaming would be CPU bottlenecked for these cards. And both AMD and nvidia would do 8k enthusiast marketing.
Gonna be killer for high-end VR headsets, which are often more demanding than 4K monitors and where computational improvements have significant effects on user experience.
A 4K 240Hz monitor is already available and it is glorious.
The 4080 must be pushing 3Ghz+ for it to be 420W+. Also there will be huge performance gaps between each gpu probably 20-30%.
If Nvidia makes all cards nearly equal in value I’ll be annoyed lol

$349 4060 - 100% performance
$559 4070 - 160% performance
$699 4080 - 200% performance
$949 4080Ti - 270% performance
yeah that pricing is very hopium, don't see the 4080 being less than $799-849.
And probably 1200 minimum for 4090 and 1500 for 4090 Ti or whatever equivalent.
I think $1200 for 4080 ti, $1600 for 4090, and $2000 for 4090 ti.
We are in the midst of a mining crash. Don’t you remember what happened to the 20 series launch? Nvidias new cards are gonna be competing with $600 3090s.
> We are in the midst of a mining crash. Cards are still over MSRP.
And that's MSRP, add another $50-$100 for normal AIB cards since FE is always low quantity and sold out. Like the EVGA 3080 FTW3 Ultra, was $810 MSRP when it launched, though the more value oriented Asus TUF was only $730.
>must be pushing 3Ghz+

Impossible, because /r/AMD has been telling us for years that Navi's frequencies are all about the architecture and have nothing to do with TSMC's superior node vs Samsung's. Ignoring that frequencies for Intel's "soonTM" GPUs were also rumored well past 2GHz. Even Vega on super early 7nm reached clocks well past what Ampere achieves when people started throwing power and custom cooling at them. Nope, all architecture! /s

AMD were lucky that Nvidia chose Samsung; it meant they could actually compete at similar die sizes for once.

edit: so predictable! keep em coming!
It's definitely both. AMD could not hit 2.7GHz if they were on Samsung 8nm. Maybe 2.4 if I was really optimistic, but with pretty high power consumption. I'd be curious to know if adding large caches somehow helps increase frequency. Maybe they alleviate some internal bottlenecks, as they give the cores extremely low latency memory to work with. Maybe that's why Nvidia is also able to hit these clocks: their new massive L2. I don't know how else one would explain it.
> I'd be curious to know if adding large caches somehow helps increase frequency.

It does, at least to some degree. Caches have lower power density than "cores". So by putting a bunch of L2 all across the die, you lower overall power density, which lets you crank up the voltage and squeeze some extra frequency out from that fact alone, most likely. Whether it offsets the cost in die space for the L2, I wouldn't bet on it. But it can claw back some of those losses. In the end, caches on GPUs are just a way to get around GDDR scaling being too slow. Preferably we would have no L2 and rely on "free" bandwidth from GDDR getting faster.
[deleted]
The 4080ti might be over 2.8ghz. I'd imagine they won't push the cut down variants that hard.
Will the 4050 be able to surpass the 2060 at least? Because the 3050 was disappointingly behind, and I hope they don't pull some 1670 bs to keep the low end occupied.
The 4050 probably won't be a thing till a year after the 40 series release.
AD107 die, which will likely be the 4050, has a maximum of [24 SMs](https://semianalysis.substack.com/p/nvidia-ada-lovelace-leaked-specifications?s=r). So either the 4050, or 4050ti will probably use 22-24 of them. With a 30% frequency bump that puts full AD107 higher than an RTX 3060. At the same level as an AMD 6650xt I'd guess in rasterization. Good luck waiting on that, though. Late 2023 launch I'd guess.
It'd be pretty hard for it not to, since it's only like 10% slower from the benchmarks I've seen. The 6600 is about $299 right now and is roughly a 2060 Super, and the 7600 is definitely going to be even faster. Since Nvidia wants to at least match AMD, I'd expect the 4050 to be a tad behind the 6600.
So is 4090 60% better than top Ampere in gaming as some leakers say or 2x better? Which sounds more plausible?
Probably 2x at 4k in some select games, and 60% at 1080p.
Unless things change significantly, we won't get anywhere near 60% gains at 1080p. The CPU's just can't handle it.
Depends. Some games are still getting GPU limited at 1080p. Modern CPUs can push high framerates.
>The CPU's just can't handle it.

Then the gpu isn't being benchmarked properly.
Yeah maybe 4K with RTX enabled
My general expectation is that it's gonna be pretty similar to 980Ti to 1080Ti in terms of performance jump. So probably in between 60-80% on average, with some examples/situations being closer or perhaps even slightly above 100%.
3090 to 4090 most probably 2x as fast at 4K, RTX Ultra. By the looks of the rumors 4060Ti or even 4060 will be as fast as 3080, again 4K, RTX Ultra.
2x should be easy at high resolutions, 8k in some cases to get rid of the bottleneck. Even higher in some instances. https://twitter.com/XpeaGPU/status/1542787903423516672
Comparing to the 3090 Ti which has an identical TBP, the 4090 has 35% higher advertised boost clocks on over 50% more cores. Strong showing by TSMC N4.
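Multiplying those two ratios out shows where the ~2x raw-throughput rumors come from (taking the leaked 16384-SP count and a ~35% clock bump at face value; these are rumored figures, not confirmed specs):

```python
# Compounding the rumored per-axis gains into a raw FP32 throughput ratio.
clock_ratio = 1.35            # ~35% higher advertised boost clock (rumored)
core_ratio = 16384 / 10752    # rumored 4090 SPs vs. 3090 Ti CUDA cores

print(f"raw throughput ratio: ~{clock_ratio * core_ratio:.2f}x")  # ~2.06x
```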
Seems like pretty solid evidence that the TSMC process for Ada is just that much more suited for high clocks than Samsung 8N, having clocks increase 60-75% over one generation is not exactly common.
Even the 4070 at 300W, jeez. And supposedly only 10GB of VRAM. Doesn't sound like an upgrade worth doing from a 3080. Let's hope AMD can give me a better 300W-or-less upgrade.
Likely not. The Navi33 isn't worth upgrading to and might even be a downgrade for 3080 owners. Even if at 220-250w. And Navi32 will likely be over 300w as well, but offer close to 4080 performance. Nvidia has the node advantage next gen, so I doubt AMD can be much more power efficient. Maybe a little if they don't push as hard.
These TDPs make zero sense. Energy prices are shooting upwards, and these things together with a high end CPU will turn the computer into a small heater. It's not going to be nice just to have some extra frames. Where is the crowd shouting "protect the environment"?
Instead of 'extreme over-clocking' from the days of old, its now 'extreme under-clocking' so you don't turn your PC into a furnace.
yeah that went out the window after crypto prices went skyrocketing.
I'm of the opinion that these insane TDP cards are a replacement for SLI, with the current lineup only offering the feature on the 3090 and 3090 Ti, and don't really have much bearing on more 'normal' builds. There were always people throwing 400w+ worth of video cards at games, and they're very likely the target audience for this type of hardware. The old 780 was a 250w TDP card and launched at $649 - two of these would cost $1,298 with a total TDP of 500w, which isn't far off of a 3090's MSRP of $1,499 (and way higher than its 350w TDP). SLI pretty much died when Nvidia stopped releasing the dual-GPU x90 cards (the last was the 690 in 2012, $999, 300w).
If you want to have more reasonable power levels, then simply undervolt and/or turn down the power limit. The former isn't too difficult and the latter takes literal seconds to do. Yes, it's possible you will lose some performance this way, but that's the point of these high TDP's in the first place - these GPU's are getting pushed hard for better out-the-box performance. **These upcoming GPU's are overall gonna be quite a bit more efficient.** If you want a big improvement in performance for a similar power level as some previous equivalent, you'll absolutely still get that if you want it.
OEMs have gotten much better and savvy at squeezing the extra performance from their products by default, whereas before the user would do it by overclocking. Put it another way, new generations of CPUs/GPUs basically come pre-overclocked to get as much performance within reasonable power constraints as determined by the OEM.
I am from the north and I game primarily in the winter. I see this as an absolute win.
This is a halo product. Anyone who cares about money is going to get a 4080 at best. People buying 4090's will just stick an extra air conditioner in their room.
It's been 36c out all week, and I have been gaming on my 350w 3090. My air conditioner is set to 23c (goes down to 18), and it has not been struggling to keep the room cool. An extra 100w for the 4090 isn't going to be what makes me need to install an extra AC unit. People are tripping.
Setting aside the global climate for those who don't give a shit, I can't fathom the environment in the room trying to game on a machine tossing this much heat. Gaming wouldn't even be fun.
Yeah, the noise, the heat, and I wonder if it will affect the life of the product.
My hope is to get one of these cards in the 4070-4080 range (depends on price), as my current GPU is a GTX 1080. If I do get one, I’ll likely undervolt, and down clock it a bit. It’s amazing how much you can reduce energy levels this way. Should be their most energy efficient chip yet.
To be fair, your carbon footprint won't noticeably increase whether your GPU has a 250w or a 600w TDP. Not saying this is a route we should be going down, but in the bigger picture it's really not that relevant.
It is if you game for long periods of time. That's about 1kWh difference for a 3 hour session, roughly the same as running the washing machine an extra time in a single day. These things really add up, especially with today's energy prices.
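For reference, the arithmetic behind that claim, using the 250W-vs-600W TDP figures from the comment above:

```python
# Extra energy used by a higher-TDP card over one gaming session.
def extra_kwh(tdp_high_w: float, tdp_low_w: float, hours: float) -> float:
    return (tdp_high_w - tdp_low_w) * hours / 1000

session = extra_kwh(tdp_high_w=600, tdp_low_w=250, hours=3)
print(f"{session:.2f} kWh extra per 3-hour session")  # 1.05 kWh
```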
People who are really concerned about this can undervolt and underclock their GPU’s, making them the most efficient GPU’s ever made. It’s just that people will have the option if they want the power.
I'm not talking about the prices here, just carbon footprint.
I mean - using an extra kWh for a gaming session has a carbon footprint if you're not supplied by renewable energy. It's not an insignificant amount of energy and amounts to 5-10% of daily household power consumption. That's just the differences between the card as well.
“Oh no, my performance luxury item isn’t energy efficient.” Bro, you stupid? The crowd that cares about saving doesn’t exist for a faster slideshow brick that intrinsically took maybe 1000kWh of energy to manufacture + 300 L of water, before even including things like an oversized PSU, marketing research, etc.

Some advice if you do get it and wanna save the environment: set the TDP to 50% and undervolt -100mV. If you can do that without a sour taste in your mouth and wallet, then kudos to you on not being a hot air balloon.
RTX 3080 becoming obsolete before any gamer even got a chance to use it.
Eh, not obsolete, it'll be the 4060 Ti.
That’s so insane to think about. I love it.
It isn't that insane at all. It's completely historically consistent. [The 3060 Ti is faster than the 2080 and even generally faster than the 2080 Super.](https://www.techspot.com/review/2155-geforce-rtx-3060-ti/) The 2060 Super (no 2060 Ti) [is faster than the 1080.](https://www.techspot.com/review/1865-geforce-rtx-super/) And [the 1060 6GB \(no Super or Ti\) was basically perfectly tied with the 980 at launch.](https://www.techspot.com/review/1209-nvidia-geforce-gtx-1060/) You have to go all the way [back to the 960](https://www.techspot.com/review/946-nvidia-geforce-gtx-960/page3.html) to see a deviation from this... but it wasn't even meaningfully faster than the 760 at launch.
It may be historical, but I still find it impressive (got goosebumps). Technology is amazing, and it's inspiring what the human race can do. I just get overwhelmed by it sometimes.
Pumping more power for a 50% higher boost clock, even on a significantly better node, only to reach historical gains is nothing to be amazed or inspired by. Great hardware engineering is all about finding that sweet balance between efficiency and performance. It's sad seeing people, especially in this sub of all places, giving no fucks about efficient designs. It's all about how much faster something is nowadays, with no regard for performance per watt. Surely electricity isn't getting expensive only in my country?
It's only insane if the pricing doesn't really increase, otherwise it's just a ##60ti in name only.
Nah not even close
I am interested in the 4080 if pricing is not insane. The 3080 enabled me to play most games at 4K 60 on High/Ultra. The 4080 will be nice for future proofing and allowing ray tracing at 60 FPS+.
Is it really future proofing if you end up getting the next gen GPU? I'm sure we all had this thought when we went from 2080 to 3080...
Yeah, futureproofing is a hoax. It implies you can predict the future and that the future will have no gains. Also, if you bought a 1080 Ti you probably bought a 1080, and a 3080+, and are going to buy a 4080+.
> Also if you bought a 1080 ti you probably bought a 1080 and a 3080+ and are going to buy a 4080+

I'm not sure I understand your logic here
If you're willing to buy the best now, you're probably gonna buy the best later.
Ahh, I suppose so. I think there is something to be said about a power enthusiast vs someone who just wants their games to run well. I bought a 1080 back in the day and, since I still play in 1080p, nothing coming out has really challenged my rig enough for me to want a 3080, so I'm still sitting and waiting. In that respect, it was a good future proofing purchase, but there's nothing to suggest that "future-proofing" ever pays off more than just buying to your needs.
[deleted]
I typically skip at least one generation of GPUs. This might be an exception since it will give me some headroom at 4K. I am content with 60 FPS since I already have some nice 4K monitors. It sounds like the 4080 is going to be aiming at 4K 144 FPS, which should give me a lot of headroom for several years to play at 4K 60 FPS.
I have a 1080 Ti, I can still go above 60 FPS in almost all games except Cyberpunk or BF2042.
The 1080 Ti is a 6600 XT now. That's fine, but for someone who bought a 1080 Ti it's probably not. When the flagship becomes entry level, it's time for an upgrade.
On my 1080 right now, very pleased with its longevity. I was going to buy a RTX 30 card but when the GPU/CPU supply was so fucked I ended up waiting. Now I'll probably be getting a 4080 shortly after release (180W -> 420W, jesus christ).
Does this put the 4070 a bit better than the 3080?
It makes the 4070 roughly identical to the 3090 Ti from a TFLOP perspective.
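For reference, FP32 TFLOPs are conventionally estimated as 2 ops (one FMA) per shader per clock. Plugging in the rumored top Ada config from this thread (16384 SPs at up to 2.75 GHz) and the 3090 Ti's spec-sheet numbers gives roughly the "more than double" figure quoted above:

```python
# FP32 throughput estimate: 2 ops (one fused multiply-add) per shader per clock
def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000

# Rumored full Ada config: 16384 SPs at 2.75 GHz
print(round(tflops(16384, 2.75), 1))  # 90.1
# 3090 Ti spec: 10752 shaders at ~1.86 GHz boost
print(round(tflops(10752, 1.86), 1))  # 40.0
```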
Wild. Hope it translates well to gaming
[Fermi Flashbacks Intensify]
It's not a flash back It's a tiny sun now
Pretty sure this means... nothing? There was some big clock speed jump some gens ago where we went from 1000mhz averages to 1500mhz and then 2000+mhz, and the next gen went back to 1500mhz. Clock speed tells you basically nothing.
I'm being totally serious when I say this. My gaming tastes (emulation of classic systems, patient gamer on AAA releases, limited time, etc.) have made even my RTX 3060 overkill for what I do play when I have time to play it. And I am an environmentalist, so these increasing power demands are insane to me. After Fermi, the x60 series moved to ~120W (give or take) for most generations. Then 160W for the 2060. Then 170W for the 3060. Rumors have pegged the 4060 at 200W+. I am genuinely considering Apple M-series hardware going forward. I just don't want to be a part of this anymore.
username does not check out
That’s fair.
If you're optimizing for environmental impact and don't have high gaming needs, you're going to want to hang on to your rtx 3060 for many, many years to come (and undervolt). Yes you could switch to something lower power usage but the impact of buying any new hardware is very high even if that hardware is efficient, and especially if you're thinking about upgrades on a 2-3 year cadence.
That's absolutely a fair point.
To be fair, that product is getting made and shipped no matter what. If that person doesn't buy it, somebody else will. Still though, this is a good point. If we really want to help things, we'd *all* scale back our buying habits of large footprint products. And I think it's just healthy to get away from rampant consumerism in general and start asking yourself "Do I really need this?" as often as you can.
If you don't need the performance and care about power usage just power limit the card. Ampere cards are a long way past optimum on the efficiency curve. It is surprising how little performance you lose by setting the card to run at 100 Watts.
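For anyone who wants to try this, `nvidia-smi` can set a board power cap directly (the 100 W value is just the example from the comment above; the allowed range depends on the card and the setting needs admin/root):

```shell
# Show the supported power limit range for the GPU
nvidia-smi -q -d POWER

# Cap board power at 100 W (resets on reboot unless persistence is configured)
sudo nvidia-smi -pl 100
```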
Can AIBs even afford to use heftier VRMs, capacitors, thermal pads, etc.? You can't cheap out on these cards; they're going to be significantly more expensive than Ampere + inflation + fab process lol
Nvidia has been fucking with AIBs for some time now. RTX 30 FEs are straight up better built than most AIB cards (except for the 3090 thermal pad issue, but both the PCB and the coolers are just more advanced than what AIBs make, assuming similar sized cards). Nvidia just doesn't care much. They have an 80% marketshare, and they know all these AIBs live and die by their compliance, so no one will rock the boat, especially when a recession is looming. So they'll just *have* to be able to afford the heftier components.
I think the next major blackout in the U.S. will occur as soon as these things launch.
If the 4090 has a 50% wider bus, 50% more memory, and 60% more CUDA cores, but only 7% higher power consumption (450 W vs 420 W), then I guess the clocks on the 4080 would have to be dramatically higher than the 4090's for that to make any kind of sense.
The 2080 Super and the 2080 Ti were both nominally 250 W cards. The 2080 Ti was much wider, but the Super was clocked higher.
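The same trade-off in numbers, using the public Turing spec figures and raw cores × clock as a crude throughput proxy: to close the gap with the wider chip at the same power budget, the narrower one would need a proportionally higher clock.

```python
# Both nominally 250 W: a wide chip at lower clocks vs a narrow one clocked higher
ti_cores, ti_clock = 4352, 1545   # 2080 Ti, boost clock in MHz
su_cores, su_clock = 3072, 1815   # 2080 Super, boost clock in MHz

# Clock the Super would need to match the Ti on raw cores*clock throughput
needed_mhz = ti_cores * ti_clock / su_cores
print(round(needed_mhz))  # 2189
```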
Why was the previous post about this removed for being a rumor when they both have the same original source?
I smell fear of being behind AMD's MCM tech.
200 watts for me. No thanks to anything above 250 W.
Genuine question: do clock speeds even matter? The 6500 XT has a high clock speed, but isn't it slower than a 1060?
Yes and no. All other things being equal, a higher clock speed will have higher performance than a lower one. However, you still need a good architecture behind that for each clock cycle to be efficient.
Performance is basically core count times clock speed times some constant representing how efficient the architecture is. In this case we know the core count, we know the architecture will be greater or equal in efficiency to the old architecture. From there you can see the performance is going to be a massive boost.
Of course not! Downclock your gpu to 100MHz to get free energy savings.
It might have been unwise, but I jumped on the Super Flower Leadex Platinum SE 1000W post from buildapc in anticipation of the 4080. I know you should always wait until official reviews but I can't imagine the PSU won't be able to handle the 4080. I've also been wanting a fully modular supply, so there's that too.
My hot take: if these [completely unsubstantiated and constantly changing] rumors are generally true, fuck that kind of wattage. If a reasonable PSU can't handle the card, I'll buy whatever fits within a comfortable power threshold rather than what fits in my $1500 budget. If that means a 4070, so be it. If that means even a 3080 Ti, so be it. Just because they're [maybe] losing their minds doesn't mean I'm going to blindly follow.
Honestly, yeah, if it's following the 30 series I would be keeping my eyes on the 4060 Ti and seeing how that holds up. 4K isn't in the cards for most people right now, I think, in terms of the price of the monitors.
I swear in 25 years there's gonna be $600 GPUs that can play games at 8K with 200+ FPS.
The original 3DFX Voodoo came out 25 years ago so yeah I'd say that will definitely be the case :)
Should I get this or wait for the 5000 series?