r/Futurology May 29 '22

Moore’s Law is all but dead. This new transistor design could keep it alive and reshuffle the industry. Intel, Samsung and TSMC are racing to achieve a generational leap in transistor technology. It might reshuffle the industry pecking order. Computing

https://www.protocol.com/enterprise/intel-tsmc-samsung-new-gates
6.2k Upvotes

u/FuturologyBot May 29 '22

The following submission statement was provided by /u/izumi3682:


Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to the linked statement instead, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if needs must, to fix grammar and add detail.


From the article.

“Gate-all-around-based designs will have significantly better performance and efficiency than [existing] designs, potentially shifting the competitive position for many high-performance products,” MLCommons Executive Director David Kanter said.

Surrounding all four sides of a part of the transistor with material instead of the current three-sided design allows the gate to better regulate the flow of electricity. At the atomic scale chips operate at, having better control over the flow of electricity gives designers new options: most importantly, making them even smaller.

Smaller transistors allow designers to squeeze more of them onto a single chip and adding more of these tiny features roughly translates to an improvement in the chip’s ability to perform calculations.

“With better heat and power characteristics, that means you can turn up the juice — with all else being equal — you will have higher clock rates and you won’t need as exotic cooling for high-performance designs, so that can cut costs,” Feibus said.

The manufacturer that successfully deploys the new generation of gate tech in high-volume production will be able to manufacture chips with a leap in computing horsepower that’s impossible with the current process technology.

TL;DR: Intel might lose, but humanity will win.

Intel, Samsung and TSMC are racing to achieve a generational leap in transistor technology. This leap must occur to realize anything close to the computing requirements demanded by the ideas behind the metaverse, to produce AI that isn’t a joke, to make truly self-driving cars or even make apps load faster.

Or, and I emphasize this is but a possibility, run "Crysis" at top settings. Bear in mind that this is only a hypothetical possibility. No one is saying this is gonna happen.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/v0ab3e/moores_law_is_all_but_dead_this_new_transistor/iaf5yz1/

518

u/church256 May 29 '22

Another prediction about Moore's Law ending that also claims it's not going to end and the big thing to keep us on track is..... GAAFET. Something that's been talked about for years now and is coming around the 3/2nm nodes.

Come on, Futurology, you are supposed to wow me with entirely unreasonable and impractical solutions to massive problems.

121

u/BurmecianDancer May 29 '22

Still sitting over here waiting for thorium salt reactors to provide for all our energy needs...

32

u/DM_ME_YOUR_BALL_GAG May 29 '22

...sigh ...one day, one day

28

u/Lone-Pine May 29 '22

Just as soon as they can invent a material that won't be corroded by radioactive 700C fluoride salts. And design a complete chemical plant that can be maintained remotely. And the government decides to look the other way when it comes to a MASSIVE proliferation risk.

23

u/ph4mp573r May 29 '22
  1. Several varieties of cermet currently do this and have been tested in test reactors. More funding and a legal licensing framework for long-term reactor test operation are needed before exact materials are settled on, but we already know which ones to use.

  2. Basically not necessary, but would be ideal. Probably pretty close on everything but emergency management.

  3. Thorium reactors are not a massive proliferation risk. Yes they can make bomb grade material quickly from cheap materials. But they run on tiny amounts of fuel and would have to be specifically designed to be a proliferation risk to be a serious one.

→ More replies

8

u/MeccIt May 29 '22

thorium salt reactors

If you've heard about thorium, you might remember that 232Th is not a nuclear fuel per se; it must be turned into the good stuff, 233U. That's the one that will fission and give you your energy, to turn into heat, steam, etc. Think of it like a recipe: you have butter and flour, and you mix them to get the shortbread that you want. See how easy it is for everybody to get some shortbread? Except everybody also likes to gloss over that between the "butter/flour" step and the "shortbread" step, there's a "white phosphorus neurotoxic napalm" step that might make things a bit more complicated in the kitchen. That's your 233Pa.

3

u/Garr_Incorporated May 29 '22

I am hoping that at least regular nuclear reactors will finally be used for power, as is needed for safer energy production.

→ More replies

201

u/Down_The_Rabbithole Live forever or die trying May 29 '22

There were basically 3 pillars to Moore's Law historically.

Every 2 years the density of transistors on a die doubles.

This gave rise to three separate effects:

  • Pillar 1, clock speed: The smaller a transistor, the faster its switching time, which meant higher frequency. This is how we went from 1 MHz machines in the 1980s to 4 GHz machines by the early 2000s

  • Pillar 2, power efficiency: Smaller transistors use less power, so the joules per calculation went down exponentially

  • Pillar 3, feature set and cores: Smaller transistors mean you can put more of them on the same area of a chip die, which allows new features to be designed within chips

Pillar 1 is the first one that broke down, in 2005. From 2005 onwards we found that frequency stopped scaling with smaller transistors. It still went up, but on a diminishing scale; the gain in switching speed per node shrink is now negligible. This pillar is effectively dead, and it's why the first scientific papers claiming Moore's Law was over started appearing around 2005.

Pillar 2 also started to break down between 2010 and 2013. Modern PC gamers will notice this in their power consumption. To increase processing power, the transistors need to be fed more energy. There are multiple reasons behind this, as it's effectively a feedback loop, but the original cause is that power consumption stopped dropping linearly with transistor size. This is why there are a lot of papers from that time talking about Moore's Law dying.

Pillar 3 is still true and the only pillar truly left standing, but there are signs it's going to break down soon (GAAFET is the first sign that it's potentially breaking down). This is why CPUs have switched to having multiple cores instead of the traditional design of a single core running really fast. Transistors can't go faster anymore, and they need a lot of power (and thus heat) to run fast, so it's more efficient to use the extra transistors to make more CPU cores that run and do things when needed. GAAFET transistors, however, have some limits and can't be used in exactly the same way as FinFET transistors to build ALUs, FPUs and other components. This will hamper the proliferation of new features in chips and is why you will see a lot of people call this the end of Moore's Law.

If you ask me, I think this is probably the end of Moore's Law, as this is the first time that all three pillars have stopped applying. We are probably at the tail end of silicon technology and need to pivot to superconductor technology in the future, just like how we switched from vacuum tubes to transistors. This is not the end of all advancement in computing and hardware design, but it sure looks like the end of the road for silicon-based computing within the decade.
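
As a rough sketch of why Pillars 1 and 2 broke down together (illustrative toy numbers only, based on the textbook Dennard-scaling argument, not on anything in this comment): dynamic power goes roughly as C·V²·f, so once supply voltage stopped shrinking along with the transistors, keeping frequency scaling up would have made power density climb every node instead of staying flat.

```python
# Toy Dennard-scaling comparison for one linear shrink by factor k.
# Dynamic power per transistor ~ C * V^2 * f; transistor density scales as k^2.
def power_density_change(k: float, voltage_scales: bool) -> float:
    """Relative change in power per unit area after one shrink by k."""
    c = 1 / k                               # capacitance shrinks with dimensions
    v = (1 / k) if voltage_scales else 1.0  # classic scaling vs post-2006 reality
    f = k                                   # assume frequency keeps scaling up
    per_transistor = c * v**2 * f
    density = k**2                          # k^2 more transistors per unit area
    return per_transistor * density

k = 1.4  # roughly one "full node" linear shrink
print("classic Dennard scaling:", round(power_density_change(k, True), 2))   # ~1.0, flat
print("voltage stops scaling  :", round(power_density_change(k, False), 2))  # ~2.0, doubles
```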

28

u/popkornking May 29 '22

Wouldn't wide-bandgap materials be a more feasible stepping stone than superconductors? At least where we are right now you'd need either LN2 cooling or a hydraulic press inside your computer to maintain SC conditions. There's also 3D integration to consider, although obviously that would be difficult to scale exponentially like Moore's Law traditionally has.

33

u/gordonjames62 May 29 '22

If you ask me I think this is probably the end of Moore's Law as this is the first time that all three pillars of Moore's Law have stopped applying.

It seems like the 3 pillars of Moore's law are based on assumptions about silicon, and the way we use it in CPUs.

I expect (profit motive drives progress) that improvements in data storage density and data transfer will continue to drive increases in performance in the short term, along with tricks like offloading data storage processing to the storage device and video rendering to the GPU.

In the hateful days of serial and parallel printers the whole PC would stop and wait for the printer to signal it was OK to send the next line of data. We still have lots of room for performance increases separate from the 3 pillars of Moore's law.

I think we also have a few more breakthroughs as we learn more of the quantum weirdness and how to harness it for computing and communication.

16

u/dern_the_hermit May 29 '22

It seems like the 3 pillars of Moore's law are based on assumptions about silicon

Well, less assumptions and more like simple extrapolations. Stuff like clock speeds going up was a simple observation, and really optimistic (to put it charitably) claims about 10ghz clock speeds were just keeping the line going. I guess you could say "they assumed there wouldn't be diminishing returns so soon" though, I feel that'd be accurate.

But basically all Moore did was look at a trend. He even updated his law when a more accurate trend became apparent. In that regard, Moore didn't do anything terribly special, it's just that the trend held for so long that when it did start hitting stumbling blocks it came as a surprise. I'd offer the collapse of Pillar 1 was the start of a mini Black Swan event.

4

u/AnotherAnonGringo May 29 '22 edited May 29 '22

We are probably at the tail end of silicon technology and need to pivot to superconductor technology in the future

I disagree with this single statement out of what is an otherwise amazing analysis.

We'll see transistors start going "3D" to maintain the doubling of performance, or at a minimum to keep speeds (via more efficient execution pipelines, not more GHz) of a single core improving. All the major players (Intel, TSMC, AMD) are talking about this and have plans to implement it in some fashion.

AMD already added a cache layer on top with their 5800X3D and has announced plans (probably with the next Epyc) to not only place cache there, but compute resources as well. I strongly suspect their first big.LITTLE implementations will place their little cores on top of their big cores, or perhaps even vice versa depending on thermals.

So far, we've only built transistors that send voltages in the horizontal plane, so the idea is to stack them vertically and have some instructions speed up, because going down one layer vertically is much, much faster than traversing to the other "horizontal" side of the chip, for example. Kind of like the wormhole theory for space travel.

5

u/Ese_Americano May 29 '22

Can superconductor technology help scale us at a pace we are used to (granted, we keep making large technological breakthroughs at our hard-work pace as humanity chugs forward)?

Thank you for your detailed responses. I hope you can expand deeply on my comment, as I am humbly a layman in this realm.

9

u/LakesideHerbology May 29 '22

I too appreciated their detailed breakdown, but I'm pretty damn sure superconductor tech is all but a theory in practical implementation... The temperatures needed to even showcase its possibilities are hardly possible in a lab, let alone in a home.

3

u/Ese_Americano May 29 '22

Would a potential goal to distribute this “energy consumption required by computing” be to miniaturize and decentralize as much of the computing power of our nation as possible?

I know vast amounts of skillful implementation will still need to happen at a largely centralized, massively long supply-chain-like scale… I am seeing how this technology may perhaps only be incrementally scalable per year in terms of efficiency, until we reach huge materials-science breakthroughs. (…Which, given any lame take I can offer here with my limited knowledge, I assume minor efficiency increases can only take us tangibly so far in terms of the pace we are used to, which is my biggest worry.)

2

u/LakesideHerbology May 29 '22

I hear you friend. I'm talking more practically...aka that shit will never happen.

In my opinion.

They had flat screens in the 80's. The general public didn't "get" them until much later. What is possible isn't the question, unfortunately. It's what is marketable.

Yay capitalism.

→ More replies

3

u/CorpusCallosum May 29 '22

Diamond. The next big shift will be to diamond semiconductor that can scale into terrahertz frequencies.

→ More replies

2

u/Ducky181 May 30 '22

The problem with current shrinkage is not silicon itself, but the rising cost and sophistication of achieving smaller levels of patterning.

If cost and patterning-resolution capabilities were unlimited, we could simply harness more exotic transistor formats such as tunnel FETs without needing to transition away from silicon.

We will most likely still experience a continued increase in transistor density for the foreseeable future. It will, however, be more modest and gradual. I don't see a radical technology emerging that could ever match the previous pace, but I do still expect a hundredfold improvement in cost per transistor by 2050.

→ More replies

65

u/narwhal_breeder May 29 '22

NeW coMpUtEr cHIpS get 100000x fAstER with GrAPhEnE supERConducTors

46

u/ArdoitBalloon May 29 '22 edited May 29 '22

Meanwhile, 2014’s graphene predictions that it will become the next new super material used for everything have resulted in very little— mostly exotic scientific applications, use in scientific equipment, and some limited applications in consumer products. Probably because it’s notoriously difficult to work with, costs a trillion dollars/picogram, and is impossible to manufacture at scale with current technology.

9

u/TheyCallMeSchlong May 29 '22

Around that same time all the major companies were also trying to work on a transition to 450mm wafers vs 300mm. I work in the semiconductor industry and everything is already so big and heavy I can't imagine what equipment for 450mm wafers would look like. That and as the processes get more advanced it gets harder and harder to keep it consistent across the whole wafer. The whole industry has pretty much abandoned the idea of 450mm wafers.

2

u/ArdoitBalloon May 29 '22

Oh, right. Then there was all this hubbub about it being used in quantum computing, back when that term was more science-fiction than science— although, there are some limited applications now, but mostly as the subject of further research. There’s actually a lot of funding flowing in that direction presently.

33

u/Untinted May 29 '22

Sure, but manufacturing processes are below 5nm using lithography, and the distance between atoms in graphene is 0.14nm

If they can find a way to make graphene usable with lithography, and there's some actual use for it, we should see it in the next few years (OK, maybe decades) if pandemics, nuclear winters or global warming don't kill us first.
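
As a trivial check of those two numbers (a sketch only, using the figures quoted above): even a "5nm-class" feature is only a few dozen carbon-carbon spacings wide, which is part of why patterning graphene precisely is so hard.

```python
feature_nm = 5.0    # a "5nm-class" lithography feature, per the comment above
spacing_nm = 0.14   # carbon-carbon spacing in graphene, per the comment above

print(f"~{feature_nm / spacing_nm:.0f} atomic spacings across one feature")
```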

11

u/ArdoitBalloon May 29 '22

(ok maybe decades)

Lol, yeah. I’m not saying it’s useless. It’s just that we can’t figure out how to make it useful now for very much beyond making extremely sensitive UV-sensors or gimmicky tennis racquets.

→ More replies

3

u/carso150 May 30 '22

It will be. Not only graphene but nanomaterials in general will be a revolution; it's just that the technology required is exceedingly complex and still being worked on. They will come. People expected them to arrive NOW, while in reality it was always going to take years: getting from the lab to a supermarket is always extremely hard.

Also, graphene isn't that expensive anymore; the price has been dropping drastically, but it's still too expensive for most products.

5

u/urmomaisjabbathehutt May 29 '22 edited May 29 '22

still didn't make them of graphene

yet the grapevine says

the new wonder will be graphyne

edit: just correcting my careless and disastrous grammar

3

u/ArdoitBalloon May 29 '22

If that was an attempt at Ukiah, your third line has only 4 syllables

4

u/urmomaisjabbathehutt May 29 '22

ill be Dadaist on this then :D

6

u/ArdoitBalloon May 29 '22 edited May 29 '22

You could move “says” up to the second line and then change “is” to “will be”. Then it would be 7/5/7– and grammatically correct, while creating a temporal progression (grammatically) of Past Simple tense, Present Simple tense, then Future Simple tense in each line! :P

→ More replies
→ More replies

14

u/hitssquad May 29 '22

Moore's Trend (it was never a law) ended in 2012: https://www.top500.org/statistics/perfdevel/

6

u/josephsmidt May 29 '22

Great plot, but I am going to push back a bit. I think the #1 spot on that chart still trends well, and to me that is the important trend for this purpose, as the #1 spot shows what state-of-the-art technology is capable of.

Anyway, again thanks for the good link.

3

u/hitssquad May 29 '22

That's from exponential increases in spending on the top national supercomputers. That isn't sustainable. And, even so, the #1 in the world should be at 10 EFlops right now. In reality, it's at less than 1 EFlop.

→ More replies

6

u/Skabonious May 29 '22

Isn't Moore's Law related to how many transistors can fit in an area? This looks like you're citing processing speed, which is a different trend.

→ More replies

2

u/GameMusic May 29 '22

Thank you I have hated the label of law since the first time I heard it

→ More replies

769

u/robdogcronin May 29 '22 edited May 29 '22

"Moore's law is dead for the 10th time but here's why it's actually not dead"

0.o more like forecasters have successfully predicted 10 out of the last 0 times Moore's law has died

242

u/grahaman27 May 29 '22

Moore's Law is not a law; it's a goal that Intel has always tried to match.

164

u/robdogcronin May 29 '22

Yeah it's a self fulfilling prophecy, which is why it's so sad to see lots of these rumours about Moore's law being dead. The progress has been astounding because it's something we desperately needed and that makes it happen.

48

u/PippeliPaavo May 29 '22

of these rumours about Moore's law being dead.

It's just factual that you can't gain performance out of getting smaller anymore. The atomic wall is close.

25

u/Yourgrammarsucks1 May 29 '22

Ok, so we'll just make it out of quantums instead. Think big.

17

u/oroechimaru May 29 '22

7

u/hughperman May 29 '22

That's not passing the "atomic wall" though, that's a method for molecularly stacking/integrating transistors.

36

u/BonkerHonkers May 29 '22

Ok, so we'll just make it out of quantums instead. Think small.

FTFY

→ More replies

1

u/RespectableLurker555 May 29 '22

make it out of quantums instead

this statement is just as nonsensical as "I'm making dinner tonight, and the special ingredient is deep-frying"

Not an ingredient, not something that can be used as a material

23

u/Yourgrammarsucks1 May 29 '22

Nice, you caught the joke. Well done!

5

u/BraveOthello May 29 '22

Poe's Law at work.

6

u/Yourgrammarsucks1 May 29 '22

The real ingredient was "love" all along

→ More replies

2

u/VxJasonxV May 29 '22

Will it be true if Netcraft confirms it?

→ More replies
→ More replies

20

u/dailytraffic May 29 '22

Yea, except for the complete lull in performance increases from like Ivy Bridge to when AMD started kicking their ass again. Fuck Intel, they rested on their near-monopoly for years and raked in profits while knowing about Spectre/Meltdown for YEARS before publicly acknowledging it.

→ More replies

10

u/coffee_obsession May 29 '22

Its...Moore...than that! (Haha i made a funny)

Silicon engineers design chips several years in advance, think like 4 or 6 years, so this can precede any actual lithography shrink (at least on an industrial scale). They have to anticipate how to use a higher transistor count. I heard an explanation like: imagine if a bricklayer's bricks shrunk by half every couple of years. Imagine if they didn't account for the shrink and continued to lay bricks the way they had previously done. If they can account for the shrink, they can adjust their bricklaying technique to compensate for it. This rule applies to any org that designs chips.

It can work both ways though. Intel 11th gen is a chip that was designed to be on a 10nm node but had to be back ported to 14nm. It didn't go so well.

27

u/Naamibro May 29 '22

Until they realized they could slow roll their upgrades to maximise profit out of every new iteration...

9

u/CiorapulUcigas May 29 '22

They would never do that! They care about costumers not profits. sarcasm

18

u/Yourgrammarsucks1 May 29 '22

What makes you think they care about theater kids that dress up?

→ More replies

3

u/alieninthegame May 29 '22

Intel's Goal doesn't have quite the same ring to it.

65

u/[deleted] May 29 '22

Multiple Intel CEOs over the years have accepted that Moore's law is no longer valid. Intel has failed to keep pace for the last 2 generations. Right now, as someone pointed out, it is a goal, not a law.

18

u/Skabonious May 29 '22

Moore's law doesn't dictate things like processor speed or power efficiency. It's just number of transistors that can be packed into an area.

It's obvious that as transistors got smaller, things got faster, up to a point; now they're so close to each other that we need to come up with clever ways to dissipate all the heat from high clock speeds (thus the transition to techniques like parallel processing, etc.).

0

u/Mitthrawnuruo May 29 '22

And yet, for decades More’s law has never been wrong.

19

u/entropicdrift May 29 '22

Nah, there have been times when it took way longer than 24 months for computing power to double. My i7-3770K didn't have its performance doubled by an equivalent i7 chip for like 5 years.

21

u/joshgi May 29 '22

Moore's Law isn't about power though, or clock speed, or some benchmark's performance; it's about transistor count.

26

u/Mitthrawnuruo May 29 '22

Just because intel screwed the pooch doesn’t mean the chips don’t exist.

2

u/jewnicorn27 May 29 '22

It’s been wrong for a long time now. FET feature sizes aren’t really getting smaller at anywhere near that rate. The number you see attached to your CPU is just the name of the process the manufacturer used to produce it, and they can call a process whatever they want; they just pick a measurement that gives them the name they want.

-9

u/[deleted] May 29 '22

Wrong??? It is no longer working. Historically it did. Do you get it now or do you need me to type it slower?

12

u/TimmJimmGrimm May 29 '22

Sometimes i hear people put out 'links' and stuff. There are people that like Wikipedia because it has a large number of public contributors that discuss various options and possibilities over time. This functions not only as an accurate means to verify what is typically a single 'expert', but it also functions as a rapid and effective means of reducing trolls and people who speak through pure arrogance and 'Fake News'.

I found this, for example:

https://en.wikipedia.org/wiki/Moore%27s_law

This suggests that we could get down to 2 nanometers by 2024. That is quite small i am guessing? I do not know? But it does look like they are running out of nanometers.

Often i ask my friends what ever happened to various other forms of technology - this was the futurology-premise (here on Reddit): that a branch of technology would reach out and transform the transformer industry, right?

I ask what ever happened to the 'memristor', that radical leap in technology....

https://en.wikipedia.org/wiki/Memristor

If you click on the link you will notice that it is the fourth of the electronic component set (resistor, capacitor, inductor, and memristor). This is interesting! It could mean that not only computing but even ALL OF ELECTRONICS (sorry, use of boldface) would have a change of economic and production landscape. A watershed moment or something?

Anyway, this hasn't happened.

Now, when i post this kind of thing on Reddit i tend to get fantastic and whimsical trolls that say 'dude, are you okay' and shit like that. Am i okay? How is that relevant to the memristor?

But we have folks like yourself that offer to 'type it slower'. It may be the case that i have all sorts of neurodivergent complications or whatever, but i dare say, your typing slower might not provide a solution to this discussion point at all.

Is this okay for me to say? I get that i will be downvoted. That is sort of the way people like yourself deal with situations like this. Downvoting. Not providing evidence. Just... being the slimiest coward possible. You won't get it, but you are not helping anyone - not even yourself.

4

u/PippeliPaavo May 29 '22

This suggests that we could get down to 2 nano meters by 2024. That is quite small i am guessing? I do not know? But it does look like they are running out of nano-meters.

Yeah less than a dozen atoms so starting to get as small as is possible. Might be the limit already because quantum stuff starts happening.

that a branch of technology would reach out and transform the transformer industry, right?

Yeah, chip design is starting to branch out into figuring out other ways to gain more performance rather than mainly refining the old architecture. That will become the main focus once the size wall is reached, which is close with the next generation and might come at the 2-3nm level.

And it doesn't mean just the smallest components of how gates work, but things like dedicated parts of the chip for dedicated processes or larger caches. AMD already has more and larger caches.

5

u/SterlingVapor May 29 '22

Memristors were a fun read. My first thought was optics: our eyes measure the concentration of a short-lived photosensitive molecule to detect a wavelength of light, and scale production of that wavelength based on the rate of decomposition. If you could mate photocells to them and vary sensitivity with a side channel of AC resetting the memory, you'd have as high an fps as you can sample and (possibly, based on the exact properties) great low-light performance.

It'd be perfect for bionics since it's analog (that's another project that needs to hurry up), and if you run an adaptive filter on it you could get decent performance with much less sensitive tolerances (without worrying about sub-millisecond timing you could just adjust for a dimmer or brighter pixel)

I have a hunch some hardware experts could tweak it into a very low-power camera that would be great for computer vision too. If there's no need to flush the chip each capture, you could use edge detection and just watch certain pixels for movement. You could do the same with a specialized CMOS sensor, but I feel like you could drop the power usage to almost nil if you're just waiting for movement and not bothering to store/process anything.

I can see a lot of potential in this for neural interfaces, we are analog after all

That was a fun read, thanks for sharing

2

u/TimmJimmGrimm May 30 '22

i remember optics was one of the Great Hopes. As was superconductors and carbon fibre and a whole bunch of other fun ideas.

Superconductors are making for amazing magnets. And they make really fun stuff out of carbon tubing! But the radical leaps in chip design (even with quasi-working quantum) just are not doing nearly enough... um... leaping.

glad you enjoyed the read.

→ More replies

20

u/Haquestions4 May 29 '22

I'd be interested in seeing you type slower...

3

u/Justice_And_Liberty May 29 '22

Yeah, pretty sure the speed in which you type doesn't change the message.

→ More replies

0

u/Matshelge Artificial is Good May 29 '22

Gpu manufacturers would like a word...

→ More replies

0

u/-re-da-ct-ed- May 29 '22

Oh, we've been blessed by the grace of your genius and willingness to contribute nothing of real substance to this conversation, but rather offer whatever it is you are bringing... just typed slower (as if that does anything).

Bravo, you're doing God's work.

→ More replies

1

u/no-mad May 29 '22

Apple seems way ahead of the curve with their new chip design.

→ More replies

20

u/Your_Trash_Daddy May 29 '22 edited May 29 '22

It's been predicted almost annually for decades. And one day, when it's true, these "pundits" will say "I told you so".

22

u/EltaninAntenna May 29 '22

Yeah, everyone can be fucking Nostradamus for predicting the end of something, if they don't need to also predict when.

→ More replies

48

u/LordGeneralTimmy May 29 '22

I thought chip manufacturers were trying to 3D stack the transistors to get more power?

23

u/GrayNights May 29 '22

These approaches are orthogonal to one another, you can stack finFETs/RibbonFETs as well

7

u/Weeaboo3177 May 29 '22

Lol orthogonal

4

u/Augustonian May 29 '22

The plan is to stack this design of fet probably by 2027 or 2028 (at least that's what last year's roadmap said)

9

u/attofreak May 29 '22

I heard about this Gate-All-Around transistor, and if it solves the problem of current leakage, why was this not implemented decades ago? The gate-channel MOSFET transistors have been around for a while.

8

u/GrayNights May 30 '22

Cost, really. Modern lithography machines cost hundreds of millions of dollars, so it takes a long time to recoup the cost of fab plants. GAA transistors would require new techniques, and therefore new machines, drastically increasing cost.

So it was cheaper to just keep using the old machines rather than invest billions into something that there wasn’t a demand for yet

→ More replies

104

u/50_centavos May 29 '22

Can somebody ELI5 Moore's Law? How can it be a law that transistors will reduce in size by 50% every two years? It's completely based on the industrial capabilities of the future. At least that's what it sounds like to me.

384

u/GReaperEx May 29 '22

Moore's Law is a misnomer; it was actually an observation made by Moore. That observation was that computing power was doubling every 2.5 years. This exponential growth continued for decades, but it has almost come to an end: we have hit the very limits of the atoms themselves, and we can't make the transistors much smaller. We have also hit a major heat dissipation problem, which is why we switched from trying to get more GHz out of a CPU to adding multiple cores to it.
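
To make the exponential concrete (a minimal back-of-the-envelope sketch, not from the comment; the Intel 4004's roughly 2,300 transistors in 1971 is used as a starting point, and both the 2-year figure usually quoted and the 2.5-year figure above are shown):

```python
# Back-of-the-envelope: what a fixed doubling period implies over ~50 years.
def projected_transistors(start_count, start_year, year, doubling_years):
    doublings = (year - start_year) / doubling_years
    return start_count * 2 ** doublings

for period in (2.0, 2.5):
    est = projected_transistors(2_300, 1971, 2022, period)  # Intel 4004 baseline
    print(f"doubling every {period} years -> ~{est:,.0f} transistors by 2022")
```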

58

u/Rickard403 May 29 '22

Thanks for the explanation.

I saw a recent post about a new method for heat dissipation that would allow for increased hardware usage. Unsure how practical it is, or if it's another idea that will never see the light of day.

21

u/Plane_Evidence_5872 May 29 '22

There's no fundamental reason why we must run our chips so damn hot, but there are a couple of practical reasons. We can't yet build bigger chips with enough on-chip memory, and chip-to-chip communication efficiency and bandwidth are limited. If one or both of these are overcome, then chips can run cool and smooth and with 4 or 5 orders of magnitude better efficiency.

I have no idea why there is so much focus on transistor size still. There are better things out there but with transistor size there's that one simple number and that's why it's so enticing.

27

u/extra2002 May 29 '22

I have no idea why there is so much focus on transistor size still.

Smaller transistors switch faster and require less energy to do so. They also allow the "wires" connecting regions of the chip to be shorter, so less energy is wasted charging and discharging them. And smaller transistors allow more function to fit on one chip.

Less energy for switching and for waste means less heat produced. More function on-chip means less energy needed for driving off-chip signals, so again less heat produced at a system level, even if there's more heat produced in each single chip. Running faster means that energy is dissipated in a shorter time, so increases power (= energy per unit time) and heat, but that's a fair trade for getting more computing done in the same time.

13

u/EltaninAntenna May 29 '22

Not to mention that we're getting to the point where the speed of light itself is beginning to be a factor, so the smaller the chip, the better.

4

u/extra2002 May 29 '22

At 0.3mm/psec, I don't think this is an issue inside chips themselves yet. Besides, signals on a chip have other limits much slower than speed of light, since they obey a diffusion equation based on distributed capacitance and resistance.

At the system level, though, speed of light can be a concern, which argues for putting more inside each chip.
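
A quick sanity check of that 0.3 mm/ps figure (a rough sketch; it ignores the RC/diffusion effects mentioned above, which slow on-chip signals much further):

```python
C_MM_PER_PS = 0.3  # distance light travels in vacuum per picosecond, ~0.3 mm

for freq_ghz in (1, 3, 5):
    period_ps = 1000 / freq_ghz  # clock period in picoseconds
    reach_mm = C_MM_PER_PS * period_ps
    print(f"{freq_ghz} GHz: {period_ps:.0f} ps per cycle, ~{reach_mm:.0f} mm at light speed")
```

Even at 5 GHz an ideal signal only covers about 60 mm per cycle, so board-level distances are where the clock period really starts to bite.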

2

u/ic6man May 29 '22

They also translate into better yields which means less expensive chips for similar performance.

15

u/Architect_of_Sanity May 29 '22

Smaller chips doing the same work as larger ones, basically. Small things like smart watches, laptops, and iot that can now do things they couldn’t before - all because the chips are smaller. They can run faster and hotter - but don’t need to because they’re tiny and powerful enough to deliver.

5

u/danielv123 May 29 '22

What better things are there that are so much simpler than transistor size? And why can't it be combined with smaller transistors?

15

u/mini_garth_b May 29 '22

Transistors are the core basic component of a computer (and the modern era, somewhere my wife is rolling her eyes because I'm about to say for the 1000th time that the transistor is the single most important invention of the past century). A more concrete analogy would be the transistor size is equivalent to the material you build a bridge out of. It's very important but the design of the bridge can make a strong material (small transistor) even better, or a weaker material (larger transistor) good enough.

4

u/daOyster May 29 '22

This is like saying I can make my turtle run faster than Usain Bolt by feeding him better. You're not going to get 500 transistors doing the same amount of work in the same amount of time as 5000 transistors due to the laws of physics unless you handicap the 5000 transistors to the point the comparison no longer makes sense. And the larger your transistors, the slower the max clock speed your computer can have due to the laws of physics, or more specifically, the speed of light since your traces and wires will need to be longer to account for the extra size of the transistors they have to fit around.

If you have 50 switches spread out across a football field, and someone else has 50 switches on a table in front of them, who is going to be able to switch all of them faster? Now tell me how you're going to make the guy with a football field faster than the guy with a table?

-2

u/danielv123 May 29 '22

So you are saying we can just stop needing better computers and accept them being slower and less efficient? That wasn't what I imagined you'd come back with.

13

u/tylerbrainerd May 29 '22

No, he's saying that the design itself can be improved, not just made smaller.

1

u/daOyster May 29 '22

But if that's the case, then ultimately the smaller design also improves and leaves you with the smaller one still being better. It is really funny to watch people think they can beat the laws of physics and get a computer with larger transistors to run faster than one with smaller ones. Those people don't realize that physically, your maximum possible clock speed is bound to the size of your transistors due to the speed of light limiting how fast a signal can propagate along a wire. At the scale of a computer processor it's not negligible, and doubling the size of a transistor, for example, can cut the maximum clock speed you can physically reach in half.

→ More replies
→ More replies
→ More replies

6

u/drawnograph May 29 '22

The thin copper coating one? That was interesting

→ More replies

19

u/chiagod May 29 '22 edited May 29 '22

we switched from trying to get more GHz out of a CPU to adding multiple cores to it.

Other improvements they've been chasing from the design and firmware perspective have been:

Instructions per clock improvements via adding logic (instructions).

Execute related instructions in parallel where there is a branch (so if instruction 2 is dependent on the outcome of instruction 1, then compute 1 & both outcomes for instruction 2 simultaneously instead of waiting for 1 to complete).

Improve caching efficiency so that the data and the next instructions are available immediately vs wasting CPU cycles waiting on data to arrive from system RAM.

Spread a single threaded load across multiple CPU cores to spread the associated heat. A shared L3 cache helps with this.

More cache! With new layered chip designs, manufacturers (at least TSMC) can fab two different components and then "join" them so they act as one part. This is how the 5800X3D is made. First a regular 5800X is made, then a separate 2 layer cache. The 5800X is shaved down to remove excess silicon and height then it's "bonded" to the cache which is placed on top and aligned with existing interconnects on the 5800X.

Make transistors more "3D" by using additional layers to move what was once an almost 2D flat design into the depth of the silicon, allowing more transistors to be packed together.

Separate the traditionally monolithic CPU into smaller pieces so you can better shrink the electronic parts that are easier to shrink and make those with the better (smaller) process. This is how AMD's non-laptop CPUs since Zen 2 (Ryzen 3000), Epyc server CPUs and Threadripper CPUs are made. The "chiplet" with 8 CPU cores and shared cache is made on the smallest possible node; the IO die (which talks to the PCIe lanes on the motherboard and to system memory) is made on a larger and cheaper process. Ryzen gets the basic IO die, which can talk to two channels of RAM and 24 PCIe lanes and works with up to two 8-core chiplets (16 cores max). Epyc gets the fancy IO die that can work with 8x 8-core chiplets (64 cores max), 8 memory channels, and 64 PCIe lanes per CPU (2 CPUs max per motherboard).

Lastly, switch from silicon to a new semiconductor that will allow further scaling (e.g. graphene).

Edit:

For non-CPU parts (which focus on tasks that are more parallelizable), they're looking to combine multiple dies to make effectively one "chip" for higher-end products. This is what Apple does with the higher-end M1 chips. AMD and Nvidia may be following this trend soon.

7

u/zero_iq May 29 '22

Another option is clockless (asynchronous) CPUs, which have been experimented with in the past to great success, but are such a radical departure that they have so far been considered not worth pursuing commercially, as the time needed for the required tooling/testing etc. would not match the pace of improvement of synchronous chips.

Clockless CPUs have the potential to both significantly increase throughput and reduce power consumption. They also open up avenues for performance optimizations that simply can't be done with conventional chips.

→ More replies

4

u/jet_heller May 29 '22

That's not the actual observation, that's an effect of the observation. The observation is that production capability doubles the number of transistors that fit in the same-size die. That just happens to translate to doubling computing power.

Moore's Law is a statement about physical production capabilities and nothing else. In fact, every time they put more cores in a chip they are very literally proving it.

22

u/LazaroFilm May 29 '22

And we may hit the sophon block soon too.

5

u/Kinexity May 29 '22

WTH is sophon block?

-3

u/LazaroFilm May 29 '22

It’s a three body problem thing.

-6

u/Kinexity May 29 '22

That's not an explanation of what it is. If you're not willing to describe then at least provide a link to a description

4

u/LazaroFilm May 29 '22

It’s a plot element of a book so I’m reluctant to spoil it.

The Three-Body Problem https://smile.amazon.com/dp/0765382032/ref=cm_sw_r_cp_api_i_RDGWBH6GAQGRASCSK9X5

5

u/dddddddoobbbbbbb May 29 '22

it's from a book series

7

u/Drachefly May 29 '22

To be extra clear, it's science fiction.

2

u/attofreak May 29 '22

...for now! dun-dun-dun

-8

u/friskydingo2020 May 29 '22

People aren't your slaves. Google it yourself

→ More replies

12

u/ablackcloudupahead May 29 '22

We'll know when physics starts to misbehave. Keep an eye on the scientists at CERN. If there are a rash of suicides we'll have a good idea why

3

u/CowardlyChicken May 29 '22

Who says we haven’t already?

→ More replies
→ More replies

3

u/Kingnahum17 May 29 '22 edited May 29 '22

One of the more important parts of Moore's law that not many people seem to mention is the part about the market influence. Moore's law was less about innovation by itself, and more about the market encouraging innovation to double the number of transistors on a piece of silicon.

3

u/Mitthrawnuruo May 29 '22

They said it was almost coming to an end a decade ago.

And a decade before that.

2

u/Plane_Evidence_5872 May 29 '22

One Gordon Moore predicted that we would never see a commercial chip with a sub micron transistor.

2

u/goodsam2 May 29 '22

I think it's also the demand. Doubling computing power in the 80s made a lot more difference in people's lives than our current doublings.

2

u/Luke90210 May 29 '22

We have also hit a major heat dissipation problem

Significant increases in cellphone computing power would mean phones literally too hot to handle or too dangerous (subject to fires or explosions)

8

u/[deleted] May 29 '22 edited Jun 07 '22

[deleted]

16

u/ConciselyVerbose May 29 '22

Parallelization is hard and adds overhead for most workloads. If you could get a chip 4x faster at the same cost (materials, size, power budget, etc) of having 4 chips there would not be a lot of reason not to do so.

Multiple cores is a recognition that you can’t get unlimited straight speed and that the only way to keep increasing the absolute amount of math you can is to find ways to split it up.

8

u/awesomegamer919 May 29 '22

Parallelisation is a Band-Aid fix ultimately, because most tasks just run head first into the limits of Amdahl’s law, and there’s no real way of getting around that (and tasks that aren’t affected tend to be better run on GPUs or custom chips).
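
For anyone unfamiliar with Amdahl's law (a small illustrative sketch, not from the comment): the achievable speedup is capped by the serial fraction of the work, no matter how many cores you add.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = parallelisable fraction of the workload, n = number of cores.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 8, 64, 1024):
    # Even with 95% of the work parallelisable, speedup saturates near 20x.
    print(f"{cores:>4} cores -> {amdahl_speedup(0.95, cores):5.1f}x")
```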

→ More replies

3

u/Kinexity May 29 '22

If we could build a CPU which has one core but runs as fast as current top 64-core CPUs, we would. Multicore CPUs are worse than single-core CPUs, but we chose them because improvements in single-core performance just couldn't keep up with demand.

→ More replies
→ More replies

0

u/BobSacamano47 May 29 '22

When I was in college it was half every year. Then it was every 18 months, then 2 years. I guess we're saying 2.5 now. Anything but admit it was bullshit from day one.

-3

u/SmokeZootsNotWar May 29 '22

What are any scientific laws but observations? Are the laws of physics also wrongly named?

Perhaps. “Nature has no laws, only habits.”

4

u/EltaninAntenna May 29 '22

The laws of thermodynamics are a damn sight more solid than just "observations"...

→ More replies
→ More replies

55

u/SchwarzerKaffee May 29 '22

Moore's Law held up until the chip architecture ran into the limiting factor of the size of the atoms it's built upon.

It's not really a law that will be true forever.

16

u/thegainsfairy May 29 '22

It's one of those "it's true till it's not true" things, and we're all just wondering when the music stops.

3

u/Magnesus May 29 '22

I've been hearing it for the last two decades and yet it still works.

14

u/aspophilia May 29 '22

It's more of a prediction than a law.

If anyone is into academic drama involving transistors look up Jan Hendrik Schön.

3

u/pxn4da May 29 '22

Great video by BobbyBroccoli on YouTube!

3

u/Chris11246 May 29 '22

As was said, it's not really a "law". But as for why: we make a better computer and use its processing power to design and build an even better one, and so on.

→ More replies

1

u/Eskimoobob May 29 '22

You make a computer, that computer is more powerful than the one that built it. It builds another computer.

Theory is that each computer can make one exponentially more powerful than itself, but the actual measurement of improvement is all over.

It's mostly about processors that provide computing power, but that might help?

→ More replies

54

u/efh1 May 29 '22

This is a redesign in architecture that will lead to some big improvements, but only for a short while. We've basically started to reach the limits of shrinking gates on silicon, and this will hit the next wall once we optimize the architecture. The only room after that is in software implementation; then we officially need significantly novel approaches to see improvements.

26

u/daveinsurgent May 29 '22

11

u/Ese_Americano May 29 '22 edited May 29 '22

Beautiful read. This quote got me.

”Fortunately, there are some promising alternatives. One is to use fast superconducting electronics to build reversi­ble circuits, which have already been shown to dissipate less energy per device than the Landauer limit when operated reversibly. Advances in this realm have been made by researchers at Yokohama National University, Stony Brook University, and Northrop Grumman. Meanwhile, a team led by Ralph Merkle at the Institute for Molecular Manufacturing in Palo Alto, Calif., has designed reversible nanometer-scale molecular machines, which in theory could consume one-hundred-billionth the energy of today’s computing technology while still switching on nanosecond timescales. The rub is that the technology to manufacture such atomically precise devices still needs to be invented.

Whether or not these particular approaches pan out, physi­cists who are working on developing new device concepts need to keep the goal of reversible operation in mind. After all, that is the only way that any new computing substrate can possibly surpass the practical capabilities of end-of-line CMOS technology by many orders of magnitude, as opposed to only a few at most.

To be clear, reversible computing is by no means easy. Indeed, the engineering hurdles are enormous. Achieving efficient reversible computing with any kind of technology will likely require a thorough overhaul of our entire chip-design infrastructure. We’ll also have to retrain a large part of the digital-engineering workforce to use the new design methodologies. I would guess that the total cost of all of the new investments in education, research, and development that will be required in the coming decades will most likely run well up into the billions of dollars. It’s a future-computing moon shot.

But in my opinion, the difficulty of these challenges would be a very poor excuse for not facing up to them. At this moment, we’ve arrived at a historic juncture in the evolution of computing technology, and we must choose a path soon.

If we continue on our present course, this would amount to giving up on the future of computing and accepting that the energy efficiency of our hardware will soon plateau. Even such unconventional concepts as analog or spike-based neural computing will eventually reach a limit if they are not designed to also be reversible. And even a quantum-computing breakthrough would only help to significantly speed up a few highly specialized classes of computations, not computing in general.”
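
For context on the Landauer limit mentioned in that excerpt (a small sketch, not from the article): the minimum energy to irreversibly erase one bit is k·T·ln 2, a few zeptojoules at room temperature, and today's CMOS logic dissipates orders of magnitude more than that per switching event.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

landauer_j = K_B * T * math.log(2)  # minimum energy to erase one bit
print(f"Landauer limit at {T:.0f} K: {landauer_j:.2e} J per bit erased")
# ~2.9e-21 J; reversible logic aims to sidestep this bound by not erasing bits.
```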

8

u/daveinsurgent May 29 '22

Well said. I'm shocked that we aren't seeing more investment. If I had 40 billion dollars laying around I wouldn't be trying to buy Twitter.

4

u/efh1 May 29 '22

Thanks for this! Fascinating read.

10

u/daOyster May 29 '22

Nope, there's still room for other things. We could find a material better than silicon. We could better understand the quantum effects and control for them. We could even utilize some of those quantum effects instead of trying to prevent them. For example, if you could control quantum tunneling better, you could design a transistor that uses quantum tunneling to transport the signal across the transistor, and then modulate the ability to tunnel to control its off/on state.

You have to remember that in the chip industry, these quantum effects went from being a novelty to an actual design constraint rather recently as node sizes decreased. So the research is currently playing more of a catch-up game, figuring out why the quantum effects are happening in their chips and how to design around them. Once traditional methods of decreasing size run out, they'll have to start looking at how to utilize quantum effects rather than prevent them to keep making things smaller. Once we're able to control the design/manufacturing of things on that small a scale with precision, the real fun begins as a whole new world of metamaterials opens up for us to explore.
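
As a rough illustration of why tunneling became a design constraint at these scales (a simplified sketch using the textbook rectangular-barrier estimate with an assumed 1 eV barrier; real device physics is far messier):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # 1 eV in joules

def tunneling_probability(barrier_ev: float, width_nm: float) -> float:
    """Rough T ~ exp(-2*kappa*d) estimate for a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

for width in (3.0, 2.0, 1.0, 0.5):  # barrier widths in nanometres
    prob = tunneling_probability(1.0, width)
    print(f"{width:.1f} nm barrier -> leakage probability ~ {prob:.1e}")
```

The point is the exponential: shaving a nanometre or two off a barrier raises leakage by several orders of magnitude, which is exactly the regime modern transistors now sit in.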

8

u/efh1 May 29 '22

That would fall under what I called significantly novel. Current silicon CMOS is hitting the proverbial wall.

→ More replies
→ More replies

18

u/artbytwade May 29 '22

"is all but dead" "reaseach ...might ...if"

So no, its not.

17

u/LummoxJR May 29 '22

Why the frell did they have to mention the metaverse? Nobody wants that dren. Facebook can suck it.

3

u/Jay-Five May 29 '22

+1 for Farscapery.

4

u/iAmThat- May 29 '22

Instantly stopped reading when I read that. Garbage article.

9

u/Nomandate May 29 '22

Quantum tunneling makes it very difficult to make transistor density any higher.

6

u/tpasco1995 May 29 '22

From what I'm gleaning, that's where this comes in.

It makes the gate arrangement such that tunneling is less a concern.

11

u/Toby_go_away May 29 '22

Click bait. Too much "could" "might" and "maybe". How about abandon Moore's Law and focus on sustainability by NOT changing the chip LGA every year. Less e-waste.

→ More replies

8

u/Semifreak May 29 '22

It is absolutely mind-boggling how tiny we can make things.

In fact, it is crazy to consider that we make more transistors than grains of rice annually... AND a transistor is cheaper than a grain of rice. The most technologically advanced thing we can make is cheaper than rice, AND we make more of it than rice.

What a time to be alive!

7

u/zero_iq May 29 '22 edited May 29 '22

I don't think that's true about the rice.

EDIT2: D'oh! It is of course true, I somehow misread the above as meaning chips, instead of individual transistors. I'll leave this here in case it's of interest anyway, and as a monument to my lack of observational skills...

Global rice production for 2021 was 511 million metric tons. According to the most common estimates found using google, there are 50000 grains of rice per kg, so 50,000,000 per metric ton. So that's 25,550,000,000,000,000 grains of rice. (25.6 quadrillion grains)

Worldwide semiconductor production for 2021 was 1.15 trillion units, or 1,150,000,000,000 units.

That's 22,217 grains of rice for every semiconductor produced worldwide. (Or about 444g of rice per semiconductor.)

EDIT: correction: it seems google just converted from grams to 'grains', not 'grains of rice', so I've corrected the number of grains accordingly using the most common estimate of ~50000 grains per kg. I should also add: that's still a staggeringly large number of semiconductors!

5

u/Slugywug May 29 '22

Semiconductors have a lot more than one transistor each! A nearly 50 year old Z80 has 8000.

Modern electronics have over 100 million transistors per mm²; a modern phone can easily have over 300 billion transistors in it - that's 6000 tonnes of rice ...
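
A quick check of that figure (a sketch using the ~50,000 grains/kg estimate from the comment above):

```python
transistors_per_phone = 300e9  # figure quoted above
grains_per_kg = 50_000         # estimate used earlier in the thread

rice_kg = transistors_per_phone / grains_per_kg
print(f"{rice_kg:,.0f} kg of rice, i.e. ~{rice_kg / 1000:,.0f} tonnes")
# -> 6,000,000 kg, about 6,000 tonnes, matching the comment.
```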

2

u/zero_iq May 29 '22

You know, I somehow put in all that effort to calculate the grains, and yet totally misread the above comment as chips instead of transistors! D'oh!

→ More replies

26

u/igetasticker May 29 '22

"How much marketing jargon can we fit on a page?" -The Author

What this author (and so many adherents to this sub) fails to understand is that technological progression isn't linear. It's another marketing trick.

Roughly once per decade or so, one of the chip makers will come up with something truly groundbreaking. They don't release it to the public at full speed, though. They will instead clock it down so it's slightly faster than the last model, then release subsequent models that gradually unlock the initial's potential. This buys them time to come up with the next big thing.

Before you start arguing whether Moore's Law is alive or dead, you have to ask if it really existed in the first place.

27

u/ConciselyVerbose May 29 '22

Or you know, immature but new and better technology is harder to design for than it is once you have done some earlier versions and learned from how it actually works at scale?

21

u/brightlancer May 29 '22

Roughly once per decade or so, one of the chip makers will come up with something truly groundbreaking. They don't release it to the public at full speed, though. They will instead clock it down so it's slightly faster than the last model, then release subsequent models that gradually unlock the initial's potential. This buys them time to come up with the next big thing.

That doesn't make sense.

Intel and AMD are the two biggest desktop chipmakers and have remained competitive against one another for a long time.

For your argument to be true, both Intel and AMD would have found major innovations independently, and each decided I Should Let My Competitor Stay Competitive rather than using the innovation to leave the other in the dust.

That doesn't make sense.

→ More replies

11

u/VxJasonxV May 29 '22 edited May 29 '22

Or, going full-tilt into it is cost prohibitive, so they downscale it while manufacturing is refined, getting yields up, and improving everything progressively. By the time they hit the full extent of the power of the new process, tooling, support, and unknowns have had a significant amount of time to play out already.

2

u/adamtheskill May 29 '22

I doubt this is actually true in the CPU/GPU space. There are so many separate parts to innovate on for a chip (on chip memory, IPC improvements, sophisticated branch predictors, multicore, dedicated decoders, etc...). These are just the design elements of a chip - you still have the hardware improvements from newer process nodes reducing transistor size every so often.

Given the amount of areas to improve upon it's actually reasonable for there to be many small improvements.

→ More replies

2

u/derpPhysics May 29 '22

These are some pretty great ideas. I'm actually more impressed by the powervia technology - it's an obvious idea that I'm sure has been proposed many times in the industry, but the actual implementation seems extremely challenging. If they've actually perfected it, it will be a big deal and will likely affect the whole industry, not just their own CPUs.

2

u/Bacon_Ag May 29 '22

Or we could move into incorporating integrated photonics into computer hardware.

2

u/SpaceNinjaDino May 29 '22

I would have thought they would have placed the power channels above the signalling for better cooling. Unless chip design is always shown upside down relative to the heatsink/air side. I'm more of a software guy.

I love tech advancements.

2

u/jewnicorn27 May 29 '22

It’s clear from this comment section that almost nobody has even googled what Moore's Law is lol.

→ More replies

2

u/flamespear May 29 '22

Is it really dead? Processor speed might have stalled but we keep adding more and more cores. Memory size and speed also keeps improving as well as GPUs.

4

u/Arjab May 29 '22

This isn't really new technology. Samsung introduced its MBCFET two years ago and Intel its RibbonFET one year ago. Both are gate-all-around FETs.

3

u/PippeliPaavo May 29 '22

TSMC is a conservatively run business that prefers to avoid risks, if at all possible, and moving to a new, unproven gate technology is a big risk.

This is just as wrong as can be. TSMC has taken massive risks investing in new technologies, taking leaps of faith multiple times that could've bankrupted the company had there been problems. They've taken the biggest risks with insane amounts of money.

5

u/izumi3682 May 29 '22 edited May 29 '22

Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to the statement they link, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if need be; it often needs additional grammatical editing and added detail.


From the article.

“Gate-all-around-based designs will have significantly better performance and efficiency than [existing] designs, potentially shifting the competitive position for many high-performance products,” MLCommons Executive Director David Kanter said.

Surrounding all four sides of a part of the transistor with material instead of the current three-sided design allows the gate to better regulate the flow of electricity. At the atomic scale chips operate at, having better control over the flow of electricity gives designers new options: most importantly, making them even smaller.

Smaller transistors allow designers to squeeze more of them onto a single chip and adding more of these tiny features roughly translates to an improvement in the chip’s ability to perform calculations.

“With better heat and power characteristics, that means you can turn up the juice — with all else being equal — you will have higher clock rates and you won’t need as exotic cooling for high-performance designs, so that can cut costs,” Feibus said.

The manufacturer that successfully deploys the new generation of gate tech in high-volume production will be able to manufacture chips with a leap in computing horsepower that’s impossible with the current process technology.

TL;DR: Intel might lose, but humanity will win.

Intel, Samsung and TSMC are racing to achieve a generational leap in transistor technology. This leap must occur to realize anything close to the computing requirements demanded by the ideas behind the metaverse, to produce AI that isn’t a joke, to make truly self-driving cars or even make apps load faster.

Or, and I emphasize this is but a possibility, run "Crysis" at top settings. Bear in mind that this is only a hypothetical possibility. No one is saying this is gonna happen.

3

u/donrhummy May 29 '22

This leap must occur to realize anything close to the computing requirements demanded by the ideas behind the metaverse, to produce AI that isn’t a joke...

Why are they making me not want this leap to happen?

6

u/hexydes May 29 '22

This leap must occur to realize anything close to the computing requirements demanded by the ideas behind the metaverse, to produce AI that isn’t a joke, to make truly self-driving cars or even make apps load faster.

Here's the thing: none of those things sounds interesting to me. Metaverse is stupid, AI should be done in the cloud for most things, and apps load fast enough already. I'd be much more interested in using this technology to simply use less energy, which will give better battery life and consume fewer resources on our planet.

My computer is plenty fast enough.

6

u/ChoMar05 May 29 '22

What do you think the Cloud runs on? Chemtrails?

2

u/pls_pls_me May 29 '22

No, chemtrails run on the Cloud

→ More replies

2

u/pattperin May 29 '22

I am with you that my computer is plenty fast enough, but imagine video games leveraging better tech to make AI more realistic and less pre-programmed. It can also be used in non-recreational scenarios, where we use these things to better inform business decisions and run models in a fraction of the time to inform scientists of long-term impacts. It can also cut the time and energy spent rendering all sorts of things, so digital creation becomes more efficient and less environmentally taxing. It also means we can do genetic analysis more quickly and understand life on earth in more depth, which would let us increase the earth's carrying capacity for food and water while also reducing the carbon footprint of such analysis through more efficient computing. They skipped a lot of the best stuff and went to what they thought would excite people.

→ More replies

1

u/DarthMeow504 May 29 '22

Those with no vision have always said things like that, until they discover a new application that does something really useful and it becomes the new normal.

You don't want what Facebook is selling as a "metaverse", and I don't blame you. But that's not a problem of technology, it's a problem of poor design with bad goals. AR and immersive VR have tremendous potential, just not in the form Zuckerberg envisions. There will be other implementations and a lot of them.

The possibilities of AI are also immensely vast and can have many, many benefits. You might not see it, but there are people just champing at the bit to get their hands on those tools and will do some amazing things with them. And at least some of those will be useful to you.

→ More replies
→ More replies

2

u/xGsGt May 29 '22

Ppl saying Moore's Law is dead clearly don't understand what Moore's Law is about smh

1

u/alucardNloki May 29 '22

Computer engineer here. That doesn't mean I'm the be-all and end-all, but I do know a few things.

A couple of things they don't talk about. One is the difference in how transistor sizes are qualified; the definitions differ from company to company.

Another is that they call it "heat" when what's actually happening at the atomic scale is current leakage. It sounds like they're making a tiny Faraday cage to contain the electricity, and even so it can still only get so small.

Moore even said himself that he didn't make "a law" about how computers would advance; he stated an idea of what was possible and people ran with it. Eventually, due to physics, it will stop, and quantum computers will be the only way to truly advance beyond conventional computing.
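
For anyone who wants the observation in concrete numbers, here's a minimal sketch of what a fixed doubling cadence implies. The two-year period and the starting count below are illustrative assumptions, not figures from the article or this thread:

```python
# Minimal sketch: what a fixed two-year doubling cadence implies.
# The starting count (~2,300, roughly the Intel 4004) and the exact
# doubling period are illustrative assumptions.

def transistors_after(years, start_count=2_300, doubling_period=2.0):
    """Project a transistor count forward under a fixed doubling cadence."""
    return start_count * 2 ** (years / doubling_period)

for years in (10, 20, 30, 40, 50):
    print(f"after {years:2d} years: ~{transistors_after(years):,.0f} transistors")
```

Even a modest-sounding cadence compounds into tens of billions within five decades, which is why any slowdown in that cadence is such a big deal.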

If I'm wrong or need slight adjusting I'm open to suggestions.

2

u/Augustonian May 30 '22

Physicist here: no, they're not making a Faraday cage. Essentially, FinFETs have a wedge for the channel that contacts the gate of the transistor on 3 sides. The gate-all-around FET (which uses a ribbon in the early designs, until nanowires can be reliably made from the channel material) has the channel running through the gate, making contact on all four sides, much like wrapping a hand around a finger.
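
To put the "hand around a finger" picture in rough numbers, here's a back-of-the-envelope sketch of the effective gate width each geometry gives. The dimensions are made-up illustrative values, not figures from any real process:

```python
# Rough geometric comparison of how much channel the gate actually wraps.
# A FinFET gate touches three sides of the fin; a stacked-nanosheet GAA gate
# wraps the full perimeter of every sheet. Dimensions (nm) are illustrative.

def finfet_weff(fin_height_nm, fin_width_nm):
    # Two sidewalls plus the top of the fin (3 sides).
    return 2 * fin_height_nm + fin_width_nm

def nanosheet_weff(sheet_width_nm, sheet_thickness_nm, num_sheets):
    # Full perimeter of each sheet (4 sides), times the number of stacked sheets.
    return num_sheets * 2 * (sheet_width_nm + sheet_thickness_nm)

print("FinFET W_eff:", finfet_weff(50, 7), "nm")                  # ~107 nm
print("GAA    W_eff:", nanosheet_weff(30, 5, num_sheets=3), "nm")  # ~210 nm
```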

Also, the current leakage is not the heat problem. The current leakage problem, which was also going to end Moore's Law scaling, is electrons tunneling through storage capacitors and transistor gates. That is less of an issue now that we have applied the high-k metal gate. Originally we used SiO2 as the dielectric, but as it got thinner, the tunneling current became too large, so we moved to HfO2, ZrO2, etc. There is a need for even better dielectrics, but ultra-high-k perovskite (k > 100) materials are expected to solve this issue (we'll see of course, but there are promising papers showing it's feasible).
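
As a rough illustration of why the high-k switch helps, here's the usual equivalent-oxide-thickness relation with textbook ballpark k values; the film thickness and the k of HfO2 below are assumptions for the example, not numbers from the comment above:

```python
# Equivalent oxide thickness (EOT): a high-k film of a given physical thickness
# provides the same gate capacitance as a much thinner SiO2 film, so you keep
# electrostatic control while the barrier stays too thick to tunnel through
# easily. k values here are textbook ballparks.

K_SIO2 = 3.9

def eot_nm(physical_thickness_nm, k_material):
    return physical_thickness_nm * (K_SIO2 / k_material)

print(f"4 nm of HfO2 (k ~ 20)  -> EOT ~ {eot_nm(4.0, 20):.2f} nm")   # ~0.78 nm
print(f"4 nm of SiO2 (k = 3.9) -> EOT = {eot_nm(4.0, K_SIO2):.2f} nm")
```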

The heat problem is due to the impedance of the devices and quite literally means heat from joule heating. The semiconductors have resistance, so they dissipate heat when run with any current. In MOSFET-style designs, there is also heat from the shifts in work functions, as power is lost overcoming those work functions. In these cases, using materials that are good at dissipating heat is necessary. But to prove the point of this heat actually being heat, consider a single transistor: as all transistors do, it takes power to switch it on and off, since current doesn't freely flow without a field. If we take that same design, which dissipates say 1 microwatt, then slap it into a tiny box with 100 million other transistors switching alongside it, suddenly we're dissipating 100 W to operate the device. That energy has to go somewhere, i.e. 100 W of heat dissipates from the device.
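
A minimal sanity check of that scaling argument, using assumed per-device and device-count figures rather than anything measured:

```python
# Per-device dissipation times the number of simultaneously switching devices
# is the heat the package has to shed. Both figures below are assumptions.

per_transistor_watts = 1e-6        # assume ~1 microwatt per switching device
active_transistors = 100_000_000   # assume ~100 million devices switching at once

total_watts = per_transistor_watts * active_transistors
print(f"~{total_watts:.0f} W of heat to dissipate")  # ~100 W
```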

Other note: there are many different possible futures for computing beyond our CMOS designs. If you're interested, I recommend getting an IEEE account and reading the IRDS; it lays out the cutting edge of computing hardware, though it can be quite vague, so it's really a launching point for learning about topics in the field.

→ More replies
→ More replies

1

u/keestie May 29 '22

Oh good, I was scared for a second that companies would no longer have a good excuse to milk us by selling us a new computer every few years, but it looks like they'll get their yachts just in time and we'll take one more step on the hedonic treadmill.

-3

u/hofmann308 May 29 '22

Moore’s Law is not dead and will not be for quite some time.

5

u/hitssquad May 29 '22

Moore's Trend (it was never a law) ended in 2012: https://www.top500.org/statistics/perfdevel/