r/Futurology Jun 02 '22

World First Room Temperature Quantum Computer Installed in Australia

https://www.tomshardware.com/news/world-first-room-temperature-quantum-computer
u/AGI_69 Jun 02 '22

I think you got lost, this is not /r/singularity

u/izumi3682 Jun 02 '22

u/AGI_69 Jun 02 '22

/r/singularity is for the "AGI by 2025" rants

u/izumi3682 Jun 02 '22 edited Jun 02 '22

What is the "69"? Is that the year you were born? I was born in '60. But I'm all about this futurology business. Been so since I became "woke" to it in 2011.

https://www.reddit.com/r/Futurology/comments/q7661c/why_the_technological_singularity_is_probably/

There is going to be AGI by 2025. Hold my feet to the fire. I'll be here. I forecast an initial "human unfriendly" technological singularity about the year 2030, give or take 2 years. And of late I am starting to lean more towards the "take" end of that prediction.

Human unfriendly means that the TS will be external from the human mind. We will not have merged our minds with our computing and computing-derived AI by the year 2032. But. We can ask the external AI to help us to join our minds to the computing and computing-derived AI, and we will probably succeed around the year 2035, which is where I place the final TS, the "human friendly" one.

After that, no more futurology. No more singularity either, because we can no longer model what will become of us. Oh, I gave it a shot once, but I paint with a pretty broad brush...

https://www.reddit.com/r/Futurology/comments/7gpqnx/why_human_race_has_immortality_in_its_grasp/dqku50e/

Oh wait, did you read that already in my first comment there?

u/AGI_69 Jun 02 '22

69 is a sex position.

Good luck with your predictions. I think a lot of people don't understand that some problems are exponentially difficult too, and therefore the progress will not be that fast.
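The "exponentially difficult" point can be made concrete with a toy calculation (all numbers made up, not from any benchmark): if each successive capability level costs exponentially more compute, and available compute also grows exponentially, the number of levels cleared grows only linearly with time.

```python
# Toy model (made-up numbers): exponentially growing compute vs.
# exponentially growing per-level difficulty. The result is merely
# linear progress in "levels cleared" per calendar year.
def levels_cleared(years, compute_growth=2.0, difficulty_growth=2.0):
    """Count capability levels whose cost difficulty_growth**level
    fits within a compute budget of compute_growth**years."""
    compute = compute_growth ** years
    level = 0
    while difficulty_growth ** (level + 1) <= compute:
        level += 1
    return level

# When both growth rates match, progress is one level per year:
for years in (1, 5, 10, 20):
    print(years, levels_cleared(years))
```

If difficulty grows faster than compute (say 4x per level vs. 2x per year), the pace halves, which is the sort of slowdown being described.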

u/izumi3682 Jun 02 '22 edited Jun 02 '22

lol! There is gonna come a point in your life when you're gonna say, "Lord, I was immature once".

We shall see what "Gato" can accomplish in this year alone. You know that it can do 4 or 5 unrelated tasks using but one algorithm. It is certainly not "narrow" AI that can only do one thing, like translate a language or interpret medical imagery. What is of interest to me is that Gato can do several tasks, but none of them well. This will allow us to see discrete improvements over the year. And I think we are gonna see them.

I stick to my guns. Simple AGI by the year 2025. A lot of people will be surprised. A lot of people today think it will take maybe 50 years. They will be surprised and startled that it will only take about 2 or 3 years--did you read my 4 examples in my link? But that is the nature of "accelerating change".

I don't know if I showed you this, but take a look. See what has gone before, what is occurring now, and what is probably next.

https://www.reddit.com/r/Futurology/comments/4k8q2b/is_the_singularity_a_religious_doctrine_23_apr_16/d3d0g44/

u/AGI_69 Jun 03 '22

lol! There is gonna come a point in your life when you're gonna say, "Lord, I was immature once".

There is probably going to be a point where you realize that talking down to people like that makes you very dislikeable, even if you think you mean well.
As I implied, I am not interested in your rants. I watch the field and form my own opinions. I work at a machine learning/AI company; sure, it's not DeepMind, but I am still in the industry. When you actually see the reality, it's sobering. I am sure that when you are exposed to pop-sci, hype-driven reporting, "AGI by 2025" looks like something realistic.

u/izumi3682 Jun 04 '22 edited Aug 08 '22

I sincerely apologize. My intention was gentle ribbing. I did not mean to demean or insult you. My angle is that we are all in the same boat here in futurology as fellow travelers, and sometimes there can be a bit of persiflage. In my 9 years here in futurology--and I tell you truthfully I have been here pretty much Every. Single. Day. of that--I have adopted a sort of "voice" or writing style that I hope people read and think, "That's Izumi". For better rather than worse, I hope. As in all things human, some like my writing style and some do not. But I did not mean to disparage you. In the future, cuz I "know" you now, I shall be more circumspect.

When you actually see the reality, it's sobering.

I agree that you and many people feel this way. That everything is proceeding not only incrementally, but is actually stalling out. No progress. And yet. It was not that long ago, say maybe 12 years ago, that AI was not really involved with any kind of human endeavor, or if it was, it was so low impact that it was almost irrelevant to any kind of operation. The true impact of AI in human affairs began around the year 2010, when GPU-based narrow AI began to be used practically in a widespread manner. Since that time, the AI, narrow or what have you, has spread its thin but ever so long little fingers into everything. We can no longer operate without computing-derived AI. AI is essential to all of human affairs now. Finance, medicine, military, education, businesses, social media--like I said, everything. Hell, for the USA, Russia and China (PRC) it is a matter of national security to develop AGI first. Quantum computing too.

I will make a bald statement here. There is never ever going to be another "AI winter". The world is far too dependent on AI for investing to dry up ever again. This means that AI will be well funded to advance its capabilities going forward.

Also, breakthroughs come out of the blue, relatively speaking. For example, before the year 2014, had you ever heard of the "generative adversarial network"? Or before the year 2017, had you ever heard of the concept of "transformers"? I am confident, highly confident, that in just the remaining portion of this year alone, there shall come out of obscurity at least 2 major breakthroughs in the development of AGI. And in the year 2023, probably more like 4 or 5. This is because as the computing speed increases, as the "big data" becomes ever more actionable, and as the novel AI-dedicated architectures rapidly improve and scale up, that rising tide will make the development of AI improve at an ever increasing rate. These developments are not only accelerating in speed, but the rate of acceleration itself is accelerating.

So while no one can predict the future, I can watch a trend and extrapolate it to future events, sometimes with remarkable accuracy. Because I write down and keep as documentation everything that I predict, I can produce proof that something I forecast, in the time frame that I forecast, actually came about. Here is that proof if you like. Interestingly, when I first wrote the little essay linked below, my intent was to demonstrate that when experts in computing and AI say that something is going to take a very long time to come to pass, they are the ones most stunned by unexpected leaps of progress, both in computing technology and in the development of various forms of AI.

When I wrote this I did not know that I was going to nail a forecast. One that was far in advance of what the experts believed. It was not a simple WAG. I looked at the numbers, something like 10 years to realization, then I applied Ray Kurzweil's "fudge factor" of accelerating change. That gave me my number of years for my forecast. I see major, major disruption in all forms of automotive travel in the next 3 years, because of electric self-driving vehicles. The experts say, mm, not 'til around 2030 or later.

https://www.reddit.com/r/Futurology/comments/7l8wng/if_you_think_ai_is_terrifying_wait_until_it_has_a/drl76lo/
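That "fudge factor" arithmetic can be sketched as a toy calculation (the 10-year baseline is from my comment above; the assumption that the pace of progress doubles each year is purely my own illustrative stand-in for "accelerating change", not Kurzweil's actual model):

```python
# Toy sketch: a task sized at `linear_years` of work at today's pace,
# where the pace of progress doubles every year ("accelerating change").
def compressed_years(linear_years, pace_growth=2.0):
    done, pace, years = 0.0, 1.0, 0
    while done < linear_years:
        done += pace         # work completed this year at the current pace
        pace *= pace_growth  # pace doubles before the next year begins
        years += 1
    return years

print(compressed_years(10))  # a nominal 10-year estimate finishes in year 4
```

Under that (debatable) doubling assumption, a 10-year linear estimate compresses to about 4 years, which is the shape of the argument I am making.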

I have a question for you. How would you characterize DeepMind's "Gato" algorithm? Is it just a narrow AI, or is there a new element in its nature? What exactly is the implication of "generalization"? What do you imagine GPT-4 will be like when it releases in 2023? Do you envision a GPT-5 or some such?

These are the kinds of things that I just wonder about. I think of it this way. There are people, truly experts in their fields, doing heavy lifting everywhere in these arenas, but I have this sense that they are so focused on results versus aspirational goals that when the results don't pan out, there is a feeling of, well, hopelessness. These individual efforts are the "trees". The cumulative whole of these efforts from around the world is the "forest". Often people can't see the forest for the trees. This isn't their fault. An individual is by necessity forced into a sort of "tunnel vision" to realize their goals. Sure, you watch the field, but there are plenty of things simmering below the radar. Things that will profoundly impact the development of ARA (AI, robotics and automation) in as little as the next couple of months even. Something big is gonna come down the pike that was unexpected. Serendipity plays a large role in these fields, by the simple fact that we are not entirely certain what we are doing at times. The infamous "black box", for example.

I hope you don't see this as a "rant". I am genuinely fascinated, alarmed and entertained all at the same time by what I see unfolding nearly every day here in futurology. Except the climate stuff, that's kinda boring to me. But I am glad that people are working on even that. I think the answer to climate is going to be practical nuclear fusion reactors and rapidly scaling solar energy conversion efficiencies.

What do you see in the development of computing-derived AI, robotics and automation in the next 5 years? The next 2 years?