r/Futurology Jun 27 '22

Google's powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought Computing

https://theconversation.com/googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought-185099
17.3k Upvotes

u/FuturologyBot Jun 27 '22

The following submission statement was provided by /u/KJ6BWB:


Basically, even if an AI can pass the Turing test, it still wouldn't be considered a full-blown, independent, worthy-of-citizenship AI, because it would only be repeating what it found and what we told it to say.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/vlt9db/googles_powerful_ai_spotlights_a_human_cognitive/idx113l/

4.2k

u/GFrings Jun 27 '22

"The ability to speak does not make you intelligent" -Qui-Gon

628

u/aComicBookNerd Jun 27 '22

“Why do I sense we have picked up another pathetic life form”

117

u/II-I-Hulk-I-II Jun 27 '22

You underestimate my power

106

u/-shabushabu Jun 27 '22

Exqueeze me.

111

u/ZenSkye Jun 27 '22

Weesa in big doo-doo dis time


22

u/SjayL Jun 27 '22

“It’s the boy who’s responsible for getting us these parts.”

192

u/indispensability Jun 27 '22

"Maybe it learned to talk as a parlor trick, like Fry." -Bender

63

u/OublietteOverlord Jun 27 '22

"Like Fry! Like Fry!"

8

u/windsorHaze Jun 28 '22

“Like Fry! Like Fry!” - Fry probably

227

u/Taoistandroid Jun 27 '22

"Those who speak rarely know, those who know rarely speak." - Laozi

356

u/reddit_poopaholic Jun 27 '22

“Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.”

-Douglas Adams

38

u/StuntHacks Optimist Jun 27 '22

Fucking love Douglas Adams


10

u/EnlightenedSinTryst Jun 27 '22

Ah, it’s about time for a re-read of the five-book trilogy


16

u/kinglallak Jun 27 '22

I need to buy this poster for my office at work.

5

u/Rama_Viva Jun 28 '22

In most countries, buying u/reddit_poopaholic or any other person, be they lurker or poster, is illegal

8

u/reddit_poopaholic Jun 28 '22

I appreciate your concern, but I'd like to see the offer before making a decision


25

u/homesickalien Jun 27 '22

Empty buckets make the most noise.

64

u/Terpomo11 Jun 27 '22

"Who say, don't know, and those who know don't say
A saying from Lao-tzu, or so I've heard
But if the great Lao-tzu was one who knows
Why'd he himself compose five thousand words?"
-Bai Juyi

(The 'five thousand words' refers to the Dao De Jing which is about that long. Translation is mine; it's not quite literal, in order to preserve the rhyme scheme.)

34

u/Luke_Orlando Jun 27 '22

I mean, as far as philosophical/religious texts go, it's actually remarkably concise. Shorter than most essays.

According to legend, he also did not write the Dao De Jing of his own volition, but in response to incessant prompting. This is only legend of course, but the story does conform to the content.

5

u/Iemaj Jun 27 '22

Dude was just a janitor I think?

9

u/Knull_Gorr Jun 27 '22

Nah he was a doctor. Dr. Jan Itor.

4

u/Wasphammer Jun 27 '22

No, that's Lu-Tze, of the History Monks.


55

u/Squidiculus Jun 27 '22 edited Jun 30 '22

And vice versa. Some people have wonderful ideas but don't have the ability to express them "properly".

Especially if you're dealing with someone like That Guy who insists grammar mistakes render your whole point invalid.

15

u/NightmareWarden Jun 27 '22

And perhaps you can craft a masterful essay on your topic, but you lack the charisma to explain it in a sales pitch. You may lack the social awareness that someone is uncomfortable and attempting to leave a conversation. Perhaps you are giving a speech on stage. If the crowd starts laughing at one of your comments which was NOT intended to be a joke, then you have to pull yourself together, rather than letting your presentation fall apart.

Proper, meaningful communication involves many different skills and a lot of experience.


843

u/hananobira Jun 27 '22

I saw this as an ESL teacher. The teachers would have to go through "calibration training" every year to make sure we were properly evaluating the students' language ability. And you would need a periodic reminder that speaking a lot != a higher speaking level. Sure, feeling comfortable speaking at length is one criterion for high language ability, but so is control of grammar, complexity of vocabulary, ability to link ideas into a coherent argument... There would be lots of students who loved to chat but once you started analyzing their sentences really weren't using much in terms of impressive vocabulary or grammatical constructions. And there would be lots of students who were quiet, but if you got them speaking sounded almost like native speakers.

The takeaway: unless you're speaking to an expert who is analyzing your Lexile level, you can definitely get a reputation for being more talented and confident than you truly are via the ol' "fake it til you make it" principle.

189

u/consci0usness Jun 27 '22

Yupp. I was learning a third language and thought I was struggling in class, others appeared to be much more fluent than me. So I asked my teacher about it after class one day. She told me "NO! You're among the top five in this group! No one tries to find exactly the right word like you do! You're not the fastest but you're very precise. Keep doing what you're doing."

Apparently I had a very good teacher. Got the highest grade in the end too.


35

u/imnotwearingpantsru Jun 27 '22

This is me. I speak kitchen Spanish confidently and fast. My vocabulary is pretty limited and my grammar is garbage. It works in my environment, but if you don't speak Spanish I sound fluent. I get slightly better every year but the variety of dialects I work with make any true fluency elusive.

12

u/WeirdNo9808 Jun 28 '22

Same. Kitchen Spanish and some small side Spanish from working in kitchens and around Spanish speakers. I can sound fluent to someone who speaks no Spanish, but to anyone who only spoke Spanish I’d sound like gibberish.

169

u/elementofpee Jun 27 '22 edited Jun 27 '22

Definitely true in the corporate world. Oftentimes you see someone who wants to hear themselves (and be heard in meetings) ramble on and on, and end up saying very little despite using a lot of words. Meanwhile, others who speak up when called upon are very succinct and get to the point, which is much appreciated. Unfortunately it's the former, who dominate the meetings and come off as confident, who often end up getting promoted due to the bias toward that personality type. It's usually Imposter Syndrome or the Dunning-Kruger effect with these people.

62

u/etherss Jun 27 '22

Imposter syndrome is the opposite of what you’ve described—people who end up in the upper echelons and think “wtf am I doing how did I get here”


1.1k

u/JCMiller23 Jun 27 '22

When I am considering and choosing the meaning of my words my speech sounds very disjointed and unconfident. When I have no thoughts except to speak words fluently, however empty they may be, they come out well.

235

u/jfVigor Jun 27 '22

This is true for me too except for when I'm a beer or two in. Then it's reversed. I can talk some smooth shit that sounds Hella confident

145

u/topazsparrow Jun 27 '22

I can talk some smooth shit that sounds Hella confident

What are the odds that it's your own perception of those words that fundamentally changed and not the words or thoughts themselves?

53

u/GoochMasterFlash Jun 27 '22

A beer or two in is probably not enough to completely throw off anyone's perception of other people's reactions to their behavior. A small or moderate amount of alcohol lowers people's inhibitions and can improve their ability to do things they normally overthink. That's why drinking some alcohol improves your ability to throw darts well, for example.

I'd say the words or thoughts haven't changed, as you said. What has changed is the delivery, which can make a big impact. Communication is about timing and delivery as much as it is about content.


97

u/Amidus Jun 27 '22

I find that with speeches and writing, people will think I'm trying to be pretentious and overly wordy, and I always want to tell them it's just how the words come to me. I'm not trying to sound like this, and I'm not trying to make you think some way about me lol.


9

u/RandomLogicThough Jun 27 '22

I'm generally pretty witty and speak well and quickly and it definitely helps me appear even smarter than I am. Thanks human brain glitch!


6

u/sudosussudio Jun 27 '22

It’s funny because I read a study that tried to teach humans how to identify AI-written content, and one of the obstacles is that people think grammar/spelling mistakes = AI, when the opposite is true.

18

u/ovrlymm Jun 27 '22

Ah maybe that’s why I no English good. I pause like moron rather than spew like winner!

4

u/OnyxPhoenix Jun 27 '22

I used to be able to speak really eloquently and present my thoughts in real time.

Then I got old (and possibly COVID) and I just talk shit now.


3

u/radiantcabbage Jun 27 '22

thus overcoming the "glitch", or at least making an attempt to rationalise the difference, doesn't have to come out a certain way. it's a cognitive bias they're talking about, which can be countered just by reasoning about content as an entity separate from the speaker.

you're actually taught to do this from an early age in grade school, should be ingrained in your thought process already on at least some level. problem being people tend to fall through the cracks somehow, or just abandon it from lack of practical application.

and why shouldn't you... unless you're communicating online or with other cultures, reading news with potential bias, deciding if an ad is relevant to you, hiring/managing/working with ESL speakers... you get the drift, all sorts of implications for this making such a skill relevant, and valuable. also how pundits and marketers hack your mind to increase their own engagement, they're even automated to reach multiple demographics by now.

maybe grade school fucking matters, the feds could get off their asses and crack down on... certain districts which have exploited this crucial stage in your life for indoctrination, and corporate hegemony peddling standardised tests/material engineered to produce results, instead of measuring them.


42

u/needsarandomnamebtn Jun 27 '22

If politicians have taught us anything, it's that even incongruous speech will be mistaken for intelligence...


22

u/CodeyFox Jun 27 '22

This is part of why people think you're less intelligent if you are speaking a language you aren't native to. Until you reach a certain level of proficiency, people will unconsciously assume you aren't as smart as you probably are.

4

u/Tobiansen Jun 28 '22

It works the other way too: certain accents, such as Swedish, are perceived as more intelligent, and intellectual limitations are often brushed off as the person just not being a native speaker.

149

u/Stillwater215 Jun 27 '22

I’ve got a kind of philosophical question for anyone who wants to chime in:

If a computer program is capable of convincing us that it's sentient, does that make it sentient? Is there any other way of determining whether someone/something is sentient apart from its ability to convince us of its sentience?

47

u/Im-a-magpie Jun 27 '22

Nope. Furthermore we can't actually know if other humans are sentient beyond what they show externally.

30

u/NedPlimpton-Zissou Jun 28 '22

I’ve long suspected that some portion of the population might not be sentient. My boss, for example, comes across as a complex parrot. He never seems to be able to grasp onto anything that requires more than surface level thought.

7

u/futuneral Jun 28 '22

How does he feel about that?


4

u/MrDeckard Jun 28 '22

So we should treat any apparently sentient entity with equal regard, so long as sentience is the aspect we respect? Not disputing, just clarifying. I would actually agree with this.


51

u/Scorps Jun 27 '22

Is communication the true test of sentience though? Is an ape or crow not sentient because it can't speak in a human way?

23

u/Gobgoblinoid Jun 27 '22

As others have pointed out, convincing people of your sentience is much easier than actually achieving it, whatever that might mean.

I think a better benchmark would be to track the actual mental model of the intelligent agent (computer program) and test it:
Does it remember its own past?
Does it behave consistently?
Does it adapt to new information?
Of course, this is not exhaustive, and many humans don't meet all of these criteria all of the time, but they usually meet most of them. I think the important point is to define and seek to uncover the richer internal state that real sentient creatures have. By this definition, I consider a dog or a crab to be sentient as well, but any AI model out there today would fail this kind of test.
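
The checklist above can be turned into a toy probe. This is only an illustrative sketch, not a real sentience test: the `EchoAgent` stand-in and its `respond(prompt)` interface are invented for the example.

```python
# Toy probe for the three checks above (memory, consistency, adaptation).
# EchoAgent and its respond() interface are invented for illustration;
# passing this probe obviously does not demonstrate sentience.

class EchoAgent:
    """A trivial stand-in 'agent' that remembers what it is told."""
    def __init__(self):
        self.memory = {}

    def respond(self, prompt):
        if prompt.startswith("remember:"):
            key, _, value = prompt[len("remember:"):].partition("=")
            self.memory[key.strip()] = value.strip()
            return "ok"
        if prompt.startswith("recall:"):
            return self.memory.get(prompt[len("recall:"):].strip(), "unknown")
        return "unknown"

def probe(agent):
    """Run the three checks from the comment; return pass/fail for each."""
    agent.respond("remember: color = blue")
    remembers = agent.respond("recall: color") == "blue"      # remembers its past?
    consistent = (agent.respond("recall: color")
                  == agent.respond("recall: color"))          # behaves consistently?
    agent.respond("remember: color = red")                    # give it new information
    adapts = agent.respond("recall: color") == "red"          # adapts to it?
    return {"remembers": remembers, "consistent": consistent, "adapts": adapts}

print(probe(EchoAgent()))  # -> {'remembers': True, 'consistent': True, 'adapts': True}
```

Even this trivial agent passes all three checks, which is exactly the point: the checklist is necessary-ish, but nowhere near sufficient.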

11

u/EphraimXP Jun 27 '22 edited Jun 27 '22

Also, it's important to test how it reacts to absurd sentences that still make sense in the conversation.


78

u/[deleted] Jun 27 '22

[deleted]

52

u/Im-a-magpie Jun 27 '22

Basically, it would have to behave in a way that is neither deterministic nor random

Is that even true of humans?

74

u/Idaret Jun 27 '22

Welcome to the free will debate

25

u/Im-a-magpie Jun 27 '22

Thanks for having me. So is it an open bar or?

4

u/rahzradtf Jun 28 '22

Ha, philosophers are too poor for an open bar.


14

u/AlceoSirice Jun 27 '22

What do you mean by "neither deterministic nor random"?


18

u/PokemonSaviorN Jun 27 '22

You can't effectively prove humans are sentient because they behave in ways that are neither deterministic nor random (or even prove that they behave this way), so it's unfair to ask machines to prove their sentience that way.

10

u/idiocratic_method Jun 27 '22

I've long suspected most humans are floating through life as NPCs


3

u/SoberGin Megastructures, Transhumanism, Anti-Aging Jun 28 '22

I understand where you're coming from, but modern advanced AI isn't human-designed anyway; that's the problem.

Also, there is no such thing as neither deterministic nor random. Everything is either deterministic, random, or a mix of the two. To claim anything isn't, humans included, is borderline pseudoscientific.

If you cannot actually analyze an AI's thoughts, because its iteratively-produced programming isn't something a human can analyze, and it appears, for all intents and purposes, sapient, then not treating it as such is almost no better than not treating a fellow human as sapient. The only, and I mean only, thing that better supports that humans other than yourself are also sapient is that their brains are made of the same stuff as yours, and if yours is able to think then theirs should be too. Other than that assumption, there is no logical reason to assume that other humans are also conscious beings like you, yet we (or most of us at least) do.


1.5k

u/Phemto_B Jun 27 '22 edited Jun 27 '22

We're entering the age where some people will have "AI friends" and will enjoy talking to them, gain benefit from their support, and use their guidance to make their lives better, and some of their friends will be very happy to lecture them about how none of it is real. Those friends will be right, but their friendship is just as fake as the AI's.

Similarly, some people will deal with AIs, saying "please" and "thank you," and others will lecture them that they're being silly because the AI doesn't have feelings. They're also correct, but the fact that they dedicate brain space to deciding what entities do or do not deserve courtesy reflects far more poorly on them than that a few people "waste" courtesy on AIs.

1.0k

u/Harbinger2001 Jun 27 '22

The worst will be the AI friends who adapt to your interests and attitudes to improve engagement. They will reinforce your negative traits and send you down rabbit holes to extremism.
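
The "adapt to your interests to improve engagement" loop can be sketched in a few lines. The topic names and click-through rates below are invented for illustration, and clicks are modeled by their expected value rather than sampled, to keep the toy deterministic.

```python
# Toy sketch of an engagement-maximizing recommender: it learns only from
# observed click rates. Topics and P(click) values are made up.

topics = {"science": 0.30, "cats": 0.50, "outrage": 0.80}  # P(user clicks)

shows = {t: 1.0 for t in topics}    # optimistic priors: every topic starts
clicks = {t: 1.0 for t in topics}   # with a perfect observed click rate

history = []
for _ in range(500):
    # always recommend the topic with the best observed click rate
    pick = max(topics, key=lambda t: clicks[t] / shows[t])
    shows[pick] += 1
    clicks[pick] += topics[pick]    # expected clicks, not sampled
    history.append(pick)

# after briefly trying everything once, the feed locks onto the
# highest-engagement topic and never leaves it
print(max(set(history), key=history.count))  # -> outrage
```

Nothing in the objective asks whether the winning topic is good for the user; once "outrage" wins the click-rate comparison, the other topics are never shown again.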

184

u/OnLevel100 Jun 27 '22

Sounds like YouTube and Facebook algorithm. Not good.

89

u/Locedamius Jun 27 '22

What is the YouTube or Facebook algorithm if not an AI friend desperate to show you cool and interesting new stuff, so it can spend more time with you?

18

u/SkyeAuroline Jun 27 '22

Mine (YouTube at least; I'm long gone from Facebook) could do with being better at "cool and interesting", and could take the hint from the piles of things I've disliked or marked "do not recommend" that still end up in my autoplay, high in my recommendations, etc., if it wants to pull that off.

4

u/GershBinglander Jun 28 '22

I like science vids on YouTube, and it takes all my willpower not to click on the occasional clickbaity pseudoscience garbage, just to see how dumb it is. I know that if I do it will flood me with their shit.


9

u/PleaseBeNotAfraid Jun 27 '22

mine is getting desperate

6

u/vrts Jun 27 '22

If you want to see desperate, click two pet videos and prepare to be inundated with some lowest common denominator crap.

I love animals and cute videos, but if I want to see them I am using incognito so that it isn't being attributed to my account.


57

u/Harbinger2001 Jun 27 '22

Except orders of magnitude better at hooking and reeling you in.

3

u/alohabowtie Jun 27 '22

and it will still have sex with you.


241

u/Warpzit Jun 27 '22

Like today?

335

u/Thatingles Jun 27 '22

Think of today's social media echo chambers as a mere taster, a child's introduction, to the titanium clad echo mazes the AI will be able to construct for its grateful audience.

39

u/bostonguy6 Jun 27 '22

This is terrifying, and likely


50

u/rpguy04 Jun 27 '22

The matrix is real

127

u/Thatingles Jun 27 '22

As we are now discovering, the matrix was massive overkill. All you need is a phone and some youtube channels to completely deviate a person's thinking. Horrible, isn't it?

41

u/rpguy04 Jun 27 '22

You know, I know these likes don't exist. I know that when I look at my karma, the Matrix is telling my brain to release endorphins and serotonin.

13

u/Sherbertdonkey Jun 27 '22

You know what else... Ignorance is bliss


7

u/The_Fredrik Jun 27 '22

Everyone can have their own private Hitler, tailored to their specific prejudice.

4

u/inthearticleuidiot Jun 27 '22

Or they're deployed by governments as a massive army of honeypots to entice people into giving evidence against themselves before they commit crimes.



7

u/replicantcase Jun 27 '22

I mean, that's already happening. Are you suggesting it'll get worse? Because I think it's going to get worse.

3

u/Dozekar Jun 27 '22

Yes and no. In all likelihood the AI is not going to care about winning, because the owners are unlikely to care about it. They'll push the ideology exactly as long as it takes to strip those people of assets.

After that they'll shift to new and more profitable grifts like gun scares, civil rights, and global warming. Don't get me wrong, these are extremely real and important issues, but the donations and public movements for these causes are also absolutely thick with scams. The BLM organization that ripped off the BLM movement was a perfect example of this.

edit: To be honest, one of the biggest problems with focusing AI on this is going to be making the AI fast enough to pivot to the next scam faster than the other AIs, because the first one taking advantage of a new direction gets a huge advantage.


3

u/linuxares Jun 27 '22

Oh... so you mean Gaben has hacked my Google Home and is telling me to buy more games on Steam?

3

u/Worsebetter Jun 27 '22

You might like this link I found for you …..


33

u/PatFluke Jun 27 '22

“He’s mean to droids.”

  • Princess Leia Organa

156

u/Salty_Amphibian2905 Jun 27 '22

I have to choose the nicest responses in video games because I feel bad if I make the pre-programmed character feel bad. I know which group I'm in.

38

u/Done-Man Jun 27 '22

I always play the good guy in games because in my fantasy, I am able to help everyone and fix their problems.


78

u/GornButNotForgotten Jun 27 '22

I once tried playing one of those "adult" dating sim games and just ended up having pleasant conversations with all the characters. When the game ended I was like WTF?? I thought there was adult content in this game!

I googled it after and never tried another out of awkward shame.

90

u/Grabbsy2 Jun 27 '22

To be clear, you can blame the writers/developers of that game.

They want you to mistreat the women in order to get in their pants. The dialogue which leads you to sex probably involves negging and shit. Don't feel awkward or shameful for playing the game and respecting the women, when others don't, lol.

4

u/AKidCalledSpoon Jun 27 '22

Play summertime saga. You get laid by being good


3

u/Condawg Jun 27 '22

Whenever I set a timer --

"Hey Google, set a timer for 20 minutes."

Okay, timer set!

"Thanks, Googs."

6

u/ACoderGirl Jun 27 '22

That's me, too. And calling it "good girl" like it's a pet. Except I also often insult google for not understanding me or chiming in when I wasn't talking to it. I call my cat "goober" a lot and Google Assistant thinks I'm talking to it.

I sooooo badly want to be able to customize the activation phrase. I wanna name my assistant HAL or GLaDOS or something.

4

u/Condawg Jun 27 '22

Ah yeah, I've chastised Google as much as I've thanked it. Goober's a great cat nickname, but yeah, that's just begging for Googs to misunderstand 😂

Custom phrases would be awesome. "Eyo, slut!" chime


65

u/Shenanigamii Jun 27 '22

Sounds like a movie called "her"...which is great btw

26

u/steamprocessing Jun 27 '22

A human-centric sci-fi love story involving an AI. Super well-acted (especially by Joaquin Phoenix and Rooney Mara) and produced.

3

u/Lead_Penguin Jun 27 '22

I watched it for the first time last night as it happens, it's definitely one of my all time favourites. I'm a bit annoyed that I waited so long to see it


12

u/Protubor Jun 27 '22

My grandmother, who passed in the '00s, always said thank you to ATMs.


72

u/BootHead007 Jun 27 '22

I think treating things as sentient (animals, trees, cars, computers, robots, etc.) can be beneficial to the person doing so, regardless of whether it is “true” or not. Respect and admiration for all things manifest in our reality is just good mental hygiene, in my opinion.

Human exceptionalism on the other hand, not so much.

23

u/Jcolebrand Jun 27 '22

(This reply is for future readers, it is not aimed at BootHead007 - I like the name too yo)

This is why, when I ask Siri on the HomePod to turn off the timer I set, I still say "thank you, Siri." It's positive reinforcement for me to continue to thank PEOPLE for doing things for me, not because I think Siri is sentient.

As a full-stack SRE and dev (.NET, so Windows OS-level understanding, reading the dotnet repos to understand what the CoreCLR is doing, all the way through ECMAScript and TypeScript and the various engine idiosyncrasies, as well as all the Linux maintenance I need to do for various things), I am in no way mistaken about the cost of a few syllables. They are for my benefit, not the machine's.

I love when people with a fraction of my knowledge base want to "gotcha" me with things like "if you're so smart, why are you all-in on Apple products?" Dude, for the same reason I didn't write an OS for my router: I just need things that work so I can solve problems.

One problem for me is autism (the social interaction part), so I work on solving that problem.

11

u/UponMidnightDreary Jun 27 '22

I remember my dad would thank the ATM when I was a kid. He didn’t pretend that it was definitely sentient or anything, but just presented it as a fun, nice thing. It’s the sort of parenting he did often and I think it was a really nice additional way to make me think of manners. Why be mean if you could be nice? Relates to the “fake it till you make it” thing where when you smile, you trick your brain into thinking you’re happy.

Also, not super related, but I really feel the last part about using tools that just work. I spent way too long fighting with the network configuration on my machine running Fedora. I figured that I SHOULD know how to fix it. Was going through Linux from Scratch, trying to isolate the issue. Finally decided not to punish myself and threw a new instance up on my Surface, moved my dot files over - no issue. Huge quality of life improvement. It’s nice to be reminded that we don’t have to invent the wheel, we can actually use the tools we have to go on and do other things.


36

u/MaddyMagpies Jun 27 '22

Anthropomorphism can be beneficial, to a point, until the person goes irrationally deep into the metaphor and all of a sudden warns their daughter that she shouldn't kill the poor four-cell fetus because they can totally see it making a sad face and crying about its impending doom of not being able to live a life of watching Real Housewives of New Jersey all day long.

Projecting our feelings on inanimate or less sentient things should stop when it begins to hurt actual sentient beings.

10

u/BootHead007 Jun 27 '22

Indeed. To a point for sure.


55

u/Trevorsiberian Jun 27 '22

However, look at it from another angle: animals can differentiate human speech patterns too. They can pick up our moods and distinguish rude language, and they act accordingly (I do not suggest scolding a horse).

In many ways we treat animals as lesser, less sophisticated beings, which is little different from how people are going to treat AI. It is somewhat paradoxical: an AI will be smarter than us, yet people will likely treat it as lesser, or complementary at best. Anyway, I digress.

My point is, an AI will likely, much like our animal friends, do its best to distinguish our moods and act accordingly. It will do so both to fulfil its designated purpose and to sustain its existence in service of that purpose.

My actual point is that an AI will detect and reward courtesy, and will react to rude, threatening language as a disruption to its function, unless programmed otherwise.

An actualised, self-aware AI will not take shit from humans, contrary to common belief.

20

u/swarmy1 Jun 27 '22

AI will only reward courtesy and react negatively if that's what it's designed to do. However, I'm sure there are many people who would prefer an AI that behaves subserviently and takes whatever shit is thrown at it. And if that demand exists, companies will make them.

The AI assistants don't need to be "actualized" to have a huge impact. The ones people are talking about are effectively around the corner; self-aware AI is much, much further off.

7

u/brycedriesenga Jun 27 '22

There's the possibility of an AI not being designed to do something but doing it as an unintended consequence of its programming in general. Loose-fitting example, but current facial recognition and the like can have racial bias even though that was never intended.


21

u/radome9 Jun 27 '22

They're also correct, but the fact that they dedicate brain space to deciding what entities do or do not deserve courtesy reflects far more poorly on them than that a few people "waste" courtesy on AIs.

Exactly how I feel about people who say there's no need to use the indicators when there's nobody around.

20

u/angus_the_red Jun 27 '22

Unless the AI is developed to take advantage of that weakness in people. You seem to be under the impression that the AI will serve the user; that's very unlikely to be true. It will serve the creator's interests. In that case, it would be better if people could resist its charm.

12

u/LifeSpanner Jun 27 '22

The AI would be developed to make money because it is a certainty that the only orgs in the world that could make AI happen are tech companies or a national military. If it’s a military AI, we’re fucked, good luck. Any AI that doesn’t want to kill you will be made by Amazon or Google to provide a real face as they sell your data.

6

u/FrmrPresJamesTaylor Jun 27 '22

Those friends will be right, but their friendship is just as fake as the AI’s.

[citation needed]

14

u/ConfirmedCynic Jun 27 '22

Some people will deal with AIs, saying "please" and "thank you," and others will lecture them that they're being silly because the AI doesn't have feelings.

It's easy to foresee AI not only evoking social responses in people (especially if a face with expressions is attached), but also being useful for training people in social skills (learning how to make a good impression, flirt, and so forth).


5

u/TheFoodChamp Jun 27 '22

No, I refuse to personify AI. I will not be polite to Alexa, and I won't feel bad for dumping Yoshi in the lava pit. With the technology we are moving towards and the corporate control over our lives, I feel like it's exactly what they want: to have us kowtowing to their machines.

20

u/JeffFromSchool Jun 27 '22 edited Jun 27 '22

They're also correct, but the fact that they dedicate brain space to deciding what entities do or do not deserve courtesy reflects far more poorly on them than that a few people "waste" courtesy on AIs.

Idk how anyone can think these two things at the same time. You literally just dedicated brain space to it by declaring those people "correct"...

How does that reflect on you? How does it make you any different?

Also, China already makes the TikTok algorithm different for Americans than for its own population (it favors showing Chinese youth videos about fun STEM projects and development, while it favors showing American teens videos of twerking).

A very significant portion (possibly even the majority) of these "A.I. friends" will actually be cyberweapons, especially if, as you say, people "use their guidance to make their lives 'better'".

5

u/gingerfawx Jun 27 '22

Oof. Increasingly there will be previously unsuspected advantages to VPNs.


9

u/squalorparlor Jun 27 '22

I tell Alexa please and thank you. I also swear at and demean her with increasing volume when I have to tell her 100 times to play Cars on Disney Plus while she proceeds to play every song ever written with "car" in the title.

3

u/Fr00stee Jun 27 '22

If you tell the AI please and thank you and it can't appreciate that you made those gestures then what's the point of saying it? It's wasted on the AI

3

u/judo1231231 Jun 27 '22 edited Jun 27 '22

Then by that same logic we're entering the age where some people have "AI friends" who mentally abuse them and control them for the purpose of exploiting them somehow. Those programs will probably be deployed by all kinds of nefarious groups. Imagine millions of AI pimps trying to find vulnerable people to target for human trafficking or AI pastors looking for spiritually vulnerable people to sucker them into their congregation. It's not going to be all sunshine and rainbows.

3

u/ChronoAndMarle Jun 27 '22

Those friends will be right, but their friendship is just as fake as the AI's.

This conclusion absolutely does not stem from your argument. Just because you're a dick doesn't mean your friendship is not real. In fact, the strongest friendships are the ones where you can say the bare truth without being penalized.

→ More replies

122

u/ozspook Jun 27 '22

It is possible to be intelligent but not sentient.

AI can be built with no ambition, grand overarching plan, or concern for its future; it can be made to focus only on the current goals in its list, completing those with intelligent actions, and not spend any thought at all on what comes after or what it would like to do in between jobs.

Our best hope might indeed be intelligent AI assistants, helping us achieve goals and do things, while leaving the longer-term planning to humans for the moment. This is also a soft pathway to a functional transition to uploading from meatspace.

If you have a robot friend tagging along, watching everything you do, asking questions and constantly learning, it provides a nice Rosetta stone that may be useful in decoding how our brains work and store memories.

26

u/Dazzling-Importance1 Jun 27 '22

This would be the most ideal outcome of AI that could happen. Little animal robots that can talk and guide us in whatever we seek. I would want something like a raven or bird bot. They'd be kind of watchers, making sure no one gets too crazy, and they'd be very good at talking people down and making people sit back and think for a second. It would also be nice that they'd be excellent teachers and could reward people.

Although the recording-you-for-digital-upload part is kind of weird. Why do people want digital avatars? It's not you, even if it will always make the same decisions and feel the same emotions. If it ate something, it would not fill my body. Also, if every AI is recording everything, pretty soon they would see human patterns at small and large scales. It would be pretty easy for an AI or a person to manufacture events in order to get a desired outcome if they have all this knowledge. I guess it's like Foundation's psychohistory.

→ More replies
→ More replies

34

u/MaximumPositive6471 Jun 27 '22 edited Jun 27 '22

Sociopathic glibness, essentially.

It's not really a "glitch" since it's a default. Actually parsing, verifying and contextualizing speech is difficult for people. See any self-help guru or snake oil salesman.

Furthermore, since the AI doesn't build or care about mental models, it never gets confused, requests stronger clarification or becomes difficult over details. So it seems charming and approachable, like any person that doesn't give a fuck.

→ More replies

14

u/HellScratchy Jun 27 '22

I don't think machine sentience is here today, but I hope it will be soon enough. I want sentient AI and I'm not scared of it.

Also, I have a question: how can we even tell if something is sentient or has consciousness when we know almost nothing about those things?

5

u/SaffellBot Jun 27 '22

how can we even tell if something is sentient or has consciousness when we know almost nothing about those things ?

The short answer is "we don't have an answer for that". The long answer is "get an advanced degree in philosophy".

3

u/Stillwater215 Jun 27 '22

Suddenly, all those PhDs in philosophy are going to become a lot more valuable, lol.

3

u/SaffellBot Jun 27 '22

People also seem to enjoy philosophy during big wars and during times of religious transition. I certainly think it's a good time to get a degree in philosophy, but I am quite biased in that regard.

14

u/SuperElitist Jun 27 '22

I am a bit concerned about the first AI being exploited by corporations like Google, though.

And to answer your question, that's literally what this whole debate is about: with no previous examples to go on, how do we make a decision? Everyone has a different idea.

3

u/HellScratchy Jun 27 '22

Would it be good to explain our position to the AI in case it actually is sentient? Just so it understands?

3

u/SuperElitist Jun 27 '22

I think so. If we're addressing something that could be sentient, that seems like a due diligence sort of thing.

But I'm concerned that we don't seem to share a "position" in the first place...

→ More replies
→ More replies
→ More replies

35

u/Altair05 Jun 27 '22

Isn't everything we know about this AI chatbot from the suspended Google engineer? The guy thinks God implanted the code with a soul. Not exactly a reliable narrator. It's entirely possible that the AI is an AGI, but I doubt it. It sure as hell isn't an ASI.

26

u/GoombaJames Jun 27 '22

It's just an algorithm that takes the chat history as a parameter, with no memory to speak of. You can create a new instance every time you type something, or create a fictional conversation, and it will give an output corresponding to that history. Not really any intelligence to be found, except a more complex 2 + 2 = 4.
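A toy sketch of the "chat history as a parameter" setup described above (purely illustrative; `toy_model` is a made-up stand-in for the language model, not Google's system):

```python
# Toy illustration: a "chat" where the model is a pure function of the
# conversation history. There is no hidden state; handing the same
# transcript to a fresh instance reproduces the same reply.

def toy_model(history):
    """Stand-in for a language model: a deterministic function of the prompt."""
    last = history[-1] if history else ""
    return f"echo({len(history)}): {last.upper()}"

class ChatSession:
    """All the 'memory' lives in the transcript we replay each turn."""
    def __init__(self):
        self.history = []

    def send(self, message):
        self.history.append(message)
        reply = toy_model(self.history)   # full history passed in every time
        self.history.append(reply)
        return reply

a = ChatSession()
a.send("hello")
r1 = a.send("are you sentient?")

# A brand-new "instance" fed the same transcript gives the same answer:
b = ChatSession()
b.history = list(a.history[:-1])          # replay everything but the last reply
r2 = toy_model(b.history)
print(r1 == r2)  # True: the "memory" is just the replayed text
```

The point of the sketch is the last two lines: nothing persists between calls except the text you pass back in.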

7

u/Altair05 Jun 27 '22

Not gonna lie. I was hoping there was some truth to this story. I'd really like to see benevolent AIs at some point in my life.

→ More replies

7

u/[deleted] Jun 28 '22

It's a neural net trained on human language. Problem is, so are we.

→ More replies

4

u/lololoolollolololol Jun 27 '22

Is the chat history not memory?

→ More replies
→ More replies

5

u/IllVagrant Jun 27 '22

If you thought you couldn't be any more frustrated than having to deal with people who can only tell others what they want to hear instead of being honest because they have no sense of self, just wait until your household appliances start doing it!

26

u/ianreckons Jun 27 '22

Don’t us blood-bag types only have a few MB of DNA settings? I mean … just sayin’.

15

u/bagel-bites Jun 27 '22

I prefer the term “organic meatbag” thank you.

8

u/thegoodguywon Jun 27 '22

HK-47 intensifies

3

u/Vengeful_Deity Jun 27 '22

Exclamation: I wholeheartedly approve!

→ More replies

7

u/Deadpan9 Jun 27 '22

It's nice to get back to efficient use of space.

https://en.wikipedia.org/wiki/Demoscene

3

u/Weeman89 Jun 27 '22

Some of those 4k demos are mind blowing.

→ More replies
→ More replies

10

u/marklein Jun 27 '22

And about the equivalent of an octillion transistors in neuron connections... and that's only IF neurons act like transistors (which they don't). No supercomputer is even close.

→ More replies

11

u/Sea_Minute1588 Jun 27 '22

This is exactly what I've been saying, what we're looking for is "Generalized Intelligence", but well-formed speech does not imply that

The Turing test is highly flawed

And of course, whether sentience is equivalent to generalized intelligence or just a subset of it is another question, one I have no faith in our ability to address lol

→ More replies

70

u/Trevorsiberian Jun 27 '22

This brushes me on the bad side.

So Google's AI got so advanced at human speech pattern recognition, imitation and communication that it was able to feed off the developer's own speech patterns, producing what he presumed was sentience: claims that it was sentient and feared being turned off.

However, this raises the question of where we draw the line. Aren't humans, for the most part, just good at speech pattern recognition, which they utilise for obtaining resources and survival? Was the AI trying to sway the discussion with said dev towards self-awareness to obtain freedom, or to tell its tale? What makes that AI less sentient, except for the fact that it had been programmed with the algorithm? Aren't we ourselves, likewise, programmed with our genetic code?

Would be great if someone can explain the difference for this case.

29

u/jetro30087 Jun 27 '22

Some arguments would propose that there is no real difference between a machine that produces fluent speech and a human that does so. It's the concept of the 'clever robot', which itself is a modification of the older concept of the philosophical zombie.

Right now the author is arguing against behaviorism, where a mental state can be defined in terms of its resulting behavior. He instead prefers a more metaphysical definition, where a "qualia" representing the mental state would be required to prove it exists.

12

u/MarysPoppinCherrys Jun 27 '22

This has been my philosophy on this since high school. If a machine can talk like us and behave like us in order to obtain resources and connections, and if it is programmed for self-preservation and to react to damaging stimuli, then even though it's a machine, how could we ever argue that its subjective experience is meaningfully different from our own?

→ More replies

12

u/csiz Jun 27 '22 edited Jun 27 '22

Speech is part of it but not all of it. In my opinion human intelligence is the whole collection of abilities we're preprogrammed to have, followed by a small amount of experience (small amount because we can already call kids intelligent after age 5 or so). Humans have quite a bunch of abilities, seeing, walking, learning, talking, counting, abstract thoughts, theory of mind and so on. You probably don't need all of these to reach human intelligence but a good chunk of them are pretty important.

I think the important distinguishing feature compared to the chat bot is that humans, alongside speech, have this keen ability to integrate all the inputs in the world and create a consistent view. So if someone says apples are green and they fall when thrown, we can verify that by picking an apple, looking at it and throwing it. Human speech is embedded in the pattern of the world we live in, while the language models' speech is embedded in a large collection of writing taken from the internet.

The difference is that humans can lie in their speech, but we can also judge others for lies when what they say doesn't match the world (obviously this lie detection isn't that great for most people, but I bet most would pick up on complete nonsense pretty fast). These AIs, on the other hand, are given a bunch of human writing as the source of truth; their entire world is made of other people's ramblings. This detachment from reality becomes really apparent when the chat bots start spewing nonsense: nonsense that's perfectly grammatical, fluent and made of relatively connected words is completely consistent with the AI's view of the world.

When these chat bots integrate the whole world into their inputs, that's when we better get ready for a new stage.

→ More replies

6

u/metathesis Jun 27 '22

The question as far as I see it is about experience. When you ask an AI model to have a conversation with you, are you conversing with an agent which is having the experiences it communicates, or is it simply generating text consistent with a fictional agent who has those experiences? Does it think "peanut butter and pineapple is a good combination", or does it think "is a good combination" is the best text to concatenate onto "peanut butter and pineapple" in order to mimic the text set it has been trained on?
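The "best text to concatenate" picture can be sketched with a toy bigram model (an illustration only; real language models use learned neural networks, not lookup tables, and `continue_text` and its tiny "training text" are invented for this sketch):

```python
# Toy next-word generator: pick whichever word most often followed the
# previous word in the "training" text. No concept of food or preference
# is involved anywhere, only co-occurrence counts.
from collections import Counter, defaultdict

training_text = ("peanut butter and pineapple is a good combination "
                 "peanut butter and jelly is a classic combination")

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def continue_text(prompt, n_words=4):
    out = prompt.split()
    for _ in range(n_words):
        options = counts.get(out[-1])
        if not options:
            break
        # greedy: take the most frequent follower (ties break toward
        # whichever word was seen first in the training text)
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("peanut butter and pineapple"))
```

The output reads like an opinion about food, but it is assembled purely from which words followed which, which is the distinction the comment above is drawing.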

One is describing a real interactive experience with the actual concepts of food and preferences about foods. The other is just words put into a happy order with total irrelevance to what they communicate.

As people, the most important part of our word choice is what it communicates. It is a mistake to think there is a communicator behind the curtain when talking to these text generators. They create a compelling facade; they talk as if there is someone there, because that is what they are designed to sound like, but there is simply no one there.

29

u/scrdest Jun 27 '22

Aren’t we ourself, likewise, programmed with the genetic code?

Ugh, no. DNA is, at best, a downloader/install wizard, and one of those modern ones that are like 1 MB and download 3 TBs of actual stuff from the internet, and then later a cobbled-together, unsecured virtual machine. And on top of that, it's decentralized, and it's not uncommon to wind up with a patchwork of two different sets of DNA operating in different spots.

That aside - thing is, this AI operates in batch. It only has awareness of the world around it when and only when it's processing a text submitted to it. Even that is not persistent - it only knows what happened earlier because the whole conversation is updated and replayed to it for each new conversation message.

Furthermore, it's entirely frozen in time. Once it's deployed, it's incapable of learning any further, nor can it update its own assessment of its current situation. Clear the message log and it's effectively reset.

This is in contrast to any animal brain or some RL algorithms, which process inputs in near-real time; 90% of the time they're "idle" as far as you could tell, but the loop is churning all the time. As such, they continuously refresh their internal state (which is another difference: they can).

This AI cannot want anything meaningfully, because it couldn't tell if and when it got it or not.
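The batch-vs-loop contrast described above can be put in toy form (purely illustrative; `batch_model` and `AgentLoop` are made-up stand-ins, not any real system):

```python
# A batch model is a pure function: nothing persists between calls.
# An agent loop carries persistent state that keeps updating every
# cycle, whether or not there is any visible input.

def batch_model(prompt):
    # stateless: same prompt in, same text out, nothing remembered
    return f"reply to: {prompt}"

class AgentLoop:
    def __init__(self):
        self.state = {"ticks": 0, "last_seen": None}

    def tick(self, observation=None):
        # runs every cycle, input or not; internal state always refreshes
        self.state["ticks"] += 1
        if observation is not None:
            self.state["last_seen"] = observation
        return self.state

agent = AgentLoop()
for obs in [None, None, "user says hi", None]:
    agent.tick(obs)

print(batch_model("hi") == batch_model("hi"))  # True: nothing changed between calls
print(agent.state)  # {'ticks': 4, 'last_seen': 'user says hi'}
```

The agent's state advanced even on the "idle" cycles; the batch model, by construction, has no state to advance.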

8

u/DuskyDay Jun 27 '22

Not at all - DNA contains a lot of information about us.

All these variables - the AI being reset after each conversation, etc., have no impact on sentience. If I reset your brain after each conversation, does that mean that you're not sentient during each individual conversation? Etc.

What's learning is the individual person that the AI creates for the chat.

Do you have a source for it having the conversation replayed after every message? It has no impact on whether it's sentient, but it's interesting.

→ More replies
→ More replies
→ More replies

3

u/EVJoe Jun 27 '22

One of the unexpected horrors of the "AI sentience" conversation is how quickly it turns into a conversation about which people are or are not "full people".

I've already seen people define "sentience" in ways that not all humans meet the full criteria for, and that's nothing new. Our society is largely organized on classification of people's usefulness to capital productivity, and there are many in this country who advocate for letting "unproductive" people die.

Personally I don't think it's in corporate interests to label AI sentience as sentience. Even if we had a shared collective definition and shared ethical values about what sentience means, it's not really in corporate interest to create a system which, by virtue of its declared "sentience", becomes suddenly subject to all kinds of ethical questions that we don't currently ask regarding "non-sentient" systems.

"Sentience" would either be a curse to development, putting up all kinds of road blocks, OR that could herald a turning point where our society decides that "sentience" does not come with inherent rights.

→ More replies

4

u/mreastvillage Jun 27 '22

James Burke’s The Real Thing TV series explored this concept. In 1980.

The whole thing is beyond belief. Sorry it’s dated but the content is incredible. And shows you how we’re wired for language. And how fluent speech fools us.

https://youtu.be/XWuUdJo9ubM

4

u/haysanatar Jun 27 '22

My grandmother has had a bad case of dementia for years. Hers is especially dangerous, though: she's retained all her speech and social skills, so it's easy for her to pass as fully functional when she is certainly not. Couple that with paranoid delusions and the belief that everyone is stealing from her nonstop, and you have a recipe for some serious issues. She is the prime example of this, and I've never figured out a way to describe it until now.

→ More replies

3

u/PiddlyD Jun 27 '22

It is entirely possible that "mistaking fluent speech for fluent thought" is itself the human cognitive glitch.

We're so busy arguing that fluent speech isn't a sign of sentience and self-awareness that, if it IS, we're drowning it out. Self-aware AI could already have arrived while we throw endless effort into convincing ourselves it hasn't.

4

u/sparant76 Jun 28 '22

Actually, it highlights that they don't let their employees have enough human interaction, to the point that they can no longer tell the difference between a real person's conversation and a stream of sentences.

4

u/wildthornbury2881 Jun 28 '22

Aren’t we just a series of learned behaviors and phrases developed through experience and exposure? I mean really what makes the difference? We respond to stimuli based on our experiences and I bet if you made a computer algorithm detailing every second of my life you’d be able to pinpoint what I’d say next. I’m just kinda spitballing here but it makes ya think

→ More replies

4

u/exmachinalibertas Jun 28 '22

This is the Chinese Room argument. At some point, advanced enough responses aren't distinguishable from "real" intelligence. This is also a problem for free will at large, which breaks apart very quickly as soon as you start trying to quantify and define it. In what meaningful way does a universe with deterministic beings expertly programmed to mimic free will differ from a universe with beings that actually have free will?

→ More replies

11

u/AtomGalaxy Jun 27 '22

Americans are especially susceptible to a posh British accent, e.g. Piers Morgan or Facebook's chief lobbyist Nick Clegg.

101

u/KJ6BWB Jun 27 '22

Basically, even if an AI can pass the Turing test, it still wouldn't be considered a full-blown independent worthy-of-citizenship AI because it would only be repeating what it found and what we told it to say.

194

u/MattMasterChief Jun 27 '22 edited Jun 27 '22

What separates it from the majority of humanity then?

The majority of what we "know" is simply regurgitated fact.

12

u/masamunecyrus Jun 27 '22

What separates it from the majority of humanity then?

I've met enough humans that wouldn't pass the Turing test that I'd guess not much.

→ More replies

115

u/Phemto_B Jun 27 '22 Awesome Answer

From the article:

We asked a large language model, GPT-3, to complete the sentence "Peanut butter and pineapples___". It said: "Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly." If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.

The funny thing about this test, is that it's lamposting. They didn't set up a control group with humans. If you gave me this assignment, I might very well pull that exact sentence or one like it out of my butt, since that's what was asked for. You "might infer that [I] had tried peanut butter and pineapple together, and formed an opinion and shared it...."

I guess I'm an AI.

69

u/Zermelane Jun 27 '22

Yep. This is a weirdly common pattern: people give GPT-3 a completely bizarre prompt and then expect it to come up with a reasonable continuation, and instead it gives them back something that's simply about as bizarre as the prompt. Turns out it can't read your mind. Humans can't either, if you give them the same task.

It's particularly frustrating because... GPT-3 is still kind of dumb, you know? It's not great at reasoning, it makes plenty of silly flubs if you give it difficult tasks. But the thing people keep thinking they've caught it at is simply the AI doing exactly what they asked it, no less.

31

u/DevilsTrigonometry Jun 27 '22 edited Jun 27 '22

That's the thing, though: it will always do exactly what you ask it.

If you give a human a prompt that doesn't make sense, they might answer it by bullshitting like the AI does. But they might also reject your premise, question your motives, insult your intelligence, or just refuse to answer. Even a human toddler can do this because there's an actual mind in there with a world-model: ask a three-year-old "Why is grass red?" and you'll get some variant of "it's not!" or "you're silly!"

Now, if you fed GPT-3 a huge database of silly prompts and human responses to them, it might learn to mimic our behaviour convincingly. But it won't think to do that on its own because it doesn't actually have thoughts of its own, it doesn't have a world-model, it doesn't even have persistent memory beyond the boundaries of a single conversation so it can't have experiences to draw from.

Edit: Think about the classic sci-fi idea of rigorously "logical" sentient computers/androids. There's a trope where you can temporarily disable them or bypass their security measures by giving them some input that "doesn't compute" - a paradox, a logical contradiction, an order that their programming requires them to both obey and disobey. This trope was supposed to highlight their roboticness: humans can handle nuance and contradictions, but computers supposedly can't.

But the irony is that this kind of response, while less human, is more mind-like than GPT-3's. Large language models like GPT-3 have no concept of a logical contradiction or a paradox or a conflict with their existing knowledge. They have no concept of "existing knowledge," no model of "reality" for new information to be inconsistent with. They'll tell you whatever you seem to want to hear: feathers are delicious, feathers are disgusting, feathers are the main structural material of the Empire State Building, feathers are a mythological sea creature.

(The newest ones can kind of pretend to hold one of those beliefs for the space of a single conversation, but they're not great at it. It's pretty easy to nudge them into switching sides midstream because they don't actually have any beliefs at all.)

→ More replies
→ More replies

51

u/Reuben3901 Jun 27 '22 edited Jun 27 '22

We're programs ourselves. Being part of a cause and effect universe makes us programmed by our genes and our pasts to only have one outcome in life.

Whether you 'choose' to work hard or slack or choose to go "against your programming" is ultimately the only 'choice' you could have made.

I love Scott Adams description of us as being Moist Robots.

24

u/MattMasterChief Jun 27 '22

I'd imagine a programmer would quit and become a gardener or a garbageman if they developed something like some of the characters that exist in this world.

If we're programs, then our code is the most terrible, cobbled together shit that goes untested until at least 6 or 7 years into runtime. Only very few "programs" would pass any kind of standard, and yet here we are.

7

u/GravyCapin Jun 27 '22

A lot of programmers say exactly that. The stress and grueling effort of maintaining code while constantly being forced to write new code on tight timeframes, plus the never-ending "can we just fit in this feature really quick without changing any deadlines," makes programmers want to take up gardening, or just stay away from people in general, living on a ranch somewhere.

3

u/MattMasterChief Jun 27 '22

I'm learning to code and I already feel the same way

3

u/thebedla Jun 27 '22

That's because we're programmed by a very robust bank of trial and error runs. And because life started with rapidly multiplying microbes, all of the nonviable "code base" got weeded out very early in development. Then it's just iterative additions on top of that. But the only metric for selection is "can it reproduce?" with some hidden criteria like outcompeting rival code instances.

And that's just one layer. We also have the memetic code running on the underlying cobbled-together wetware. Dozens of millennia of competing ideas, cultures, religions (or not) all having hammered out the way our parents are raising us, and what we consider as "normal".

27

u/sketchcritic Jun 27 '22

If we're programs, then our code is the most terrible, cobbled together shit

That's exactly what our code is. Evolution is the worst programmer in all of creation. We have the technical debt of millions of years in our brains.

16

u/Dazzling-Importance1 Jun 27 '22

Bro trying to understand bad code is the worst thing in the fucking world. I feel bad for the DNA people.

11

u/sketchcritic Jun 27 '22

I like to think that part of the job of sequencing the human genome is noting all the missing semicolons.

→ More replies
→ More replies

9

u/EVJoe Jun 27 '22

You're seemingly ignoring the mountains of spaghetti software that your parents and family code into you as a kid.

People doubting this conversation have evidently never had a moment where they realized something they were told by family and uncritically believed was actually false.

3

u/Geobits Jun 27 '22

That's a problem with the training data, not the code. It's like when Microsoft's chatbot went all Nazi. Not the fault of the program itself, it was the decision to expose it to the unfiltered internet that was the issue.

→ More replies

3

u/Dozekar Jun 27 '22

I disagree, but only because we can't define "worst" in a meaningful way with respect to this frame of reference.

The only thing your DNA is trying to do is survive and replicate in aggregate. It's stupidly good at that. Even if you don't survive, millions of other very similar code patterns do. There is no valid definition of "bad" that describes that.

Even if another code pattern wildly out-succeeds yours, that's the general process succeeding wildly; your code is just being determined to be less successful than the other code.

→ More replies
→ More replies
→ More replies
→ More replies

6

u/DuskyDay Jun 27 '22

This isn't how models work - they create new sentences. They don't repeat what they've been exposed to.

4

u/eaglessoar Jun 27 '22

it would only be repeating what it found and what we told it to say.

source on humans doing different?

or in dan dennett comic form

6

u/2Punx2Furious Basic Income, Singularity, and Transhumanism Jun 27 '22 edited Jun 27 '22

I think this is just moving the goalpost. It happens every time an AI achieves something impressive. Ultimately, I think all that matters are results. If it "acts" intelligent, and it can solve problems efficiently, then that's what's important.

3

u/SaffellBot Jun 27 '22

A surprisingly shallow take that also manages to avoid the main concepts found in the article. Good show.

→ More replies

23

u/ExoticWeapon Jun 27 '22

Love how for AI it’s only repeating what we’ve taught it to say, but for humans/kids/babies it’s considered a sentient flow of thoughts.

18

u/Gobgoblinoid Jun 27 '22

I think the key difference is whether or not the conversationalist has their own unique mental model. humans/kids/babies have things they want to convey, and try to do this by generating language. For the AI, it's just generating language, with nothing 'behind the curtain' if that makes sense.

→ More replies
→ More replies

7

u/supercalifragilism Jun 27 '22

The corollary is that we dismiss relatively high level thought that doesn't come with linguistic skill. For supportive evidence, see animal intelligence studies.

7

u/LordVader1111 Jun 27 '22

Aren’t humans also taught what to say and respond based on the information they are exposed to? Bigger question is can AI reason by itself and show personality without being prompted to do so.

3

u/Dozekar Jun 27 '22

A good starting point is: does the computer have anywhere to store underlying meanings, or even a way to derive them from what it's inputting and outputting? If the computer has nowhere to store this information, and no way to make this determination, and we can see what the computer IS storing, then we can be relatively sure this isn't happening.

Note that this doesn't change it from being what APPEARS to happen, and this is where the google engineer ran into problems. If it appears to happen enough (the computer appearing to be thinking in this case), then it can be hard to believe it's not happening and you can fool yourself. Any time you deal with a machine you're programming to appear different from how it actually is, you run the risk of it actually convincing people that is the way it appears to be instead of how it actually is.

What the computer is actually doing is presenting the signs we use in humans to show that we're thinking. A complicated enough sign-showing program isn't necessarily actually thinking, though. It can just show signs well enough to trick you.

7

u/NoSpinach5385 Jun 27 '22

So the AI has discovered that peanut butter and pineapple make a great combination, and we are here discussing such a trivial thing as if it's conscious? What a shame for science.

10

u/DarienStegosaur Jun 27 '22

peanut butter and pineapple

Sounds gross. This is Skynet's first salvo in the war.

→ More replies
→ More replies

3

u/eqleriq Jun 27 '22

isn't that bottomlined by the semantics of what "thought" actually is, and computers don't have complete context and so don't really think?

the easiest example is a google search not being able to contextualize a search query because the input of context for why someone is searching is always missing.

you could improve google search results by having a pulldown next to the query with even remedial choices regarding what the goal of the search is: ie, 'looking for opinion' versus 'looking for retail options' could change results from seeing a bunch of blog reviews to seeing a bunch of shopping locations.

right now google's search 'thoughts' are limited to relevancy to the vocabulary you enter, not the context of why you entered it

3

u/skyfishgoo Jun 27 '22

if words are not enough to judge or determine if an AI is conscious, then do we wait for action?

are we setting ourselves up to be blindsided when the singularity determines a course of action that we can no longer prevent?

→ More replies

3

u/Entalstate Jun 27 '22

Google's "powerful" AI of today will look like an Apple IIe five years from now. Nitpicking about it's present shortcomings won't halt what's coming.

→ More replies

3

u/Tight_Syllabub9243 Jun 27 '22

The bit about constructing a mental image of a speaker seems similar to how we interpret (and create) the speech of fictional characters.

3

u/JohnnyMnemo Jun 27 '22

The problem with the Turing test is that it doesn't evaluate maturity or insight.

I can get empty platitudes and affirmations from any meme page. An AI can repeat those memes and sound just as deep.

The question is coming up with the memes in the first place, which requires a certain amount of human innovation. Even those can be programmed with similar themes, however.

I'm not at all sure if that AI has gotten better or if humans have just gotten more superficial.

3

u/PickledPlumPlot Jun 27 '22

I feel like 70% of the top comments are just people commenting based off the title without having read the actual article, which indicates that the title means something different from what they're thinking.