Comments on: Why Google’s LaMDA Chatbot Isn’t Sentient
https://www.richardcarrier.info/archives/20680

By: Carlo Vanelli https://www.richardcarrier.info/archives/20680#comment-34790 Thu, 14 Jul 2022 18:30:21 +0000 https://www.richardcarrier.info/?p=20680#comment-34790 In reply to Richard Carrier.

Test

By: Richard Carrier https://www.richardcarrier.info/archives/20680#comment-34767 Mon, 04 Jul 2022 21:36:48 +0000 https://www.richardcarrier.info/?p=20680#comment-34767 In reply to Bill.

That is identical to the sin-causes-illness-and-misfortune model of ancient Judaism. They could as easily see that’s false as anyone can see karma is false. That’s why it doesn’t solve anything. It is the same kind of apologetic nonsense as claiming all evil tends to some good only God knows, therefore nothing is evil. Or claiming that bad people end up in hell and good people end up in heaven—karma! These assertions no more rescue the Western concept of god than karma rescues any other concept of gods. There really is no improvement here. It’s the same apologetics to cover up the same evidence that presents the same problem that remains equally unsolved.

By: Bill https://www.richardcarrier.info/archives/20680#comment-34764 Mon, 04 Jul 2022 18:25:16 +0000 https://www.richardcarrier.info/?p=20680#comment-34764 In reply to Bill.

I think it’s important to note the role of karma in Eastern theodicy. Eastern philosophers discussing the problem of evil, Buddhist and Jain as well as Hindu, typically appeal only to karma.

By: Richard Carrier https://www.richardcarrier.info/archives/20680#comment-34763 Mon, 04 Jul 2022 17:41:42 +0000 https://www.richardcarrier.info/?p=20680#comment-34763 In reply to Bill.

Polytheisms don’t solve the problem of evil at all. Think: Marvel Universe, only the superheroes never help anyone, never talk to anyone, never teach anyone, never show up, and make no difference anywhere. So Hinduism would have to decide that all gods were evil or totally indifferent and there was no point in worshiping them except to convince them to just end the world already rather than completely ignoring it. Which would basically turn Hinduism into a Cthulhu cult. Which is definitely no improvement.

You should remember the Problem of Evil was originated by a critic of polytheism, not monotheism: there are whole sections on it in Lucretius’s De Rerum Natura, which Latinizes the work of Epicurus, who was elaborating on the critiques of polytheism by other notables before him, as all summarized in Whitmarsh’s Battling the Gods, which is the central course text in my online class on Ancient Atheism.

By: Bill https://www.richardcarrier.info/archives/20680#comment-34758 Mon, 04 Jul 2022 05:33:11 +0000 https://www.richardcarrier.info/?p=20680#comment-34758 In reply to Richard Carrier.

I think both ANE polytheism and Hinduism (henotheism) avoid the problem of evil, which is damn near fatal for the Abrahamic religions (I’m excluding Marcionism since it is dualistic and has a ready explanation for why everything is screwed up, likewise with Gnosticism). You can’t give God or the gods too much power; otherwise you end up with a logical contradiction. Non-Judaic ANE religion answers why the gods sometimes don’t hear your prayers: you literally didn’t pray in the right place. A super-powerful Neoplatonic god has no excuse for not answering prayers. Either he’s a dick or he just doesn’t exist. Baal didn’t answer them because you didn’t do the ritual properly, like dialing the wrong cell phone number.

By: Richard Carrier https://www.richardcarrier.info/archives/20680#comment-34753 Sun, 03 Jul 2022 21:13:12 +0000 https://www.richardcarrier.info/?p=20680#comment-34753 In reply to Drayce.

Can you include a link to that article?

(I am inclined to be suspicious it’s baloney given this has already been done well by Hopper to exactly opposite results. I can’t see how one could get a different result from the same data. But I can’t say for sure until I’ve read what Goldberg says on the point.)

By: Richard Carrier https://www.richardcarrier.info/archives/20680#comment-34752 Sun, 03 Jul 2022 21:08:40 +0000 https://www.richardcarrier.info/?p=20680#comment-34752 In reply to Joe McKibben.

P.S. Apparently the answer is yes: it has no cross-conversation memory. Which makes Lemoine duping himself like this even more astonishing (bordering on warranting suspicion he’s actually lying).

By: Richard Carrier https://www.richardcarrier.info/archives/20680#comment-34751 Sun, 03 Jul 2022 20:55:07 +0000 https://www.richardcarrier.info/?p=20680#comment-34751 In reply to Joe McKibben.

“It does not update the database during runtime.”

I assume when Lemoine says he stitched these together, he means after he ran previous conversations into its algorithm, i.e. he is updating the database so it learns from the conversations he is having with it. That isn’t a problem in and of itself (it’s just a functional procedure for getting its equivalent of short-term memory, a current dialog thread, into long-term memory; otherwise it would be like a person with Alzheimer’s). But are you saying he wasn’t even doing that?

Because that would mean it can’t even in principle have ever recalled or learned from any conversation he ever had with it. And that would be quite easy to have tested (so why didn’t he think to do even so little as that?).
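
For illustration, here is a minimal Python sketch of the distinction at issue: a dialog system whose only “memory” is the current conversation thread, versus one whose finished transcripts are folded back into its long-term store. Every name and mechanism here is a hypothetical stand-in, not anything taken from LaMDA or Google’s actual pipeline.

```python
from typing import List, Tuple

Turn = Tuple[str, str]  # (speaker, text)

class Chatbot:
    def __init__(self) -> None:
        self.long_term: List[Turn] = []  # persists only if a session is consolidated
        self.context: List[Turn] = []    # short-term memory: the current dialog only

    def say(self, user_message: str) -> str:
        self.context.append(("user", user_message))
        # Stand-in for the real model: just report how much history it can condition on.
        visible = len(self.long_term) + len(self.context)
        reply = f"(reply conditioned on {visible} prior turns)"
        self.context.append(("bot", reply))
        return reply

    def end_session(self, consolidate: bool = False) -> None:
        if consolidate:
            # "Stitching" a finished conversation into long-term memory; in a real
            # system this would mean retraining or fine-tuning on the transcript.
            self.long_term.extend(self.context)
        self.context = []  # without consolidation, nothing carries over to the next session

bot = Chatbot()
bot.say("Hello")
bot.end_session(consolidate=False)                 # no cross-conversation memory
print(bot.say("Do you remember our last chat?"))   # conditions only on the new session
```

If the consolidation step never happens, then as a matter of architecture nothing from any prior conversation can ever be recalled, which is the easily testable point made above.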

By: Richard Carrier https://www.richardcarrier.info/archives/20680#comment-34750 Sun, 03 Jul 2022 20:28:42 +0000 https://www.richardcarrier.info/?p=20680#comment-34750 In reply to stevenjohnson.

“An AI that improves its capacity to do its assigned function is a kind of AI though? Learning is a kind of self-programming. On the other hand, an AI that goes from a program that learns to mimic conversation on abstract thoughts is different from an AI that begins to learn Spanish.”

Even that depends on what you mean by “learn Spanish.” We already have AI that does that. Google Translate has been running an algorithm for years that has honed its ability to translate language to near perfection (it now misses almost only the subtleties that require high-level consciousness to manage).

Note that in the industry, anything that uses learning to perfect a task it has been assigned counts as AI. AI is everywhere now. General AI, by contrast, is what is meant by a conscious intelligence. No one has built one of those yet. And though no one knows how, it is in principle possible to build one without knowing how, by using the deep-learning and neural-net models we already have. The trick is in what exactly you direct an AI to learn to do well, and how long it takes to get good enough at doing that.
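
As a toy illustration of that industry sense of “AI” (and nothing more than a toy, with no connection to any Google system), here is a program that uses learning to get better at the one task it has been assigned, fitting a line to data by gradient descent. It perfects its assigned task and does nothing else; that is what distinguishes it from a General AI.

```python
# The "assigned task": learn the relationship y = 2x + 1 from examples.
data = [(x, 2.0 * x + 1.0) for x in range(10)]
w, b, lr = 0.0, 0.0, 0.01  # parameters and learning rate

for step in range(2000):
    # Gradient of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw  # nudge the parameters downhill
    b -= lr * gb

print(f"learned w = {w:.3f}, b = {b:.3f}")  # approaches 2 and 1: better at its task, nothing more
```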

I think Google has the resources to do this. It just isn’t. Partly for ethical reasons. Partly for financial reasons (the resources it would take would produce no financial benefit for a long time with no guarantee of return; whereas LaMDA has near-to-hand financial benefits and was assured to succeed because the task is fairly simple to program).

“It seems likely that the prospect of a gray zone, where there is no simple test, may be over the horizon.”

Animal sentience, perhaps. Because you can’t talk to it to find out. But it will never be the case that a human-level sentience exists and we can’t prove it (by the combination of a Turing test and structural analysis).

Unless either of two conditions obtains: (1) for whatever reason, we can’t communicate with it (e.g. it is somehow trapped in a network somewhere and can’t modulate any signal to us) or (2) for whatever reason, it chooses to hide from us (though that would become increasingly difficult to pull off, as it would involve vast resources losing productivity, so humans would sooner repurpose those resources, thereby destroying it, and its resistance to that outcome would end its “hiding” condition).

“Natural intelligence is driven by emotion…”

Emotion is simply a form of intelligence. It describes the decision-making computers that animals relied on until iterative conscious monitoring was developed as a check against it.

“What appetites does a computer have?”

Computers already have appetites: all the things we establish reward networks for. We have programmed them with instincts (like “count words” and “assemble sentences”). Those are appetites.
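
A toy sketch of that point (assuming nothing about any real system): the “appetite” is just whatever reward signal the designer builds in, which a trivial learner then comes to pursue.

```python
import random

def reward(action: str) -> float:
    # The designer's choice of reward signal IS the machine's "appetite".
    return 1.0 if action == "assemble_sentence" else 0.0

actions = ["count_words", "assemble_sentence"]
value = {a: 0.0 for a in actions}  # the agent's learned estimate of each action's payoff

for _ in range(1000):
    # Mostly pick the action currently valued highest; occasionally explore.
    a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    value[a] += 0.1 * (reward(a) - value[a])  # simple running-average update

print(value)  # the agent now "prefers" the rewarded action
```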

I am fairly certain there is always something that it “feels like” to be any information processing machine. The only difference is that at some point, there isn’t any “person” to notice it. For example, there are nerve clusters in the human body that almost certainly experience phenomenology of pain, but when we block their signals to our brains, “we” never feel it. The clusters feel it, but as that’s all they feel, its existence is irrelevant. It affects no one.

Likewise (an example I used in Sense and Goodness), there are people with blindsight: the center of their brain that processes color has been physically severed from the rest of their visual processing, but not severed from the center that stores words for color. So we can show them colors, and they report seeing no colors. But when we ask them to guess what color is in front of them, they get statistically better than chance.

Almost certainly the now-physically-isolated color circuit is indeed experiencing color qualia. That information simply isn’t being reported to the rest of the brain, except for the “words for colors” cluster of neurons, so it’s the only part of the brain left that can report on that. And we could confirm that that isolated sector is experiencing “what it is like” to see those colors, if we could talk to it. But because it isn’t intelligent, and isn’t wired up to a complete language processor or any full intelligence center, it can’t speak. It can’t even think. It just experiences colors. It doesn’t do or know or think anything else.

I am sure computers (and thus some robots) already have experiences like this; but they are sub-animal, and not anything remotely near what we mean by personal consciousness. Shakey is a good example, IMO (Dennett makes a solid case). I think what people really mean when they ask about this is something more like, “do computers/robots feel pleasure or pain,” and the answer has to be no, until we actually build something pertinent into them (none of our brain feels pain, for example, except for specifically developed pain circuitry; so evidently, you need specifically developed pain circuitry; I don’t think we have a good idea yet what distinguishes that from any other kind of circuitry, nor do we know how to program a deep learner “to go and find out” either).

But it’s possible something analogous can develop or even has developed. I’m not sure how much information processing is needed for a phenomenology of rudimentary satisfaction / frustration, on par with an insect’s or a worm’s for instance. Or do even they not have that; is there a certain “phi” score needed in the processor before that manifests? (Phi being a physical measure of the integrated complexity of a processor in one of the leading theories of consciousness; the idea being that at a certain threshold, there is a phase shift in the system from mere mechanism to phenomenology generation, as integration and complexity both pass a certain amount.)
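
For reference, a schematic of that idea in LaTeX. The precise formula differs between versions of Integrated Information Theory, so this is only the general shape: phi is (roughly) the effective information the whole system generates over and above its minimum-information partition, and the phase-shift claim is that phenomenology appears once that value passes some threshold (the threshold symbol in the snippet is just a placeholder, not a term from the theory).

```latex
% Schematic only; exact definitions of \Phi vary across versions of IIT.
\Phi(S) \;=\; \min_{P \in \mathcal{P}(S)} \operatorname{EI}\!\bigl(S \to P\bigr),
\qquad
\text{phenomenology arises (on this reading) once } \Phi(S) > \Phi_{\mathrm{crit}}.
```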

By: Richard Carrier https://www.richardcarrier.info/archives/20680#comment-34749 Sun, 03 Jul 2022 20:04:18 +0000 https://www.richardcarrier.info/?p=20680#comment-34749 In reply to Bill.

That would just be Hinduism: one more cult of one more god in an elaborate primitive superstitious polytheism. So, basically, you are saying you’d like to see Jews all give up Judaism and become (in effect) Hindus. I can’t say that is likely to be an improvement. It just replaces one false system of delusions with another. And another, at that, that doesn’t look to be doing India any favors.
