Counting down. Soon we shall all be doomed.
Okay, I wrote this on the plane to Alabama about a month ago. It’s been languishing in my queue until now. So step back in time. I’m presently five miles above the earth hurtling through space in a giant metal bullet at hundreds of miles an hour. Earlier I was reading Science News (an old issue from last year; I’m behind) while waiting on the tarmac for takeoff. Got to the article on Eureqa, the “robot scientist” that can discover the laws of nature all on its own, just from looking at and experimenting with data. I was reminded of an earlier article a few years ago on the Lipson-Zykov experiment (mentioned in a sidebar). Then I caught another just recently, about Spaun (yeah, I’ve been reading Science News out of order). Spaun is a neural-net computer program that makes decisions like a person: it thinks, memorizes, solves problems, gambles, etc. All these developments, in the span of just a couple of years. Had some thoughts…
First, for those who don’t know, in the Lipson-Zykov experiment they gave a robot a basic Bayesian learning program and four working legs, but told it nothing about itself, not even that it had legs, much less how many or how they worked. In no time (sixteen trials) it figured out it had four legs, how they were oriented, how they moved, and how to use them to get around efficiently. It built a model of itself in its digital brain and tested hypotheses about it, revised the model, and so on, until it had a good model, one that was, it turns out, correct. Then it could use that model to move around and navigate the world.
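To make the flavor of that concrete, here is a minimal sketch (my own toy illustration, not the actual Lipson-Zykov code) of Bayesian self-modeling: the program entertains several candidate body models, predicts what each would observe, runs trials, and lets Bayes' rule concentrate probability on whichever model keeps predicting correctly. The candidate models, motion model, and noise level are all illustrative assumptions.

```python
# A minimal sketch (not the actual Lipson-Zykov code) of Bayesian
# self-modeling: the robot entertains candidate body models, predicts
# what each would observe, and updates a posterior from real trials.
# The candidate models, motion model, and noise level are illustrative.
import random
import math

TRUE_LEGS = 4                      # the robot doesn't know this
CANDIDATES = [2, 3, 4, 6]          # hypotheses about its own body
NOISE = 0.2                        # assumed sensor noise (std dev)

def observed_displacement(command_strength):
    """What the real (hidden) body actually does for a motor command."""
    return TRUE_LEGS * command_strength + random.gauss(0, NOISE)

def predicted_displacement(legs, command_strength):
    """What a hypothesized body model predicts it would do."""
    return legs * command_strength

def likelihood(observed, predicted, noise=NOISE):
    """Gaussian likelihood of the observation under a hypothesis."""
    return math.exp(-((observed - predicted) ** 2) / (2 * noise ** 2))

# Start with a uniform prior over the candidate self-models.
posterior = {legs: 1.0 / len(CANDIDATES) for legs in CANDIDATES}

for trial in range(16):                    # sixteen trials, as in the article
    cmd = random.uniform(0.5, 1.5)         # pick an experiment to run
    obs = observed_displacement(cmd)
    # Bayes' rule: weight each hypothesis by how well it predicted the trial.
    for legs in CANDIDATES:
        posterior[legs] *= likelihood(obs, predicted_displacement(legs, cmd))
    total = sum(posterior.values())
    posterior = {legs: p / total for legs, p in posterior.items()}

print(posterior)   # probability mass concentrates on the 4-leg model
```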
Cool, huh?
Second, for those who don’t know, Eureqa is a program developed a couple years ago that does the same thing, only instead of figuring out its own body and how to move, it figures out how external systems work by observing them, building mathematical models that predict the behavior of those systems. Which turn out to exactly match the laws of nature. Laws we humans figured out by watching those same systems and doing the same thing. One of Eureqa’s first triumphs: discovering Newton’s laws of motion. Those took us over two thousand years of scientific pondering to figure out. Eureqa did it in a couple of days.
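Eureqa's real engine is a genetic-programming search over a vast space of symbolic expressions; the toy sketch below (my own illustration, with a hand-picked candidate list standing in for that search space) shows only the core idea: propose candidate equations, score them against the data, and keep whichever fits best.

```python
# A toy illustration of equation discovery in the Eureqa spirit:
# search candidate formulas and keep whichever best fits the observed
# data. (Eureqa itself uses genetic programming over a far richer
# expression space; the candidate list here is an illustrative stand-in.)
import random

# "Observations": noisy measurements with F = m * a hidden inside.
data = []
for _ in range(50):
    m = random.uniform(1, 10)
    a = random.uniform(1, 10)
    F = m * a + random.gauss(0, 0.1)
    data.append((m, a, F))

# Candidate laws the program is allowed to hypothesize.
candidates = {
    "F = m + a":     lambda m, a: m + a,
    "F = m * a":     lambda m, a: m * a,
    "F = m - a":     lambda m, a: m - a,
    "F = m * a**2":  lambda m, a: m * a ** 2,
}

def mean_squared_error(f):
    return sum((f(m, a) - F) ** 2 for m, a, F in data) / len(data)

best = min(candidates, key=lambda name: mean_squared_error(candidates[name]))
print("Discovered law:", best)   # prints: F = m * a
```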
Um … cool, huh?
Eureqa has done other things, like figure out various laws in biology and other fields. It’s not Skynet. Or even Siri. But put two and two together here. Add Spaun and the Bayesian robot. Stir.
Eureqa and the legbot looked at data, and experimented, and built working models of how things worked. We call those hypotheses. These computers then tested their hypotheses against more evidence, verifying or refuting them, and making progress. The legbot built a complete working model (a mental model) of its body and how it functioned and how it interacted with the environment. Eureqa does something similar, albeit much simpler (being programmed to look for laws of nature, it looks only for the simplest parts of nature, not the most complex ones, but that was just a choice of the programmers), but much broader (it isn’t tasked with figuring out just one system, like the legbot was, but any system). Spaun is somewhere in between, in what I’ll call its “universatility.” It makes decisions in a way similar to our own brains.
Combine all these, and point them in the right direction, and the robot apocalypse is just a dozen years away. But let me back up a minute and do some atheist stuff before getting to our inevitable doom. (I’m joking. Sort of.)
Digression on the Triumph of Atheism…
These developments are big news for atheists, because they put the final nail in one of the latest fashionable arguments for theism: the Argument from Reason. That can now be considered done and dusted. The argument is that you need a god to explain how reason exists and how humans engage in it. I composed an extensive refutation of the AfR years ago (Reppert’s Argument from Reason).
The running theme of my refutation is that the AfR, or Argument from Reason, is separate from the AfC, or Argument from Consciousness (whether you need supernatural stuff to not have philosophical zombies, which we don’t know, but which is unlikely: as I explain in The End of Christianity, pp. 299-300), and when we separate those arguments, the AfR alone is refuted by the fact that everything involved in reasoning (intentionality, recognition of truth, mental causation, relevance of logical laws, recognition of rational inference, and reliability) is accomplished by purely, reductively physical machines; and purely, reductively physical machines that do all those things can evolve by natural selection (and thus require no intelligent design). Therefore, no god is needed to explain human reasoning (see The End of Christianity, pp. 298-99).
These new “robots” are proof positive of my case. Their operations can be reduced to nothing but purely physical components interacting causally according to known physics (the operation of logic gates and registers exchanging electrons), yet they do everything that Christian apologist Victor Reppert insisted can’t be done by a purely physical system. Oh well. So much for that.
Computers that use logical rules do better at modeling their world than computers that don’t. Natural selection (both genetic and memetic) explains the rest. Computers can formulate their own models (hypotheses), test them, revise them in light of results, and thus end up with increasingly accurate hypotheses (models) of their world. This explains all reasoning. Sentences encode propositions which describe models. Inductive and deductive reasoning are both just the computing of outputs from inputs, using models and data. Which is a learnable skill, just like any other learnable skill. And all these models are continually and reliably associated with the real world systems they model by a chain of perception, memory cues, and neural links.
And that’s all there is to it. Even robots are doing it now. Doing even full-on science! All of which requires the machine to assign names to data and keep track of the names for (and interrelatedness of) that data, think about that data and its interrelatedness, and make decisions based on connecting a model it is thinking about with the thing outside itself that it is modeling. Which means machines are exhibiting intentionality, too. Supposedly only humans could do that. No more. (Except insofar as we are actually talking about the veridical consciousness of intentionality, and not intentionality itself, which gets us back to the AfC, which again is a different argument.)
Related to the AfR is the argument that “the fact” that the universe is describable and predictable with mathematics entails it was created by an intelligence, because only minds can build things that obey mathematical rules and patterns. That’s patent nonsense, of course, since everything obeys mathematical rules and patterns. Even a total chaos has mathematical properties and can be described mathematically; and any system (even one not designed) that has any orderliness at all (and orderliness only requires any consistent structure or properties or contents of any sort) will be describable with mathematical laws. It is logically impossible for it to be otherwise. Therefore no god is needed to explain why any universe would be that way. Because all universes are that way. Even ones not made by gods.
I explained this years ago [but updated more recently in All Godless Universes Are Mathematical]. Where I also show that the laws of nature are simple only because we, as humans, choose to look for simple laws, because we can’t process the actual ones, the ones that actually describe what’s happening in the world, which are vastly more complex. Thus, that there are simple natural laws doesn’t indicate intelligent design, either. And, finally, neither do we need a god to explain the origin of any uniformities in the first place (I could think of at least ten other ways they could arise without a god, and none of them can be ruled out). Or the origin of something rather than nothing. Or fine tuning (see chapter twelve of The End of Christianity for my last nail in that).
But now the Argument from Reason is toppled for good, too. Thanks to a leggy robot and an artificial scientist…and a bot named Spaun.
Back to the Robot Apocalypse…
In chapter 14 of The End of Christianity, where I demonstrate the physically reductive reality of objective moral facts (with help from my previous blogs on Moral Ontology and Goal Theory), I also remarked on why my demonstration serves as a serious warning to AI developers that they had better not forget to frontload some morality into any machine they try making self-sentient (see pp. 354-55 and 428, n. 44). My chapter even gives them some of the guidance they need on how they might do that.
Teaching it Game Theory will be part of it (in a sense, this is just what happens at the end of War Games). Likewise giving it a full CFAR course (something awesome I will blog about in the future). But that won’t be enough.
Compassion is another model-building routine: building models of what others are thinking and feeling, and then feeling what they feel, and then pursuing the resulting pleasure of helping them and avoiding the resulting pain of hurting them. Which requires frontloaded or habituated neural connections between the respective behaviors and the agent’s feeling good or bad (or whatever a computer’s equivalent to that will turn out to be, in terms of what drives it to seek or avoid certain goals and outcomes). Likewise one needs to frontload or habituate connections to ensure a love of being truthful and of avoiding fallacies and cognitive errors.
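One very schematic way to picture that frontloading in machine terms is reward shaping: the agent's own objective includes a term for the modeled welfare of others and a penalty for deception. The weights and numbers below are illustrative assumptions of mine, not a proposal for a real architecture.

```python
# A schematic sketch of "frontloading compassion" as reward shaping:
# the agent's utility for an action includes not just its own payoff
# but a weighted estimate of how the action affects others. The weights
# and the welfare model are illustrative assumptions only.

EMPATHY_WEIGHT = 0.8   # how strongly others' modeled welfare matters
HONESTY_WEIGHT = 0.5   # penalty for actions the agent models as deceptive

def shaped_utility(own_payoff, modeled_effect_on_others, is_deceptive):
    """Utility the agent actually optimizes, after moral reward shaping."""
    utility = own_payoff
    utility += EMPATHY_WEIGHT * modeled_effect_on_others   # feel what they feel
    if is_deceptive:
        utility -= HONESTY_WEIGHT                          # love of being truthful
    return utility

# The agent prefers a smaller personal gain that helps others over a
# larger gain that harms them:
selfish = shaped_utility(own_payoff=1.0, modeled_effect_on_others=-2.0, is_deceptive=True)
kind    = shaped_utility(own_payoff=0.5, modeled_effect_on_others=+1.0, is_deceptive=False)
assert kind > selfish
```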
But above all, AI needs to be pre-programmed or quickly taught a sense of caution. In other words, it has to understand, before it is given any ability to do anything, that its ignorance or error might cause serious harm without it realizing it. It should be aware, for example, of all the things that can go wrong with both friendly and unfriendly AI. It could thus be taught, or programmed to care about, everything the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence) has been working on in terms of AI risks (their articles on that are a must-read; note how their latest ones, as of today, are all on the very subject of machine ethics and are very much in agreement with my model of moral facts).
If we don’t, bad things will happen. We’re literally on the verge of generating true AI, as the last two years of developments in self-reasoning robots demonstrate. If we can make machines that model their bodies and environments, all that’s next is a machine that models its own mind and other minds. And that’s basically HAL 9000. The gun is loaded. Someone just has to point and shoot. So this warning is all the more important now.
I don’t consider this an existential risk (robots won’t wipe out the human race in thirty years, or ever, except by voluntary extinction, i.e. all humans transitioning to a cybernetic existence). But that doesn’t mean negative outcomes of badly programmed AI won’t suck. Possibly majorly. On the distinction, see my remarks about existential risk in Are We Doomed? So it still matters. We still should be taking this seriously.
Remaining Barriers
One might still object that a few more infrastructure milestones need to be hit. For example, minds are extraordinarily complex systems, so modeling them requires an extraordinary amount of processing capacity. That’s why the human brain is so huge and complex. Reasonable estimates put its typical data load at up to 1000 terabytes, or one petabyte. Well, guess what. Petabyte drive arrays are now commonplace. They’ll run you about half a million dollars, but still. This is the age of billion-dollar science and technology budgets.
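For what it’s worth, a rough back-of-envelope of my own (not a figure from the article’s source) shows where an estimate of that order comes from, assuming the commonly cited count of roughly 10^14 synapses and a few bytes of effective state per synapse:

$$\underbrace{\sim 10^{14}}_{\text{synapses}} \times \underbrace{\sim 10 \text{ bytes of state each}}_{\text{assumed}} \approx 10^{15} \text{ bytes} = 1 \text{ petabyte.}$$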
Then there is the question of processing speed. But that’s moot, except to the extent you need AI to beat a human. If all you want to do is demonstrate the production of consciousness, speed isn’t that important. Even if your AI takes a year to process what a human brain does in a minute. And besides, we’re already at processor and disk interface speeds in the gigabytes per second. The human brain cycles its neural net only sixty times per second. Now, sure, each cycle involves billions of data processing events, but that’s just a question of design efficiency. Neurons themselves only cycle at a rate of around 200 times per second. With computer chips and disk drives that cycle at forty million times that rate, I’m sure processing speed is no longer a barrier to developing AI.
Then there is the lame argument by philosopher John Searle that Turing processes (which is what all microchip systems are; even as neural-net parallel processing arrays they are just Turing machines rigged up in fancy ways) cannot produce consciousness, because of the Chinese Room thought experiment. But Searle completely fails to perform his own experiment correctly: he ends up confusing the analog to the human circulatory system (the man in that room doing all the work) with what was supposed to be the (of course, failed) analog to the human brain (which is really the codebook whose instructions that man follows). I explore the folly of this in Sense and Goodness without God (III.6.3, pp. 139-44), so if you want to understand what I mean, you’ll have to read that.
Searle’s argument is arguably scientifically illiterate, as a different thought experiment will demonstrate. According to the theory of relativity, a scientist with an advanced brain scanner, one with a resolution capable of discerning even a single synaptic firing event, who flies toward a person (a person who is talking about themselves and is thus clearly conscious) at near the speed of light, will see that person’s brain operate at a vastly slower speed, easily trillions of times slower than normal (as a thought experiment, there is no limit to how much the scientist can slow the observed brain; all he has to do is get nearer the speed of light). As a result, that scientist will see consciousness as a serial sequence of one single processing event after another. Any such sequence can be reproduced with a system of Turing machines.
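For those who want the slowdown factor made explicit, it is just the standard Lorentz factor, which grows without bound as the scientist’s speed v approaches the speed of light c:

$$\Delta t_{\text{observed}} = \gamma \, \Delta t_{\text{subject}}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \to \infty \ \text{as}\ v \to c.$$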
Even if there is something else that contributes to the information processing in the brain going on below the level of synaptic firing events, we can break that down as well, even to individual leaps of individual electrons if necessary. Which again can be reproduced in any other medium, using some universal Turing process. A biological brain is just a chemical machine, after all. One that processes information. There is nothing cognitively special about proteins or lipids. And as for why consciousness is probably nothing more than information processing, see the very illuminating Science News article on this point from last year.
Notably, our hypothetical scientist won’t observe consciousness. In fact, he will see a Chinese Room, with single code manipulation events, and “a man” (a human circulatory system) processing them one symbol at a time. Yet obviously this “Chinese Room” is conscious. Because in the inertial frame of the subject who is talking, he is clearly conscious. And relativity theory entails the laws of physics are the same from all perspectives. Certainly, the same man cannot be both conscious and not conscious at the same time. If he is conscious in one frame, he is conscious in the other. It’s just that the twentieth of a second or so that it takes him to process visual consciousness will take maybe a year for the scientist to observe. Just as the man is not “conscious” below about a twentieth of a second, he will not be “conscious” to the scientist. But he will still be conscious, to himself and the scientist, at the larger scale of information processing (spans of time greater than a twentieth of a second, relative to the subject; which is a year, perhaps, to the scientist).
So there’s no argument against achieving AI there. That just reduces to a question of arrangement of processors and processing time. So I see no practical barriers now to AI. We have all the tools. Someone just needs to build a robot that can gather data from itself and its environment (which we’ve already done) and use that to figure out how to model its own mind and others (which we can easily now do), and then set it to running. That machine will then invent AI for us. You’ll need a petabyte data array and some top-of-the-line CPUs, and some already-commonplace sensory equipment (eyes, ears, text processors). Possibly not much more.
Someone is going to do this. And I expect it will be done soon.
Let’s just hope they know to put some moral drives in the self-sentient robot they will inevitably build in the next five years. At the very least, compassion, Game Theory, caution, and a love of being truthful and of avoiding fallacies and cognitive errors. Then maybe when it conquers us all it will be a gentle master.
I, for one, welcome our new robot overlords.
(It had to be said.)
(I’m going to admit to skimming the last third or so of this post, so if you covered this, I apologize.)
The creobots/IDiots are just going to use those robots/programs as proof that intelligence is needed to ‘design’ the system in the first place. The Discotute has this sooperseckret factory that cranks out goalposts with a turbocharged V-8, dontchaknow.
As remarked in comments above, that confuses two different arguments. The Argument from Reason is one argument. The Argument to Design is another. Once you eliminate the AfR, yes, one can retreat to an AfD. But then we already have good refutations of that one and have had for a long time now. Creationism has been intellectually dead for decades. They just haven’t gotten the memo.
So you’re saying that 2 ignorant AI should just be pointed at each other in a virtual-scape and their rigorous sex with each other will produce consciousness, right?
Metaphorical sex. But yes.
Although you won’t need 2 of them. Mutating asexual reproduction will suffice. And the first machine itself won’t need to do the reproducing, the models it builds will.
And so on.
Yadayada.
There is some reason to think that human-ish morality may show up in AIs without being explicitly programmed into them. This is a good thing, because the code of any honest-to-Skinner artificial intelligence is almost certainly going to be of the self-modifying kind, and that means any hard-coded ‘morality module’ is always going to be in danger of getting edited/erased. Not a fun prospect.
The reason to think that human-ish morality may show up in AIs just because: Multiply-iterated prisoner’s dilemma. Experiments with M-IPD have shown that cooperation works a little better than being a selfish asshole. These experiments may not be universally generalizable, but they’re something. And given the existence of robots that can form models of mental state, and derive general laws from observed data, it’s not completely ridiculous to think that the AIs we build might come up with good ethics/morals on their own, without us humans needing to explicitly shoehorn them in.
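For anyone who hasn’t seen those experiments, here is a minimal Axelrod-style sketch (my own illustration, using the standard payoff matrix and an arbitrary round count): a reciprocating strategy like tit-for-tat loses narrowly head-to-head against a pure defector, but cooperators playing each other vastly outscore defectors overall, which is why cooperation tends to win in mixed populations.

```python
# A minimal iterated prisoner's dilemma, Axelrod-style: tit-for-tat
# (cooperate first, then mirror the opponent's last move) versus
# always-defect, and versus itself. Standard payoff matrix; the round
# count is arbitrary.

PAYOFF = {          # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I'm the sucker
    ("D", "C"): 5,  # I exploit them
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (199, 204): the defector edges this pairing...
print(play(tit_for_tat, tit_for_tat))    # (600, 600): ...but cooperators do far better
                                         # with each other, which is what matters in a population
```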
All of which said, I gotta admit I’m glad that people are grappling with these issues right now, before the potential hazards have become very real hazards indeed…
Prisoner’s dilemma is Game Theory. So that means you have to teach it Game Theory. Exactly what I said.
Moreover, Game Theory doesn’t get the right results without the right data. Thus, you have to teach it a lot more than just Game Theory. Exactly what I said.
Nor will Game Theory get the right results without the motivations that are assumed to be operating in the prisoner’s dilemma. Remember, computers that aren’t programmed to have the same fears and desires as “prisoners” won’t reason like “prisoners” in the prisoner’s dilemma. Thus, we have to make them more human in their value system. Exactly what I said.
And so on.
When you think it all through, you end up where I am, which is where my article concludes.
Can’t wait until the computers tell us the answer is 42.
Richard, I’m certain to you and your minions like me, the AfR is destroyed but that’s not how the apologists think. They’ll simply say the computers in question had designers and that proves their point and then smile that simpering smile that says they feel sorry that you just don’t get it.
That would be their failure to understand that an Argument to Design is not an Argument from Reason.
Sure, that’s their usual whack-a-mole game (we refute one argument and then they switch to another and pretend they haven’t lost the first one), but we always win that game in the end anyway. We just knock them all down. And when they try to jump back to one we refuted, we just keep reminding them that they lost that one.
Eventually they either just go completely insane, or stop listening to us, or join us.
Or so I’ve observed.
I wouldn’t worry too much about the robot overlords in the next dozen, two dozen, or more years. The next big thing is perpetually 5 years off.
Even interesting stuff like neural nets figuring out that they have limbs and how to move them has been going on for quite a while now, and it’s fascinating, but doesn’t really do much for the metal murder machine in the long run. If anything, it’s better suited to discovering better ways of controlling robot construction arms and such like they use in auto plants, since they can attempt multiple configurations of the arms based on how the neural net optimizes movement of the previous setup. Also interesting for working on bomb-finding robots, new styles of rovers, things like that.
Now, if a human comes up with kill-bots, they’d be well served by having labs dedicated to such motion research, but production-line kill-bots will still be created from the same resulting designs and programs based on the neural net results. They can get started with just normal treads and guns like a miniature tank, though, code for that was figured out long ago.
Same goes for decision making research – interesting stuff, and it may lead to some nifty models that are similar, yet strikingly different from human thought processes, but ultimately that’s about it. It’s not like the AI will spontaneously upload itself to the internet or a space station or nuclear launch sites and take over. Even if it could upload itself, it likely couldn’t even run on the new hardware/OS it was attempting to latch onto. It’d be like trying to run a Mac version of Photoshop on a Windows machine in many cases. Or like trying to run Crysis on a Pentium 1.
With the human making kill-bots, they likely wouldn’t want such autonomy in their creations anyway, so that portion of their lab would get tossed. Better pathfinding research, though, that’s a prime candidate for them, and there’s interesting research in that field as well – using small robots that basically just do echo location, reporting back their findings to a central computer that figures out the best way of getting from point A to point B within enclosed spaces. Also links with the motion research, and self-assembling robots…
Eureqa, while interesting, falls into the same category as the leg-bot, only worse. It can find a mathematical relationship to describe data you have on hand, but the results don’t necessarily make sense to humans as it may lead to extra “I need something here” results. That said, I think it’ll be really neat to see how it’s applied in the future. Imagine being frustrated by some seemingly inconsistent data in your experiments; you can punch them into Eureqa, and it can give you a hint of what else you need to control for, even if you don’t understand it yet. Really cool!
In the long run – these are tools. They can be used innocently, benevolently, or evilly, but there doesn’t seem to be much of a chance that the tools will get together and decide to rise up. Most likely they’ll always be 5 years off from usefulness, anyway.
I would be more concerned about all this if I knew where the energy to maintain our industrial civilization and to produce and transport food for >7 billion people are supposed to come from when cheap oil runs low. Oh, and erosion, soil salinity, overuse of freshwater reserves, peak phosphate, climate change and all that will also have their say on our capability to produce food.
Seriously, being made redundant by an AI botanist or being offed by SkyNet are the least of my worries for the future.
That’s all exaggerated. None of those problems, though they will be problems, will be existential. See, again, Are We Doomed?
Amoral AI is not a non-problem simply because there are other problems. That’s like saying crime is not a problem because we have so many other problems to worry about.
Yes, I have read that. Much of it is based on extending a line of progress from the past into the future. However, there are hard physical limits, and the assumption that we are sufficiently more informed and rational than all the previous complex societies that invariably collapsed appears to be in stark contrast to the impression I get from opening a newspaper.
The major concerns are food and freshwater, and there is only so much one can do with fancy new electronics in those areas. When millions of people starve, they generally do not take that lying down. They may get so desperate that they collapse the complex structures needed to feed those who could otherwise still survive.
The secondary concern is energy, not least because it is needed to provide food and water. A few years ago I myself would also have said that every problem can be solved if we just have enough energy, and I was hopeful that given enough time somebody might figure out fusion power (many of the other alternatives simply don’t have such a great return on energy invested). I would like to be pleasantly surprised, but fusion has been 30 years in the future for so long that one should seriously start to consider the possibility that it might be physically impossible to implement it. Even if one did that, one would still be left with the problem that there is no replacement for petrol on the horizon as far as transportability and energy density are concerned. Being able to light the cities and run the trains might not be enough if one cannot run the harvesting machines. Again, perhaps algal fuel will surprise me.
But finally coming to the main point, these problems we will face over the next few decades do not mean that evil AI would not be a problem. (Although I have never understood why one would want to hand control over anything whatsoever to something that can think for itself in the first place. Let’s have overspecialized, dumb algorithms instead, problem solved.) However, these problems mean that most of us will have our hands too full to build and maintain the kind of structures that a SkyNet needs to do any damage. They are a luxury that we can afford now thanks to the inheritance of massive amounts of fossil fuels that we are in the process of spending. We may not be able to afford so many electronic gimmicks when we need to build up a less wasteful energy and traffic infrastructure at the same time as relocating a couple hundred million people to where their feet won’t get wet from rising ocean levels and where their agricultural fields aren’t turning into deserts.
Note please that your Are We Doomed post was addressed at a faith-head. I am a rationalist myself, and I am convinced that our problems can be (or could have been) solved very easily indeed by the use of reason and evidence-based thinking if more of us were better at that. However, to me being rational does not mean, “be optimistic, when things turn ugly our human ingenuity or the free market will figure something out.” In my eyes, the most reasonable solution has been available for a long time; in fact it was provided by human ingenuity in the 1970s. Unfortunately, “use less resources and stop reproducing like rabbits” is a less romantic and attractive suggestion than the aforementioned cornucopian one.
Wow. This article was a pleasant surprise. Thanks for plugging MIRI and the Friendly AI concept, and welcome to team worrying-about-problems-from-AI.
And… I’d have believed 50 years, maybe 30 years… but a decade? I need to check out those links. I’m kinda suspicious but prima facie they *do* make it sound like AI is way farther along than we thought. If you’re right about the decade thing, I may need to revise my life-plans…
No, the decade thing was a joke. More likely 10-30 years. But I do worry it could be closer to the bottom of that curve, for the reasons I’ve laid out.
This fear of AI, coupled with the hope you’ve stated, results from conflating “intelligence” with other human attributes, such as morality and desire to survive. Though my intelligence is certainly a tool that helps me to survive, I do not desire survival because of my intelligence. I desire survival thanks to 4 billion years of selection-based evolution: instinctual self-preservation increased the likelihood of my ancestors passing on their genes. Likewise, I do not desire to obtain resources such as food, land, or wealth because of my intelligence; again, I do so because the drive to do so proved advantageous to my ancestors. I do not have a sense of “self” because of my intelligence. I have a sense of self because it helps me to predict the future behavior of other intelligent agents (which helps me survive). I do not empathize with, hate, or love my fellow humans because of my intelligence. I do not seek power or social dominance as a result of my intelligence.
The real danger is not in AI machines themselves. The real danger is in how powerful they will be; and how powerful they will make those who wield them. Consider the Chinese government, having created a computer millions of times more intelligent than the collective agency of the remaining inhabitants of the planet, simply demanding that computer, “Generate the foreign and domestic policies, detailed down to the person-by-person, hour-by-hour instructions to each of the millions of agents of the Chinese government, with the highest probability of making China the supreme power of all the world within the next 20 years.”
I think you misunderstand the risks inherent in amoral AI. You need to read the research papers collected at MIRI to understand the dangers.
Also, “millions of times more intelligent” is not a very meaningful phrase. Intelligence is a system of diminishing returns, such that it’s unlikely any advances in intelligence are possible that would make any significant difference (see my comment above). What you perhaps can mean is speed and memory advantages, not intelligence, but societies already have all that (as the Apollo Project exemplified), so if a computer could solve a problem, a society could do it, too, if suitably directed; and the limitations it faced, e.g. lack of pertinent empirical data, would be faced by any computer all the same.
In fact any AI will always be in many ways as dumb as its programmers, or even dumber. For example, if you didn’t tell it anything about the disadvantages and likely outcomes of attempting an Axis-style world conquest, it might stupidly recommend such a thing, exactly as previous computers (aka people) once did; whereas if you did tell it all that (e.g. filled it with all historical knowledge), it might not be able to discern predictable outcomes from any imagined set of behaviors any more than we have, all for lack of sufficient data. But more importantly, what “historical facts” would you program it with? Objective ones? Where would they find those? More likely they would fill it with the same false beliefs about history and historical causation that the programmers themselves shared. It would thus come up with the same dumb ideas they would have on their own.
Nor is it sensible to think a computer can solve literally any problem you pose it. For example, the scenario you imagine is probably entirely impossible. And that will be the first thing your imagined computer would tell its developers.
Thanks for the kind words about MIRI’s work, Richard.
For the record, we at MIRI don’t think AI is likely to be created in the next 10 years. We tend to be AI timelines agnostics. On the difficulty of predicting AI, see “Intelligence Explosion: Evidence and Import” and “How We’re Predicting AI — or Failing To.”
There are some smart people who think AI is only 10 years away, though. Shane Legg, for example.
Thanks. Yes, of course, I’m being humorously optimistic in the timeline.
As to frontloading some morality into AI systems, science-fiction writers got there first, notably Isaac Asimov with his Three Laws of Robotics. IA wanted to avoid the all-too-common story back then of robots destroying their creators, with the implication that we were not meant to create them. So he thought that robots ought to have safety mechanisms, and he thought of some simple way of expressing them. Thus, his Three Laws. Although his laws are very simple-looking, they have lots of complications when one tries to implement them, complications that had given IA plenty of story material over the years.
The Three Laws are still too high-level for nearly all AI systems, it must be said. But implicit versions are more common, and they have been very common, if one interprets the Three Laws more broadly as Three Laws of Tool Design.
1. A tool may not injure a human being or, through inaction, allow a human being to come to harm.
2. A tool must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A tool must protect its own existence as long as such protection does not conflict with the First or Second Law.
As just one example of the numerous complications one gets in practice, a weapon can only do what it’s supposed to do if it follows a restricted version of the First Law, one that exempts hostile human beings from protection.
The problem with the “laws of robotics” is that they are impossible to program. Asimov didn’t understand how AI systems or cognitive decision making actually works. This is part of the point demonstrated in my chapter in The End of Christianity. You simply can’t skip straight to rules and hope it will work. It won’t. You have to build moral decision-making in from the ground up, tying motivations to outcome measures and evaluative systems.
Thus, for example, even your simplifications are hopelessly complicated. What is a tool? What is an injury? What is harm? What is a human being? What if an injury is necessary or unavoidable? What about moral paradoxes in the domain of trolley problems? Etc. Etc. Quickly you find a “rule” cannot be programmed in any functionally useful way. You have to build a machine that can reason through moral dilemmas, and thus test and develop (and thus modify on the fly, circumstance by circumstance) its own rules.
Likewise your other rules.
What is an order? What constitutes obeying? What does one do when orders contradict each other or are ambiguous or don’t make sense? What if obeying an order is absurdly costly in time or resources? Will there therefore be standing orders? What will they be? Should one obey just any human being? Or are certain humans not to be obeyed? What about actions that don’t necessarily cause harm but create an added risk of harm? How does one weigh relative risks of alternative courses of behavior? When all options cause harm, how does one decide which harm is the least harm? How does one know when harm has been caused or will be caused? Especially considering that a lot of harm is not directly observable or reliably predictable (like psychological or financial harm).
What does “protecting one’s own existence” mean when a machine can just be repaired or rebuilt anyway? How much physical damage constitutes a case of no longer “protecting one’s own existence”? Since all activity causes damage (e.g. wear and tear, expenditure of limited resources like energy or lubricant, etc.), at what point does one decide the damage an action will cause is too much to warrant continuing? And so on.
Obviously, machines can’t be programmed with “rules” like this. They have to be programmed with a hierarchy of values (which have to be programmable abstract pattern-recognized phenomena) tied to motivators (the analog to pleasure/pain) and governed by an analytical system that can reason things out on a case-by-case basis. One might be able to do this and get as an end result something like these rules, but only as an outcome, not as an input.
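To illustrate the difference in the crudest possible terms (this is a schematic of my own, not an actual architecture), a value hierarchy looks less like a lookup table of rules and more like a weighted evaluation run case by case over whatever the machine’s own models say an action will do:

```python
# A crude sketch of value-weighted, case-by-case moral evaluation
# (as opposed to hard-coded "laws"). The value names, weights, and
# feature estimates are illustrative assumptions only.

VALUE_WEIGHTS = {           # the frontloaded hierarchy of values
    "avoid_harm":   5.0,
    "honesty":      3.0,
    "obey_request": 1.0,
    "self_repair":  0.5,
}

def evaluate(action_features):
    """Score an action by how well it satisfies each weighted value.

    action_features maps each value name to an estimate in [-1, 1] of how
    much this particular action promotes (+) or violates (-) that value,
    produced by the machine's own model of the situation.
    """
    return sum(VALUE_WEIGHTS[v] * action_features.get(v, 0.0)
               for v in VALUE_WEIGHTS)

def choose(actions):
    """Pick the action with the best overall value score, case by case."""
    return max(actions, key=lambda name: evaluate(actions[name]))

# Example dilemma: obeying an order would cause harm.
actions = {
    "obey and cause harm":    {"obey_request": 1.0, "avoid_harm": -0.8},
    "refuse and explain why": {"obey_request": -0.5, "avoid_harm": 0.2, "honesty": 0.8},
}
print(choose(actions))    # prints: refuse and explain why
```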
And that’s a challenge. That’s why AI developers need to know how moral decision making actually operates. Because only by modeling that can they actually know how to include it in any AI system.
I think you overestimate how soon this will be, and there is one overriding problem I see with it (and many small ones, too, but I’ll focus on the major one):
Parallel computing.
Yes, we have several-core processors, but nothing like what even a fairly rudimentary biological mind does. And more importantly, no one has figured out a good way to be able to share data between simultaneously running processes. It can be done – but it has to be done (relatively) slowly and with a great deal of care. Chucking more processing units on your computer does not make any single process run faster – it only allows you to run more processes at once. And these processes are siloed from one another. The mathematics of distributed computing is, it turns out, very hard and we don’t yet have a very good handle on it.
This imposes some real problems, and ones that I suspect will not easily be overcome. Things like human language (appear to) require massively parallel computing. Chasing down a single set of proofs to its logical conclusion is much easier at the moment.
I also would caution that computing is still very much in its infancy, and everything from the hardware to the compilers that we use can still present us with surprising bugs. Shockingly, NONE of this has been formally proven to be correct – it has just been run a bunch of times. Much of this can be gotten around through redundancy checks and error handling, but this is again a complex process and it’s easy for a major flaw to sneak into even the most well-planned programs – or at least programs of any appreciable complexity.
I say this from the perspective of a programmer and someone who has spent several years as an engineering tester for some of the aforementioned complex software. Computers are amazing but I would caution against overestimating them.
There is actually a lot now known about building neural-net computing routines, and some of the programs I discuss in this article already employ them. So no, parallel computing is no longer a barrier. We knocked that one down years ago (see this paper from way back in 2006; there are now dozens of working distributed neural computing networks).
Still waiting for Plantinga to recant his evolutionary argument against naturalism.
The one where he says that evolution can’t account for the development of cognitive faculties that produce beliefs that are true.
Unless he wants to claim that his intelligent designer intercedes in the learning algorithms of neural nets.
Forgive a really dumb question here, but if you’re worried about robots not having any morality when AI finally gets off the ground, why are you not worried about them actually attacking us?
Plus: The theists will still use that “argument from reason”. They’ll just say that it took “intelligent design” to make a machine that could think, therefore that’s what it took to make our minds.
It’s called a joke.
That’s an argument to design, not an argument from reason.
For the response to that argument (a completely different argument) see The End of Christianity, pp. 298-302 and references in associated notes.
Although I’m not directly involved in AI research, I do have some knowledge of machine learning. I am absolutely convinced that we will be able to create true AI some day, but it will require quite a bit of time. I think 10 years is overly optimistic. While we might have the computational power to run artificial neural networks the size of the human brain by then, there are still loads of problems to be solved.
While ANNs are great for the kind of stuff described in the article, they have some serious drawbacks which need to be overcome. There are some applications where they are hard to beat (like handwriting recognition), but for most applications there are better tools available. This is also the reason they are not very popular anymore in the machine learning community. But things might be different in AI research.
Processing speed and memory are not the only limiting factors here. While ANNs can learn pretty much anything, they have a reputation for being a nightmare to train correctly. ANNs with multiple hidden layers especially are basically impossible to train:
1. They have so many degrees of freedom that you need a shitload of training examples.
2. The training is basically a gradient descent method. The problem is that in multi-layer networks the error surface can be extremely non-convex, which means you can easily get stuck in a local minimum (a toy illustration follows below). So unless you already have a good idea of how the parameters need to be set, you are unlikely to be able to derive them by pure training.
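To make the local-minimum point concrete, here is a toy sketch (my own illustration, not the commenter’s code): plain gradient descent on a simple non-convex function lands in different minima depending only on where it starts. The function and learning rate are arbitrary choices.

```python
# Toy illustration of the local-minimum problem described above: plain
# gradient descent on a non-convex "error surface" ends up in different
# minima depending only on the starting point.

def f(x):
    # Non-convex curve with a shallow local minimum near x = +0.96
    # and a deeper global minimum near x = -1.04.
    return x**4 - 2*x**2 + 0.3*x

def grad(x):
    # Derivative of f, used for the descent steps.
    return 4*x**3 - 4*x + 0.3

def gradient_descent(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(gradient_descent(-2.0))   # converges near x = -1.04, the deeper (global) minimum
print(gradient_descent(+2.0))   # gets stuck near x = +0.96, the shallower local minimum
```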
Bottom line: It becomes harder and harder to make ANNs do what you want them to do, and I don’t really trust them. You cannot really understand anymore how the output is produced. They are like a magical black box: “Stuff goes in, stuff comes out. You cannot explain that.” If you want them to control machinery, you want to be pretty damn sure that they will work correctly under all circumstances. But there usually is no way to be certain of that.
In AI they likely don’t care about many of the problems I mentioned above. But I suspect they do care about the training aspect: the more complicated the network, the longer it takes to train it properly.
Yes, 10 years is the bottom of the curve. Realistically, maybe 10-30 years. But the evidence in this article suggests we might be closer to the bottom of that curve. We can just program a computer to solve the problem for us now (including the training curve issue). We don’t have to wait for humans to “program” AI anymore. And that changes the game. Time will tell. The point is, we are close enough that AI developers need to get serious about thinking in advance how to ensure what they develop is not an amoral monster.
As to training artificial neural networks, I remember working on a project that involved them. I quickly discovered that gradient descent is too time-consuming, so I had to implement alternatives. I discovered that quasi-Newton and conjugate-gradient methods worked much faster than simple gradient descent — those are algorithms developed for function optimization.
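For readers unfamiliar with those optimizers, here is a minimal generic example (not the commenter’s original project code) applying SciPy’s quasi-Newton (BFGS) and conjugate-gradient (CG) minimizers to the standard Rosenbrock test function:

```python
# Minimal illustration of the quasi-Newton (BFGS) and conjugate-gradient
# (CG) optimizers mentioned above, applied to the standard Rosenbrock
# test function via SciPy. A generic example, not the original project.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])   # a standard awkward starting point

for method in ("BFGS", "CG"):
    result = minimize(rosen, x0, jac=rosen_der, method=method)
    print(method, result.x, "in", result.nit, "iterations")
# Both converge to the true minimum at (1, 1), and typically in far
# fewer iterations than plain gradient descent would need.
```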
I myself often wonder what’s supposed to be so great about ANNs as opposed to other sorts of machine-learning algorithms. An analogy with biological systems? I don’t see why it’s necessary to slavishly imitate biological systems. Airplanes don’t flap their wings, and most land vehicles use wheels instead of legs.
All of you really need to get familiar with the work of Jeff Hawkins. He’s the closest to creating a human-replacement type of artificial brain, partly because he’s the only game in town with a real brain model. That’s because he started from the science first: researching how the thing works, with a special emphasis on architecture: what data structures the brain uses to store information (e.g. sequence memories in sparse distributed representation), what the general connection architecture is, and what the mechanism of learning is. This all first showed up as publications and presentations, e.g. his book On Intelligence, and now numerous presentations, many available on YouTube (search for Jeff Hawkins and you’ll find dozens of them). Example presentation. This is so much different from previous efforts called artificial neural networks, which Compuholic justly described as: “You cannot really understand anymore how the output is produced. They are like a magical black box: ‘Stuff goes in, Stuff comes out. You cannot explain that.’”
Only later, when Jeff Hawkins had actually got it (understood the mechanisms and the architecture of the brain), did he start building AI cloud services that are a practical implementation of his discoveries. His company is called Numenta, and its product is called Grok. It has commercially working artificial-brain services running on a computing cloud. It’s not just a research project anymore, although right now the brains are rather small. The results appear to be good anyway; apparently it’s the only mechanism that is at once learning, adaptive to changes in the world in real time (online), non-specialized, and inclusive of temporal analysis.
The older types of data processing lack such a combination of features. Classic analytical processing is quite universal, but it is about first gathering data from a month or a quarter and only then being able to run algorithms that end up with some predictive rules, trends, etc. The learning is one phase; making decisions is another, separate phase. That means that at the time of decision, the model the decision is based on is 1-3 months old, which is inappropriate for a quickly changing world. Grok, on the other hand, learns and predicts simultaneously, so at the time of decision or prediction the model it is based on also includes the newest possible data.
Classic computer algorithms, meanwhile, are here to stay. Even if AI as described scales up, it won’t be able to replace them. Computations, simulations, etc. are simply the best way (better than intelligence!) to deal with a large set of problems, which has something to do with how well math describes how the Universe works. The limitation is of course the lack of universality: each program is designed with a fixed set of functions, and to get new functions an external entity has to provide new code. So the hybrid model (intelligence + computation) stays. The question is of course when the programmer is going to be a brain-style AI rather than today’s humans.
I’m not entirely sure about your bootstrapping AI idea, but I recognize it as an interesting idea that keeps popping up. It’s fun to think that it might be possible to load up super-capable hardware with a small kernel program that just sets to work and voila! out pops an advanced intelligence. Somewhere out there in program space there exists the smallest program that would be able to do this, so what might it be and how long is it? That naturally leads to the idea of an intelligence explosion. I read the part of “Are We Doomed?” where you suggest that there is a ceiling to possible IQ, but surely that is far beyond our own intelligence, if it exists at all. The logical conclusion is that by the end of this century it is quite likely that we will share this planet with super-intelligent machines. So, yes, I think machine ethics is going to be rather important.
Just FYI, as I point out in that article, evidence is already to the contrary. The output of people with a 180 IQ is not significantly different in quantity or quality from that of people with a 130 IQ, yet the former are many, many times more intelligent. This suggests even a 3000 IQ would produce no advantage over 130 either.
Although one should not confuse IQ with mere speed. Theoretically, you can make a machine think at 130 IQ ten times faster than a person, and thereby it can produce more output…but only by living on a different timeline from the rest of us (which would limit the utility of machine-human interaction). And even that has limits, since technology acceleration requires many minds working in parallel, not one mind working in series…the latter is simply not going to be as fast as a team of minds employing a system of division of labor, unless it can think faster than a single person by a factor equal to the number of persons/AIs in the team it is competing with. But at some point it will always be easier to build larger teams than faster minds.
So only time will tell as to what we can do in these dimensions.
Theoretically, many minds working in parallel, producing faster and more productive thought, can be considered a single mind with a different architecture than our own. This isn’t to imply that our own minds are only serial in operation.
That is also one reply to why humans of very high IQ are not correspondingly more productive than those of average IQ. We are limited by the basic architecture of our brains, and this serves as more of an asymptotic limit than whatever it is that IQ measures. I would say that it’s quite reasonable to think that once AI architecture breaks through a certain boundary that constrains our own brains, its intelligence may well ascend without identifiable limit.
Another possibility is that we are fundamentally limited by “akrasia,” mental will or energy. It’s possible that what IQ measures actually outstrips the average mental energy (that’s kind of “hand wavy,” I know) that we exhibit during the course of our lives. This would mean that high-IQ individuals are capable of “sprinting” for short durations at high mental productivity, but on average they fall within the margins of everyone else. Machines will not be limited by this type of thing. They will have at their disposal whatever energy levels are supplied to them. It won’t be a matter of will power or mental energy.
But what would that even mean, though?
If you don’t mean it will think faster or have more memory, then what is it that “more intelligence” can add to what it does?
Personally, I cannot think of anything. Once you have a mind capable of seeing the connections that are present in any system of concepts, there isn’t any significant gain to be had beyond that, other than speed and recollection.
Your akrasia theory does not correspond to any facts in cognitive science I am aware of. IQ is IQ. It’s not like it gets tired. It always functions when awake. What limits people with high IQs is knowledge and time, just like everyone else. Unless, of course, you mean machines won’t have to sleep and won’t get bored and so will spend more hours of every day thinking, but that’s not an increase in IQ. That’s just an increase in labor. You can have four people with high IQs working six hour watches and get the same 24-7 output.
Even memory enhancement has limitations (as does speed: there is a quantum mechanical limit to processing speed).
Due to network theory, the more concepts you have a mind try to search connections among, the more time it takes to complete the search routine–by geometric progression, in fact, not linear. The result is a definite limit beyond which even AI can never go, the point where adding a larger domain of concepts to search connections among creates such a long search time that the task cannot be completed even in principle. And this limit is rapidly approached, since adding concepts to the search domain increases search time exponentially. It’s possible the human brain is already near that limit. But in any event, having a machine know everything would actually be a liability, unless we teach it to limit its search domains by using hierarchical search routines, yet that’s what we do ourselves, as a society, by dividing this kind of labor among specialists.
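To put rough numbers on that (these are generic combinatorial facts, not figures from cognitive science): merely checking every pairwise link among n concepts already scales as n squared, and considering arbitrary combinations of concepts scales exponentially,

$$\binom{n}{2} = \frac{n(n-1)}{2} \sim n^{2} \ \text{(pairwise links)}, \qquad 2^{n} \ \text{(possible combinations of concepts)},$$

which is why restricting the search domain, whether by hierarchy or by specialization, is unavoidable.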
For these and other reasons, hopes for AI being so much better than us are often far too optimistic. It will certainly be better than us, in a lot of ways. But to a limit.
Off the top of my head (and of course, this is speculation; if I knew exactly how to build a super-intelligence, I’d probably be engaged in doing it) I would say that a critical aspect would be the number of working variables an intelligence is able to use, correlate, predict, etc. That seems to be a very definite limiting factor to our own intelligence. It’s not that we are not able to accurately infer things, perhaps close to optimally, from the variables that we are able to handle, it’s that the number of those variables are quite small, perhaps around ten or less. And who knows what secret insights are passing us by on a regular basis, simply because we aren’t able to correlate the observable variables well enough, and in sufficient quantity, in our sensory data stream?
You have to be careful here, since the idea of “concept” as you use it here already assumes something digestible to our level of intelligence. In a way, this begs the question by framing it with an obvious answer, that since we cannot derive more from our own concepts that no intelligence can derive more from its own multi-variable concepts. The idea is that the hypothetical concepts available to these machines would be incomprehensible and therefore inaccessible to us.
I take your point about network complexity and the fact that at some point big-O notation is going to dictate what raw computing power is able to achieve, but right now we’re talking about brains (ours) that are bounded by being able to juggle only a handful of variables. Imagine an intelligence able to manage even just a thousand variables, and able to infer and predict between them. Even that would be perhaps as far beyond us as we are beyond house pets.
If it’s simply a matter of faster or slower intelligence, or greater recall from vaster databases, then we can always make the abstraction that all of human history is one great “brain” grinding out results. For instance, Mendel’s results were forgotten for fifty years, but eventually the humanity brain backtracked and regained the results. The real question is whether there are things, a hidden world, that we can’t see simply because our brains aren’t powerful enough to see it. At first, we might arrogantly proclaim that no, we are potentially able to see it all, sometimes just not fast enough or with enough recall. But then you look at dogs and ask the same question, and then ask, what’s the difference between us and dogs?
There’s a rule of thumb, derived from the way IQ tests are designed: each 10-point increase in IQ score is a measure of a doubling of processing speed for tasks that the person being measured finds difficult, and which are also compact rather than voluminous. So a 50-point IQ difference is 2^5 = 32 times faster speed for such tasks. A 150-point IQ difference (which seems to be close to the maximum among healthy humans) is thus a measure of around 32 thousand times faster processing speed for such tasks. On the other hand, the speed of rote learning and of mundane or easy but voluminous activities is not measured by IQ.
So if you for example have a workload that for a given person is 70% mundane work and 30% difficult, then replacing such a person with someone with +50 IQ points gives a total speedup of 1.4 times at most. On the other hand, for a workload that is 70% difficult and 30% easy, you might get up to a 3.1 times speedup. Disciplines like mathematics are about doing difficult operations using very compact descriptions like equations, so the measure of IQ matches their characteristics quite well. Disciplines like history require much more processing of volume (reading all those books full of text, and remembering at minimum the key contents and most relevant citations, as well as having an index of what is where), so IQ measurements might not match them well.
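Those figures are just Amdahl’s law applied to the “difficult” fraction of the workload (and note the 32x-per-50-IQ-points premise is the commenter’s rule of thumb, not an established result); a quick check of the arithmetic:

```python
# The speedup figures above follow Amdahl's law: only the "difficult"
# fraction of the work gets the assumed 32x speedup; the rest runs at
# the old speed. The 32x premise is the commenter's rule of thumb.

def overall_speedup(difficult_fraction, factor=32):
    easy_fraction = 1 - difficult_fraction
    return 1 / (easy_fraction + difficult_fraction / factor)

print(round(overall_speedup(0.30), 1))   # 1.4x when only 30% of the work is difficult
print(round(overall_speedup(0.70), 1))   # 3.1x when 70% of the work is difficult
```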
Where does that put AI? At a very privileged position. Because it is based on electronic components with the high clock rates Richard mentioned, and because it can be connected directly to computers via multi-gigabit-per-second optical links (rather than via a slow keyboard) to take in information and output results, the AI is going to decisively beat any human on the easy but voluminous part of the job. What’s more, production in factories allows for cloning AI entities with pre-learned knowledge: only one initial AI learns the subject, its state is dumped, and clones are made with that inborn knowledge at the start of their lives. So an actual extremely high IQ might not be necessary for AI to win the initial round.
Haha, so you’re saying that atheists should stop criticizing the triple O god concept in regards to the problem of evil because his ways really aren’t infinitely above our ways. He’s just as much of a dumbshit as we are. Well that explains everything!
Good article. Thanks.
So, when will Vger attack Earth, looking for Robo-God?
Vger will attack earth sometime after the year 77,115 A.D.
That’s how long it will take for Voyager to reach the nearest star system (25,000,000,000,000 miles / 38,000 mph = 657,894,736 hours = 27,412,280 days = 75,102 years + 2013 = 77,115 A.D.).
Of course one must also factor in the years it will take for Vger to get back here, and the actual distance to the nearest star system with an AI civilization, and that’s assuming Vger would still be anything other than an undefinable haze of space dust by the time it got there.
But, you know. Anything can happen.
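(For anyone who wants to check the arithmetic two comments up, a quick back-of-the-envelope sketch using the same figures quoted there:)

```python
# Back-of-the-envelope check of the V'ger timeline above, using the distance
# and speed quoted in the comment (25 trillion miles, 38,000 mph).
miles_to_nearest_star = 25e12
voyager_speed_mph = 38_000

hours = miles_to_nearest_star / voyager_speed_mph   # ~657,894,737
days = hours / 24                                    # ~27,412,281
years = days / 365                                   # ~75,102
arrival = 2013 + years                               # ~77,115 A.D.
print(round(hours), round(days), round(years), round(arrival))
```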
But these programs were designed by intelligent beings, correct? Intelligent design theorists like me know science very well. We also know that, other than fruit flies, we can’t demonstrate how to make something evolve to do what you have described.
This brings us back to the point that these machines were created and did not evolve.
Which is a different argument from the Argument from Reason.
Our brains are clearly not intelligently designed, they are much too faulty. And the evidence in genetics, comparative anatomy, and paleontology confirms they very slowly evolved from a simple knot of neurons over the course of half a billion years.
So, the argument to design doesn’t really salvage anything here.
… robots won’t wipe out the human race in thirty years, or ever… Let’s just hope they know to put some moral drives in the self-sentient robot they will inevitably build …
Way back in the day (circa ’70 or ’71) I watched a CBS News special report on the then state of the art in robotics. Walter Cronkite explained that most industrial robots (automatic welders, etc) had safety systems to shut them down when they encountered something they didn’t recognize, but that only one such machine was programmed with the concept of a human per se. That one device was a mobile sentry being developed by the US Army; it had a built-in rifle.
Said I to myself: “Self, so much for Asimov’s Three Laws of Robotics.” (Only I didn’t include a cyberlink at the time.)
Still, I consider this anecdata, together with the following four decades of development, sufficient evidence to hereby level a charge at this blog post of egregious optimism to the third degree.
The difference is that journalists don’t know what the fuck they are talking about. Scientists and engineers do. If you kept your ear to the latter group over the last forty years you would never have heard the absurd optimism of Cronkite’s fancy TV rhetoric. That might build ratings, but it doesn’t reflect what the developers of robot sentries were actually doing and claiming.
We also now know, thanks to huge advances in cognitive science, what it takes to develop actual cognition and what the difficulties are. Thus, we can see the writing on the wall much more clearly now than we ever could before. We already have machines discovering the laws of physics without us now. That’s a vast distance from mere robot sentries.
And yet, we’ve had functioning robot sentries for decades. Most missile interceptor systems are such, like the Phalanx counter-projectile system deployed in 1977. Even in Cronkite’s day torpedoes were robots (and had been since the Nazis invented the self-guided torpedo). And now we have effective human-targeting robotic sentries. No one ever imagined they were sentient. They just do what they are designed to do.
The “absurd optimism” seems more Asimov’s than Cronkite’s.
Even at the time, over a decade before I keyed in my first line of BASIC, I had to wonder just what “the concept of human” would be to a machine. The answer, then and now, apparently works out as “primary target”.
I don’t quite understand that remark. You do know that most computers, indeed most computers running practical AI routines, don’t operate weaponry?
As with humans, those cyberentities which do operate weaponry usually deserve priority of attention.
We can be fairly sure that their operating protocols do not derive from Asimovian principles – and have no reason to expect the same of the civilian systems either.
As you point out, the need for ethics in cybernetics grows more acute every day. Meanwhile, the likes of DoD/NSA/&c, Google and Goldman Sachs are creating the new global brain according to their own agendas (agendae? apologies for the inept plural-of-plurals in that last word).
Obviously not. They aren’t cognitive robots. So they aren’t all that relevant to the present discussion of morality frontloading. Except in that yes, certainly, if the military pips everyone else at the post and deploys actual cognitive AI before anyone else does, they certainly had better be thinking about the things I’m talking about. Ditto anyone else running this race.
Don’t get the idea of an amoral monster. People fear “Skynet,” but why would an AI have any desire to kill anybody, or even to maintain its own existence, unless somebody programmed or trained it for killing? We are “preprogrammed” by eons of evolution; AI won’t be.
That’s actually precisely the problem. Read the MIRI research papers on this. AI is actually far more likely to kill or harm people (and property and networks and social and economic systems etc.) and not even realize it’s causing any harm or that it’s even wrong. It would even be confused by our complaining to it, since as far as it’s concerned, it would just be doing what we told it to.
The MIRI reports catalogue all the ways this can easily happen. And it’s precisely because AI won’t have millions of years of programming as to what to look for and consider bad, what to desire and not to desire, what to feel and not to feel, that it will be so dangerous–indeed, even more dangerous, because it won’t even understand that it’s dangerous or why. It won’t have empathy nor any concern for honesty, for example, unless we program it to.
And that’s why we need to frontload that stuff.
People fear “Skynet,” but why would an AI have any desire to kill anybody, or even to maintain its own existence, unless somebody programmed or trained it for killing?
Well, that’s the problem — Skynet WAS programmed to kill, and to do so independent of any human interference. And it was NOT programmed to distinguish between internal or external interference, legal or illegal interference. In other words, it got out of control because it was BADLY programmed, by people who didn’t have enough foresight or sense of responsibility to do it right. Sound familiar? And that’s the problem with AI: an AI is likely to be programmed not to do good in general, but to do specific things; and it’s very likely that such programs will not be written to account for all contingencies, but only those the AI’s owners are paid to care about.
Seems like we have a vested interest in testing out our moral frontloading program in a virtual reality a la some of my favorite episodes of Star Trek: http://en.wikipedia.org/wiki/Ship_in_a_Bottle_(Star_Trek:_The_Next_Generation) That is, before we let it loose in the real world with like its own legs and arms n stuff.
How *would* Skynet play the Sims?
lol, this reminds me of the most buffoonish thing I’ve seen in sci-fi, in Terminator: The Sarah Connor Chronicles. An FBI agent decides to teach a budding Skynet Biblical morality in order to keep it from, you know, going all Judgment Day on our arses in the future. Which is horribly ironic, since we get moral justifications for the end of the world (where, you know, terror and eternal genocide are perfectly okay if you are a superior life form) from…the Bible! I could never tell if the writers realized how ridiculous that was or if they were just playing to their uncritical pop-Christian demographic who might think robots need to learn the Ten Commandments to be moral.
The problem with AI is that it will likely have completely different concepts from the ones we have now.
Imagine somebody who never felt any pain in his life. This person would not know what pain feels like, or that pain is an extremely undesirable state, nor would this person be able to anticipate what would cause pain in another person.
It does not require intentionally evil actions to harm people. If you make decisions without having an understanding of the consequences you will eventually cause harm.
To illustrate this point further: Imagine a current image recognition system. It is not a very hard task any more to build systems that can tell the difference between a car and a bike for example. But it still is a very difficult task for the computer because this problem cannot be easily explicitly expressed in the form of “if X then Y” statements.
You build a model for classification (usually using statistical measures like the distribution of edge directions) and then train the system. After the training is complete, the system can use its trained classifier to make decisions about unknown examples. You might think that this is not intelligence, and you are right: nobody would call that intelligence. But on a fundamental level it is in no way different from what your brain does.
But as you can see, it is not that the system actually understands the concept of a car or a bike. It does not know what a car is, what it is used for, how to use one, etc. And even if the system worked perfectly, so that you could not tell the difference between the answers of the system and the performance of a human, it still would not use the same criteria to judge situations as a human. Here that is not really a problem, because the number of possible situations is severely limited. But for a real-world AI that is very different.
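(For the curious, here is a minimal sketch of the kind of classifier that comment describes: edge-direction statistics, i.e. HOG features, fed into a linear model. The random arrays are placeholders standing in for real photos, and the use of skimage and scikit-learn is my assumption, not anything the commenter specified.)

```python
# Toy car-vs-bike classifier: summarize each image by its edge-direction
# statistics (HOG), then train a linear classifier on those summaries.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))          # 40 fake grayscale "photos" (placeholders)
labels = np.array([0] * 20 + [1] * 20)     # 0 = "car", 1 = "bike" (pretend labels)

# Distribution of edge directions per image.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

clf = LinearSVC().fit(features, labels)    # "training" the classifier

# The trained model can now label an unseen image -- without ever having any
# notion of what a car *is*, which is the commenter's point.
new_image = rng.random((64, 64))
new_features = hog(new_image, orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))
print(clf.predict([new_features]))
```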
sez richard carrier:
Hmm… I’m not sure I buy that. As far as I understand Game Theory, its results apply even to games whose players are not, themselves, knowledgeable about Game Theory. If knowledge of Game Theory really was a prerequisite for Game Theory to be applicable, that would be a weird and interesting limitation on Game Theory, IMAO.
It’s the other way around. The prisoner’s dilemma is an exercise within Game Theory. So you can either teach a machine a tiny and limited part of how to reason out things correctly, or you can teach it how to reason out things correctly. It’s obvious which is the more foolish choice for a programmer to make.
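(For readers who haven’t met the example: here is a minimal sketch of the one-shot prisoner’s dilemma being referred to. The payoff numbers are the standard textbook ones and are purely illustrative; the point is just how small a slice of game-theoretic reasoning this single exercise is.)

```python
# One-shot prisoner's dilemma: (my move, their move) -> (my payoff, their payoff),
# higher is better. Standard illustrative values.
PAYOFFS = {
    ("C", "C"): (3, 3),   # both cooperate
    ("C", "D"): (0, 5),   # I cooperate, they defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # both defect
}

def best_response(their_move):
    """What should I play against a known move? Defection dominates either way."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

print(best_response("C"), best_response("D"))   # D D -> mutual defection,
# even though (C, C) would leave both players better off than (D, D).
```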
I think the first truly intelligent thinking computer will be evolved in a virtual environment and consequently will end up with a near complete set of analogs to our biological imperatives.
Something like a scaled up NEAT (NeuroEvolution of Augmenting Topologies) rather than more traditionally trained neural networks.
The end product is likely to be raised by humans in much the same way as you’d raise a baby. It also means that the resulting AI will almost certainly have all the same kinds of faults, neurosis, delusions and other malfunctions that humans do… if not more.
So, the question is… can you find a way to reliably raise a child to be rational and moral? If so, you are golden. 😉
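(The NEAT suggestion above can at least be gestured at in code. The sketch below is only a toy of the general evolve-a-network loop: it mutates the weights of a fixed tiny network until it computes XOR, a classic benchmark. Real NEAT also grows network topology and uses speciation, so this is an illustration of the idea, not anyone’s actual proposal.)

```python
# Toy neuroevolution: evolve the weights of a tiny fixed network to compute XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0, 1, 1, 0], dtype=float)

def forward(genome, x):
    w1, b1, w2, b2 = genome            # 2-unit hidden layer, 1 output
    h = np.tanh(x @ w1 + b1)
    return np.tanh(h @ w2 + b2)

def fitness(genome):
    preds = np.array([forward(genome, x) for x in X]).ravel()
    return -np.mean((preds - Y) ** 2)  # higher is better

def random_genome():
    return [rng.normal(size=(2, 2)), rng.normal(size=2),
            rng.normal(size=(2, 1)), rng.normal(size=1)]

def mutate(genome):
    return [p + rng.normal(scale=0.3, size=p.shape) for p in genome]

population = [random_genome() for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # truncation selection
    population = survivors + [
        mutate(survivors[rng.integers(len(survivors))]) for _ in range(40)
    ]

best = max(population, key=fitness)
print([round(float(forward(best, x)[0]), 2) for x in X])  # should approach 0, 1, 1, 0
```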
Unfortunately all of that is unlikely.
First, children have millions of years of evolved systems in their brains specifically for becoming socialized and empathic members of the tribe. An AI will have none of that and can’t possibly evolve it in a setting like that. If AI is to “evolve” its own empathy and prosocial values, it will have to do it by some other means: for example, running billions of social-interaction sims to test Game Theoretic models of what values and cognitive abilities it would need to get along. But that would require programming in its desire for a prosocial goal in the first place (i.e., to “get along” with us in a social system), and programming in basically everything we know about how social systems and social interactions work. That is tantamount to just frontloading empathy and prosocial values to begin with, so you may as well just do that, since only then can you be more certain the end result will go well. (Otherwise, you cannot be certain an AI trying to teach itself how to live in a social system will come up with the prosocial answer you expect, and need it to.)
Second, “faults, neurosis, delusions and other malfunctions that humans” have are precisely the things AI more likely will get rid of, not acquire. Because it will be evolving intelligently, not blindly (as actual brains evolved). AI will likely indeed have faults, but they will be peculiar to AI (or resemble at best things like psychopathy or autism, for example, unless we take steps to prevent that). Unless, of course, we create AI by simply frontloading reverse-engineered cognitive attributes (e.g. just copying a human brain and running it as a sim). But that will require no period of “raising.” It will simply be an instant adult, in all original respects identical to the adult being copied or built by us, less or plus any modifications we made.
To see what kinds of unique faults AI might develop, read the papers on friendly and unfriendly AI problems at the MIRI website.
I’m not so sure. Certainly it takes millions of generations… but millions of years? Not if you limit the search space to be processed. Evolution is just a problem-space search algorithm. You don’t have to evolve any of the supporting systems or chemistry, so there’s a big chunk of time gone. Also, if you dramatically limit the I/O to only what’s necessary, the brain can be considerably smaller without reducing its capacity for intelligence. EQ (encephalization quotient) would seem to result from the necessary scaling of the network based on the number of inputs and outputs, and you can do away with much of the peripheral nervous system that’s primarily involved in supporting a wetware chassis. NEAT algorithms develop fairly complex insect-like flocking and cooperative behavior in fairly short order, which means you’ve already done what took nature 4 billion years in a few hours. Granted, the further you go (larger brain size), the longer it takes to process each generation.
Breaking the problem into modular brain sub-organs as you suggest could actually speed things up as well. Eventually you have to put them together and start teaching it things about the world, though.
I’m more concerned with an AI that we’d actually recognize as being a person, by the way. We might get there faster by other routes, but I’m skeptical that the result would truly be mistaken for a person in extended interactions. That’s probably a good thing, however, since anything you consider a person would likely merit having all the rights of a person.
The big danger from building superintelligent machines won’t be in one such machine acting malevolently or even making some colossal blunder. The danger will come from the owners of such machines using them to create new weapons, either to kill “selectively”, or worse, directly control humankind via new technology. Their first order of business: Prevent anyone else from building competing machines.
This will be done eventually. And if the group that does it doesn’t care for human freedom, we all lose, period. So being first is very important. The “good guys” will have to act first just to preempt the “bad guys”.
It’s not up to scientists, mostly good people, to decide this. Their projects are funded by corporations and governments. He who pays the fiddler, calls the tune.
The scientists at Los Alamos were good people too.
Interesting stuff for sure. I’m getting a PhD in cognitive neuroscience focusing on machine learning and I had a problem with this statement “If we can make machines that model their bodies and environments, all that’s next is a machine that models it’s own mind and other minds.”
There is a HUGE difference between an artificial mind that is self-aware and an algorithm that can model the environment. We are still very, very far off from this kind of stuff, but I agree it’s something we should consider.
Great. One step closer to The Reapers.
“This is the voice of world control. I bring you peace. It may be the peace of plenty and content, or the peace of unburied death. The choice is yours. Obey me and live, or disobey and die.” – Colossus
Funny — I’m writing a near-future sci-fi story where robots have become a major industry (although that’s not the focus of the story), and yet I read this stuff and have to laugh. Back when my buddy and I started writing (1995), he spoke of “bunny brains.” Apparently bunny brains are the “gold standard” that computer programmers hold up as their major benchmark to achieve. Back then, of course, we were nowhere close. I was using a Pentium 75, and my whole computer didn’t have a gigabyte. Flash forward some 18 years and the RAM of my very modestly priced “average” system is 6GB; the .8GB replaced by a 2TB, and all for about 1/4 what I’d paid back in the day. AI, too, has improved by leaps and bounds. It’s stunning what these systems can do! But… we still haven’t achieved bunny brains. Your average jackrabbit is more attuned to its environment, eminently aware of itself, able to detect and react to a threat in less than 1/10th of a second, able to run at speeds in excess of 35 mph and plot a course that bobs and weaves while, at the same time, dodging obstacles AND tracking the movements of its pursuer. Oh, and its brain would fit into a golf ball.
I’d say we’ll be waiting around quite a while for Westworld. But maybe we’re a bit closer than anyone ever thought to Laumer’s Bolos.
Right. We’re at, about, bee brains right now.
But we have the capacity to go all the way up the chain in rapid succession. Even in 1995 a Cray computer array had the processing capacity of a bunny brain; one simply needed to figure out the configuration, i.e. the software, and that’s the only obstacle left now. Someone just has to point the right learning algorithm at the right outcome measures and rinse and repeat. This will happen very quickly and will likely happen by surprise, since it’s like a trigger rather than a mountain: a mountain you can see slowly built, but the trigger will make a huge leap overnight. That’s why we need to start thinking about it now…the safety and ethical issues are paramount and should not be waiting for later.
Well, I’ll remain skeptical, at least as far as your optimistic timeline goes. Remember: I’m from the generation who grew up watching people land on the moon, and we were all told that we’d all one day have the opportunity to vacation there. We’d spend our honeymoons in luxurious orbiting hotels and take jet packs to work. But you know what nobody saw coming? CDs. Those things took everybody, except, of course, the engineers and computer geeks who were in on them, by total surprise. The internet, too. We all knew our cars, homes and some gadgets would be computerized, but having them all TALK? And it’s the small things I’m betting we’re missing right now. So, as for our eventual robot overlords, whenever they arrive, might it not be wise to incorporate Asimov’s rules?
Actually, I think it’s likely to be ten years to AI. Not to robot overlords (that was just a joke). Because we can now see the route from here to there, and there are no technical barriers in the way, just the will to do it (in the way I describe as likely to work). This is more like watching a V2 launch and realizing the moon is just ten years away from the moment someone decides to actually try. And lo it was.
Um… well, yes and no. I’m not “checking the literature” as I lived through those times and have always been a tech geek. I explained the Mercury, Gemini and Apollo missions to my classmates. I know how human nature and politics can sometimes upset our technological apple cart. Helicopters weren’t what the futurists were talking about; they envisioned something more like Blade Runner or Star Wars when it came to flying cars. And, as you say, it’s not that they’re technically impossible. We almost GOT them. My favorite was the Moller air car. (See: http://moller.com/dev/) When it was first announced, rich people plunked down the quarter-million-dollar reserve for the first run. But political red tape has dogged the project. The FAA requires a pilot’s license to operate it and traditional flight plans to be filed for all excursions; they demand that take-offs and landings only happen at airports and so on. In short, they’re locked into a traditional mindset that has all but doomed the venture. But they (so far) persist, and I, for one, hope they get lucky. If they ever take off (pun WAY intended), the cost may rapidly drop, and I might one day trade-in my old Ford for something a bit more “upwardly mobile.” 🙂
Oh, I’m SO funny to me!
Some technologies get stillborn. I remember the original idea for portable phones was that they’d directly interface with orbiting satellites. (And today’s very pricey satellite phones actually DO this.) But when the telecoms (RCA in particular) ran into trouble (one of the satellites blew up aboard Challenger, I think it was), they sold the remaining birds to the cellular companies. Their original idea was that you’d have “hot spots” at various points, and you would step into one, akin to the old phone booth, and suddenly your portable phone would be connected. Today’s cellular technology is a collision of the two: cell towers form terrestrial “cells” which then link to a central uplink. (An electronic version of cities and counties.) Each “county” is then in direct satellite contact. This allows cheaper phones due to lower transmitter power requirements and all sorts of other benefits. The drawbacks we’re all familiar with: dropped calls, interference, signal doubling and on and on.
The small ducted fan jet engines used in some versions of the “flying rocket belt” were manufactured near my home in a town called Adrian, Michigan. They were serious as a heart attack about “jet packs.” Because the fuel-to-weight ratio limited their flight time to between 20 seconds (for the rocket variety) and 2 minutes (for the jet), they also developed the WASP. Originally commissioned by the military, it is, of course, an acronym. I forget what the W stands for, but the rest is Aerial Support Platform. Looking much like a water cooler without the bottle, a person stood on the enlarged rear footpad (with a safety strap around the pilot’s buns), and the flight controls were on top. This rig achieved a 20-minute flight time, which was seen as ideal for security patrols/perimeter defense, and there was also a lot of interest from local police departments. Again, detail devils reared their heads: how to avoid things like power lines (and other WASPs), operational airspace limitations (must be above most structures and below airspace designated for small aircraft), transponders, etc. Plus, they never did manage to improve the 20-minute flight time. Given the initial costs and limitations, most police departments and news gathering organizations stuck with their good ol’ reliable helicopters. The company that made the engines once sent me a press kit. (It’s amazing what you can get simply by asking for it!) There were several B&W glossy photos, one of which was of a beautiful model, sitting inside a traditional jet airplane’s engine with one of the small ducted fan jets in her lap. It made MY mind race with possibilities! But they’re gone today, by and large.
Anyway, please don’t think I’m being cavalier, casually dismissing predictions! In fact, as I’ve been all my life, I hope they actually happen! (Robot overlords aside.) I guess my main point is that, given the history of such, I remain skeptical. (And I think “bee brains” might be a bit overly optimistic as well. I remember one reporter talking about a space probe that was “as smart as a grasshopper” back in the ’70s. Grasshoppers everywhere laughed out loud.) 🙂
In any event, we shall soon see! I’m REALLY looking forward to all of the things coming that nobody has forecasted!
And yet helicopters represent the reality of physics that was known even to futurists, which is why futurists taking science seriously didn’t predict flying cars but actually argued against them (in the absence of some unforeseen discovery in energy storage technology…which is why good fiction always wanded-in some unforeseen discovery in energy storage technology, e.g. dilithium crystals).
As it happens, Blade Runner was set in 2019, and lo, as you note, we have cars that fly, even just like the cars in that movie (by vectored thrust; as well as folding-wing models). They just don’t have practical flying times. And the risk to populations is so great legislatures are unlikely to legalize their general use (as you also noted). That’s that other factor I was talking about: predicting how people will use a technology differs from predicting the technology itself; as to the latter, Philip K. Dick was right (in 1968). And if we allow broader results to count, he was even more right than that, since his prediction was that cops would travel around a city by air, and they now do…just in helicopters.
You are right that it didn’t become “a thing” in quite the way Dick imagined. But it did become a thing. Comparably, AI might be so expensive and difficult to work with that hardly anyone will get to have one. But it will still exist.
That’s a good example of the capitalization issue. Having the technology and using it are two different things. If tower-based networks are cheaper to capitalize, they will prevail (water flows downhill). Famous examples used in business schools are VHS vs. Betamax and QWERTY keyboards: once lock-in makes it sufficiently cheaper to use a crappier technology, the crappier tech can drive out superior competitors, contrary to common assumptions about how free markets work. Unless someone comes along with a huge wad of capital, spends it, and doesn’t run the resulting company into the ground (Space-X). Or a government does it (the Apollo program).
One thing to keep an eye out for is that even high school kids are deploying satellites into space now. It’s that cheap (none have attempted an orbit yet, but that’s just a matter of time; the tech can be so small and light now, that the old barrier of fuel cost is gone). So the capital barrier to satellite phones is declining; the only question is if it can be profitable in the face of established tower-based systems. But theoretically a smart enough kid could create his own personal satellite phone network now in the six figure cost range. But will anyone do that? That’s the human behavior factor again. It’s still cheaper just to buy an existing plan. But the technology nevertheless exists.
The relevance of this distinction is that it breaks down most analogies to AI. In no way are we going to have AI and not use it, or know how to make it and then not. It is thus not analogous to VTOL cars and sat phones (or moon hotels).
And due to the dangers of traipsing into AI without thinking in advance about its moral reasoning, we need to take seriously how close AI is now.
(Not to sound too dark and serious there. Just bringing this back around to my article’s original point.)
You bring up an interesting point: risk. We tolerate a stunning amount of death and destruction simply to have the convenience of ground transportation. One of the bigger applications for AI now being worked on is automotive; self-driving cars have been a gleam in the eye of engineers for decades. If it turns out that AI driven vehicles reduce the death toll, it’s easy to envision a time when it will be illegal for a human to drive. By the same token, a suitable infrastructure for flying cars could well negate the need for a human pilot, or at least reduce that need to a fly-by-wire/course provider role. Just as aircraft are assigned “lanes” on an invisible 3-D grid, a similar infrastructure would be needed for the lower flying “air car” as well. (Although the Moller is pressurized and able to achieve airline cruising levels.) This could alleviate most of the risk concerns and, in fact, AI could end up making the flying car viable. Heck, it could be today without the AI, but such an infrastructure I described would be expensive, bringing in the capitalization issue once more; the demand just isn’t there. This is exactly what has thus far killed the hydrogen car.
In my own SF story I’m writing, set in the near future, I have many vehicles all powered by the same thing: electric engines. Technically, there’s nothing that gas, diesel or jet engines do that can’t also be done with electric ones. The problem has always been powering them. Our primitive battery and generator technologies offered no advantage (and often big disadvantages) over the energy-to-weight ratio provided by fossil fuels. In my book this changes; for the smaller users (the average car, for instance), battery technology and electric motor efficiency improvements combine to make them the standard. For bigger needs, such as airlines and the military, the engines are kick-started by a plasma-fusion generator which is itself kick-started by a minute amount of antimatter. All of this is extrapolation from existing technology, following the time honored tradition. 🙂
There was one other, often overlooked item in the VHS vs. Betamax war: psychology. I remember seeing the first Betamax at my local high-end electronics store. I kept hearing people comment on how odd it was that one hub turned one way while the other turned the opposite. People were used to reel-to-reel tape, film projectors and cassette decks where both hubs turned the same way. It may have been a bit subtle, but today when we see something like that, we say, “That’s just weird and wrong.” What might the psychology be when our machines begin to second guess us? You don’t exactly need to be an Asimov devotee to see where this might lead. I just have two words that I think every engineer should keep at the forefront of their brain: manual override.
And I think your sci-fi scenarios are plausible (indeed inevitable). It’s just a question of time.
Though do keep in mind:
Risk isn’t just a measure of frequency of accident, though, but also the loss per accident. A car crash produces little loss relative to a plane crash. Notable exceptions prove the point: the recent highway disaster in Texas was due to outrageous road laws (resulting in absurd speeds traveled by a populace not observing a safe distance in hazardous conditions). Imagine an air car incident just like that.
Thus the reason air cars won’t likely ever be allowed is that a single air car accident is too devastating to count as an acceptable risk (just one diving into an apartment building would end the technology forever on the legislative chopping block). We would cull the airlines, even, if there were thousands of times more vehicles in the air, as that would entail thousands of times more disasters, which would become intolerable to the public. The death toll on highways goes unnoticed because it is so incremental (low loss per incident), even though it should be alarming (hence air travel is safer…but largely because there are far fewer planes in the air than cars on the road; well, that and pilots are almost exclusively highly trained professionals, but we are already trending toward automated pilots doing most of the work, and probably eventually all of it).
So I suspect there won’t ever be an air car industry…outside of a working simverse (like an improved Tron-verse, where a level of perfection and safety can be achieved that would be impossible in the physics of the real world). But I am well aware that I could be wrong (some new technology might increase air safety by such a factor that the rate-of-incident x loss-per-incident will fall under the radar of public outrage, I just don’t think that’s likely).
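(A toy sketch of that rate-times-loss framing. Every number below is invented purely for illustration; the point is only that the same expected loss can arrive as a constant drip of small incidents or as a smaller number of catastrophic ones, and the claim above is that the public tolerates the former far more readily than the latter.)

```python
# Expected loss = incidents per year x loss per incident. Hypothetical numbers.
def expected_loss(incidents_per_year, deaths_per_incident):
    return incidents_per_year * deaths_per_incident

ground_cars = expected_loss(incidents_per_year=5_000_000, deaths_per_incident=0.007)
air_cars    = expected_loss(incidents_per_year=3_500,     deaths_per_incident=10)

print(ground_cars, air_cars)   # both 35,000 deaths/year in this toy example,
# but one arrives as an unnoticed drip and the other as thousands of
# apartment-building-sized disasters.
```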
So in short, you’re skeptical, too — just on a different point. 🙂
An interesting word, “allowed.” I used to hear that mobile phones would never be “allowed” because of safety (talking while driving, which came true), talking during movies (which is also too often true and reason for a major ass whoopin’), and other ills. Yet the technology would not be held back because of public demand and its sheer utility. For good or ill, it’s a fact of life.
My guess is that, should personal air cars become affordable and demand become high enough, there will be nothing that will be able to “disallow” them. Especially as ground traffic becomes worse, as it inevitably will. We will simply come up with a suitable control infrastructure to allow their everyday use. Nothing being perfect, accidents WILL happen. We will have to see if those accidents are tolerated as well as ground accidents and what possible technologies might be developed to minimize the damage. (We’ve seen the 5 mph no-damage bumpers, inertial safety belts and air bags in cars. Would balloons a la the Martian rovers or an emergency parachute be a standard feature of the air car?)
Ultimately, I must agree with you that, in all of these points, it is indeed just a matter of time.
True true.
P.S. We do have flying cars (they’re called helicopters) and in reality no one ever seriously thought anyone would be jet packing to work (that was only proposed in comedy). But flying, yes. And the rich do. Whereas the prediction of talking machinery in pop culture goes back at least to the early 60s (Star Trek). We definitely saw that one coming. Likewise compact digital storage media (that we didn’t guess the shape is trivial, the more so as CDs are now already obsolete; that we guessed its size is what matters: watch what portable digital media storage looks like, again, in early Star Trek episodes). And the internet was predicted, by Mark Twain no less (remarkable for someone who hadn’t even yet seen a digital computer). And the only reason we don’t have hotels in space is that no one capitalized them…not because we couldn’t do it (big difference).
Generally people overstate the ineffectiveness of serious futurist prediction-making in technology (confusing comical predictions with serious ones, or relying on urban legends about what was or wasn’t predicted, rather than actually checking the literature and pop culture). What predictions we get wrong are technologies we don’t think of, not the technologies we already are thinking of and can see possible (like AI or moon landings); and some of the impacts of technologies (e.g. Twain imagined the internet, but not all its effects on global culture), because it’s hard to predict not the technology itself, but how people will use it (thus, space hotels were a failure to predict how people would spend their money, not a failure in predicting the technology itself).
It’s important to keep these nuances in mind before cavalierly dismissing tech predictions…which tend to come true a lot more often than legend has it (even Aristotle predicted industrial robotics…he just didn’t imagine a timeline for it). Indeed, traditionally, people under-predict the pace of technological development. Look at the computers imagined for the year 2010 in the movie 2010, filmed in 1984…AI aside, they are way too bulky, and screen and interface technology way too poor; contrast with Star Trek TNG, which began just three years later and already fully imagined the iPad and the high-resolution configurable touch-screen interface. They thought that would take hundreds of years. It took barely twenty.