Machines Will Not Be Like Us, Unless We Want Them to Be

A motorist stops to help someone whose car has broken down.  A person volunteers their time at a homeless shelter.  A man rushes into a burning building to save a child.  Humans are an altruistic species.  We do seemingly irrational things, putting ourselves in discomfort or harm’s way to aid people we often don’t even know.  We are exposed to self-sacrificing behavior so constantly that to a large degree we stop wondering why we do these things.  It’s the right thing to do, we say.  It’s natural.  It may indeed be natural for us, but what of the rest of the animal kingdom?  Other animal species exhibit altruistic behavior, such as mothers risking death to defend their offspring.  Others engage in behavior we would view as barbaric, such as those same mothers eating their offspring during lean times.  So what makes us simultaneously different and the same?  Why do we do what we do?

Some resort to supernatural explanations for this phenomenon.  We are good because that’s how God/Goddess/Gods want us to be.  While there are those who may accept these explanations, I do not (though I say this as an atheist) and will not address them here.  Leaving aside the supernatural, we are left with the natural explanations, specifically that our penchant for altruism came about through evolution, that something in our genes makes us want to help others even at our own expense.  If that is the case, there are a few possibilities.  It may simply be the result of genetic drift, random events such as a natural disaster separating one population from another and thereby changing the make-up of the gene pool.  This would imply that altruism is simply a lucky break, an explanation I find disheartening: possible, but not likely.  It could be that our genes seek to protect themselves (or other copies of themselves in our close relatives) to ensure their continued existence, making our actions more the result of genetic selfishness.  Another option is that altruism arose because it is a desirable trait, that it somehow increased our ancestors’ chances of survival and thus was passed on to future generations.

If that is the case, it seems at face value a poor argument.  Giving up time and resources and putting oneself in harm’s way does not seem like a good way to increase one’s chances of surviving long enough to bear children.  What must be kept in mind is that humans are, and have always been, a social species.  Since the first creatures we would call Homo sapiens walked the Earth, we have survived not as individuals but as a collective, a group.  Viewed in this light, altruism becomes a much more desirable trait, since in the early days of our species the survival of the group often meant the survival of the individual.  An individual might help another member of their tribe with the expectation that the help will be returned at a later date, so-called “reciprocal altruism” (or “you scratch my back, I scratch yours,” as we say today).  An individual may also help another to ensure the survival of a useful member of the group, and thus increase their own chance of survival.

Well, I’m three paragraphs into an article ostensibly about synthetic intelligence and I haven’t even mentioned machines.  There is a very good reason for that.  With the development of machine intelligence coming in the near future – and while they disagree on the timing, most experts agree that it is coming – there is a growing debate over the dangers associated with the construction of sentient machines.  There are pessimists who look at the Terminator and Matrix franchises the way some people look at Nostradamus, as accurate prognostications of the coming machine apocalypse.  There are optimists whose views of the future fall more in line with Iain M. Banks’ Culture novels, where machines and humans live peacefully in a post-scarcity utopia.  Then there are those who fall in the middle, those who can see the potential benefits of synthetic intelligence but also the pitfalls, those who advocate moving forward with research, but with caution.  For those of you who, like me, fall into this category, the side whose arguments we must address is the optimists, because, to be blunt, the pessimists cannot win.  Short of a massive global cataclysm or a huge shift in public opinion, very little can stop research into making machines better.  Therefore I shall address the optimists here.

My problem is not with the idea that synthetic intelligence will change the world for the better (I think it will) or that we will be able to develop machines with a strong sense of morality (I think we can).  My problem is with a belief that seems common in the optimist camp: that morality in machines is inevitable.  Ray Kurzweil has stated his belief that the intelligent machines we will eventually give rise to will view us respectfully, even reverently, because we created them.  We are their ancestors, in effect.  I have one major problem with this, and it stems from what I discussed at the beginning.

We are around kindness and generosity and compassion so much in our daily lives that we take them for granted.  We believe that our altruistic nature is the natural way of things and that our machine children will share this trait with us.  They will not, or I should say, there is no guarantee they will.  We are the way we are because it helped us to survive.  We sacrifice our time, our resources and even more because, millions of years ago, doing so helped our ancestors pass on their genes.  Our machines will not evolve.  They will be created.  Morality will not arise because it aids a machine’s survival or the survival of its group.  It will arise because we put it there.  Or it will not arise at all.
