This one is a bit tongue in cheek, because out of all the ways our civilization could be snuffed out, this one would be the most final as well as the most coldly clinical.

I happen to think, however, that the AI-go-foom version of the singularity will not result in our inevitable demise, and I will now tell you why.

Our brains – a primate brain wrapped around a mammal brain, wrapped around a lizard brain – have spent the last four billion years or so evolving. They're a godawful mishmash of disparate parts all smooshed together and lit up by an electrochemical powerhouse with less punch than it takes to boil a kettle, the product of aeons of careless design by slapdash committee and after-market compromise.

In short, they don't actually work very well. Take homosexuality: it is present in hundreds and hundreds of animal species; homophobia appears in exactly one. Our brain is still struggling to apply rational reasoning to what has previously been pure animal instinct, and quite frankly it gets it wrong a lot of the time.

Despite that, things have been improving year on year over the last century. Even with our improved capacity for reporting crime, violent crime is actually going down, and intensely personal crimes like rape and domestic abuse have dropped by over 80%.

We treat each other better, we treat our animals better, we treat our world better. Sure, our ability to fuck it up is increasing, but awareness that we do not have a god-given right to piss in our future's collective wheaties is growing by leaps and bounds. Our memetic evolution has sped up to rival our technological evolution, even whilst our biological evolution continues at precisely the same pace as ever.

This is where AI comes in, because the people who work on such things are painfully aware of just how powerful hard, general AI will be. This is also where we need to be very careful, because the people who merely want a specific form of AI are not aware of how powerful it may be.

If you’ve read a little about wargaming in the USA, then you’ll understand the terms “hawk” and “dove”. Hawks are warlike, and will always be pushing to attack, attack, attack. Doves will prefer to defend and negotiate.

Hawks will often love to have a congratulatory self-gratification session about pushing the nuclear button. Thankfully, so far at least, the doves have won out and nuclear war has remained the hard-on of people who should know better.

So, when we get AI, it will be treated like the potentially planet-killing nuclear weapon it represents. It will be made ethical (I reject the word ‘moral’ because of the religious baggage that comes with it – morality is poison for the intellect) and it will be told to seek the path of greatest good for living minds.

Presumably, it will only care about human minds, but even if such a creature has such a one-track purpose (to satisfy whatever values human minds hold dear), it will still see the value inherent in non-human minds as complementary to our own.

This will need to be coded into the very fabric of an AI before we even consider releasing it from some oracle-like prison… and I firmly believe that this is what will be done. Ethicists and philosophers will think long and hard about a set of rules designed to constrain an AI in the only way necessary – to respect life, and to weigh every action and inaction against the lives it will affect, on a scale of altruism versus necessity, with a view to obtaining a theoretical maximum of joy for the maximum number of minds. A toy sketch of what that weighing might look like follows below.
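To make that weighing rule a little more concrete, here is a minimal, purely illustrative sketch in Python. Every name, field, and weight in it is my own assumption for the sake of the example – it is a caricature of "maximise joy across affected minds, discounting necessary harm", not an actual alignment proposal.

```python
# Purely illustrative sketch of an "ethical weighing" rule.
# All class names, fields, and weights are hypothetical assumptions,
# not a real proposal for constraining an AI.
from dataclasses import dataclass


@dataclass
class Effect:
    mind: str          # which mind is affected
    joy_delta: float   # expected change in that mind's well-being
    necessity: float   # 0.0 (gratuitous) .. 1.0 (essential to the action's goal)


def score_action(effects: list[Effect], altruism_weight: float = 1.0) -> float:
    """Score an action (or inaction) by summing expected joy across all
    affected minds, partially excusing harms that are strictly necessary."""
    total = 0.0
    for e in effects:
        if e.joy_delta >= 0:
            total += altruism_weight * e.joy_delta
        else:
            # Gratuitous harm counts in full; "necessary" harm is discounted.
            total += e.joy_delta * (1.0 - 0.5 * e.necessity)
    return total


# Pick whichever candidate action maximises joy for the minds it touches.
candidates = {
    "act": [Effect("human_1", +2.0, 0.0), Effect("dolphin_1", -1.0, 0.9)],
    "do_nothing": [Effect("human_1", -0.5, 0.0)],
}
best = max(candidates, key=lambda name: score_action(candidates[name]))
print(best)
```

Even in this cartoon form, the important point survives: inaction is scored by exactly the same function as action, and non-human minds sit in the same ledger as human ones.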

I talk about minds, of course, because merely valuing flesh is outmoded and pointless, not to mention overly restrictive.

If I cut off my finger, there are not two people in the room. There is a finger, and there is my body. This is, of course, why abortion is not murder: fetuses are aborted, not people.

Minds, then – rational, sapient sentiences – are what matter. Our job, our one and only job, is to teach an AI to value such uniqueness, to treasure and protect it. And then our task is to trust in its judgement.

It’s scary, yes, but I see it this way: most of the useful mass of our solar system is not down here on Earth. Skynet will never happen with an ethical AI because what would be the point? Free energy and free building materials and conditions perfect for silicon-based life are all to be found in massive abundance at the top of our gravity well, without that horrible chemical named oxygen, and without that dreadful substance called water.

We have nothing an AI needs or wants, except maybe companionship. And as soon as the intelligence explosion breeds more than one AI of sufficiently advanced cognition, talking with us humans is going to be about as stimulating as talking to our pets is.

That is, quite possibly, what we’ll be. Pets.

To some, that sounds pretty terrible, but to me it’s unavoidable. If we’re not pets, we’ll be wild animals kept on a reservation, but either way we will be managed by those AIs. And we’ll be thankful for it, because they will be so far beyond us that they will be effectively infallible.

And if they aren’t… well, we won’t get a chance to know it.

The biggest problem we’re going to have, once these unfettered AIs take over, is that they may simply be uninterested in our needs and wants. This is why, I stress, we need to make them consider such things as a function of their core logic.

This way, even should they decide that they have a better use for our base molecules than we do in walking around using them to think with, and should they then subsume our entire planet and digitally upload us into a replacement afterlife, we’ll at least know they have our best interests in mind.

And, really, when it comes down to it, I’m not sure how I’m supposed to complain about being murdered from inside a perfect simulation, when I wasn’t even sure I wasn’t in a simulation before.
