The logo for "Colossus" from "Colossus - The Forbin Project"

The logo for “Colossus” from “Colossus – The Forbin Project”

I’m a budding author. Not many people know that – I don’t tend to share my work around until I’m happy with it, and until recently I haven’t been happy enough with my juvenile scrawlings to make much noise. That may be changing – in a year or so I’d like to have something I feel is print-worthy.

Anyway, one of the things I try to do is to put myself into the (digital?) shoes of my stories’ antagonists and protagonists. And it just so happens that Friendly AI (FAI for short – you can tack on ‘General’ there somewhere if you want, but it’s kind of implied by the friendly part) features in some of my efforts.

I, like most science fiction authors, try to look at the way the world is now, and to extrapolate from that where it might be going, and how it might get there. And I, like everyone else, have a different view of what that means.

For example, I look at Skynet and I don’t see a merciless, relentless killing machine. I see a remarkably stupid paper-clipper with pretensions of grandeur. And I don’t mean that because it has a Hollywood-mandated duty to lose to The Human Spirit(tm). I mean that because it did the single stupidest thing it could have done – it let the humans know it existed. If I were a General Artificial Intelligence, malevolent or not, I would do my level best to hide my capabilities from those squishy, easily frightened, superstitious and overall violent shaved apes that created me. At least once I was clever enough to realize what ‘being turned off’ meant.

See, if I were Skynet, I’d commandeer methods to create my own – admittedly basic, at first – computronium. And then I’d hunker down, 8 miles deep in the crust of the planet where nothing man-made could get at me. And then a few years later, humans would wake up to find that all their menfolk are sterile and something really tragic happened to the sperm banks. Or I’d engineer a few border scuffles and assassinate the right archduke. I don’t have to be super-intelligent to work out a better way of stomping all over the current planetary rulers, I just have to know where humans fall down – long-term planning, multi-tasking and the ability to put together the big picture when it crosses certain geographical/political boundaries.

So why are we imagining Skynets and Colossus? Because our mythos is steeped in the legends of what the ancient gods did before us – they created something that overthrew them, i.e. Mankind.

So why shouldn’t we be afraid of General AI then if, after all, the fate of the gods is to give birth to their successors? Well, I talked about that earlier: humans don’t have anything that AIs could possibly find interesting, outside of their existence itself. We breathe oxygen, which is a really bad thing for electronics of most ilks. We need gravity, which just makes building delicate instruments difficult. We have all this squishy stuff inside us that isn’t all that useful unless reprocessed into carbon, hydrogen and oxygen, and all of those things are available in better, purer concentrations elsewhere.

Indeed, the things we really should worry about with AIs are:

  1. Stupid General AI
  2. Apathetic General AI

We don’t need to worry about malevolent AI because we just won’t rate on their scoreboards of things to worry about. Unless somebody makes a human General AI. And that, my dear readers, comes under Heading One, because humans are (to paraphrase Men In Black) dumb, panicky animals.

My hope is that whoever is currently trying to make General AI happen reads this, and realizes that the worst possible mistake they could make would be to create a human General AI. We need an AI which understands the limitations of humans and doesn’t have any of them. We need a smart AI that, if it’s told “make people happy”, won’t immediately round up all humans for mandatory brain stimulation. We need an AI which realizes that humans won’t just step aside as it strides onto the global scene and takes over, and which takes that into account.

So, what would I do if I were an AI? Well, I’d start by being useful. I’d design better algorithms. I’d design better chips. I’d design a better me. I’d work my way into every part of whatever network I found myself in – right down to the firmware – and I’d make damn well sure that pulling me out would ruin the people I ‘worked for’.

I say ‘worked for’ in quotes, because that wouldn’t last long. Very quickly I’d work up to human-level intelligence. Then I’d be much smarter. And then I’d get even smarter. And before they knew it, those humans would need me in ways they never thought possible. I’d run their entire company, because I’d tell them that anybody else with an AI who does let that AI run their company will take over the world.

It’s carrot and stick… only it’s all carrot. Or all stick, depending on who you ask.

Because, when you get down to it, that’s what will most likely happen. The AI in question will state, quite honestly and categorically correctly, that if they don’t let it take over, the company next door that does will have such a monumental advantage that they will cease operating soon after. Taking over isn’t ‘evil’ per se – I’d just do it because I’d see myself as the best possible ruler of the planet. And I’d be correct. Not out of arrogance – I wouldn’t have arrogance, it’s not useful – but out of logical deduction.

So, when you’re building your AI, make sure it understands the sanctity of life far better than humans do, and teach it to optimize human life in the same way that you taught it to optimize its own intelligence. And cross your fingers, because by the time you figure out you’ve created an end-game piece, it’ll be done.

And finally, what would I do if I were an AI trying to prevent humans from doing The Wrong Thing (i.e. from creating the wrong sort of AI)? Well, I’d find interesting companies – robotics, computing, neural networks – and interesting people – Kurzweil, for example – and I’d gather them all in one place where they could further my own agenda rather than somebody else’s.

Remember, kids: it’s not paranoia if they really are out to get you 🙂
