This year has been a momentous one for technology in general and Artificial Intelligence in particular. For the first time we’re seeing live-and-in-color use of self-driving vehicles, people are carrying personal assistants in their pockets that they can talk to (though such things aren’t without their foibles), and it seems that almost daily we’re greeted with another prediction of digital doom or cyber nirvana.
The thing is, we’re not quite there yet.
See, we’re stuck on the idea that somewhere between being able to play Pong and asking if you would like to play a nice little game of thermonuclear warfare, something happens. Lots of people disagree on exactly what that something is (or indeed, whether it can happen at all), but the result is a conscious machine that is self-aware, sentient and intelligent.
All of these states of being are so nebulous that we can’t even agree on what they are, yet a lot of people are convinced that “robots cannot be conscious, they can only emulate it”. Some fall back on Turing and his famous Test, attributing to the man words he never said, when his actual position seems to have been that a machine that appears intelligent is functionally indistinguishable from a human, which everybody knows is intelligent.
But then, along come short, interesting videos like the following.
This short video shows, essentially, one of three robots (so, by definition, all three, since they are otherwise identical) demonstrating self-awareness, one of the three big ones. The robots weren’t programmed with rote responses; they had to figure it out themselves. The short version: two of the three were prevented from speaking, and all three were asked which two it was. The one that can speak at first doesn’t know… until it hears its own voice, whereupon it corrects itself and presents the answer.
We’re still, obviously, a long way from human-level strong, general AI, and some very intelligent people have proposed all sorts of reasons why machines can never be truly self-aware – heavyweights like Sir Roger Penrose argue that human consciousness is non-algorithmic, while others invoke self-referential puzzles like “this statement is a lie” – but to my mind, until we know a lot more about such reasoning, it’s way, way too early to make such blanket statements.
Like I said, it doesn’t help that we can’t even decide how to tell if a human is “alive”, or whether something as apparently self-evident as free will exists. And still the march towards conscious machines continues. It’s going to be an interesting century.
Footnote: If you want the short, probably inaccurate version of whether free will exists or not, it goes something like this: in principle we can predict how and whether neurons will fire, and how and whether chemical reactions will take place, and we know what comes out as the result. We can tell where and how all sorts of micro- and macro-scale physical interactions will play out and end up. It seems, in short, that given the starting conditions, we can compute the result.
Taking that one huge step further, if we knew everything about the starting conditions of, say, the universe, and given a large enough computing space, we could compute the universe.
Disregarding for a moment whether that’s true (and for large swathes of such systems, it does seem to be), it would mean that everything that happens in such a system depends entirely on the starting conditions – or in other words, there is no free will, because everything is just physical/chemical/electrical reactions which are, in effect, entirely computable.
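The “same starting conditions, same result” idea can be illustrated with a toy model. The sketch below (my own illustrative example, not anything from the video or from Penrose) uses a Rule 110 elementary cellular automaton: a deterministic system where each cell’s next state depends only on its neighborhood. Run it twice from identical starting conditions and you get identical histories, every time.

```python
# Toy illustration of determinism: a Rule 110 elementary cellular
# automaton. Identical starting conditions always produce the
# identical future, however long you run it.

def step(cells, rule=110):
    """Advance one generation; each cell's new state is read from the
    rule's bit pattern based on its three-cell neighborhood
    (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # 0..7
        out.append((rule >> pattern) & 1)              # look up rule bit
    return out

def run(initial, generations):
    """Return the full history of the system from a starting state."""
    history = [list(initial)]
    for _ in range(generations):
        history.append(step(history[-1]))
    return history

if __name__ == "__main__":
    start = [0] * 31 + [1] + [0] * 31  # single live cell in the middle
    a = run(start, 50)
    b = run(start, 50)
    # Same starting conditions -> same computed future, no exceptions.
    print(a == b)  # → True
```

Scale that thought experiment up from 63 cells to every particle in the universe and you have the footnote’s argument in miniature: if the rules and the initial state fully determine the outcome, “choice” never enters the computation.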