I’ve spoken about the incredible pace of change before, but this time I’m going to go in-depth on it, specifically with regard to computing (by way of a few factual anecdotes about the situation at hand).
As I’ve said before, my first computer was a 20MHz 286 with a 40MB hard disk and (I think) 4MB of RAM. For its day, it was a monster. I was recently digging around in my old shed and pulled out a hard disk from less than a decade ago; it was 2GB in size. It’s incredible to think that the 1GB USB stick I have on my keyring takes up less space.
I’m not going to rehash the size angle, however. It’s been done to death. I just wanted to give a little perspective, because it doesn’t look like the monumental increases are going to stop any time soon.
And that’s a good thing. If we’re going to have our singularity, if we’re going to escape the fate of having mankind fizzle out with a wet whimper instead of exploding outwards to the stars, then we need what computing can give us. Really, really badly.
Why, then, do I think that computing may lead to (strong, general AI and) the singularity? Well, for one simple reason.
We’re beyond the point at which we can brute-force a reasonable facsimile of part of a simple animal brain. We actually could, if we wanted to, build a human-level brain-in-a-box. The only real problem is that it would need megawatts of power and span an entire city. That, and it wouldn’t be a mind; it would just be a brain.
Recently, two amazing things happened:
Firstly, IBM put together a question-answering system called Watson that could play Jeopardy! And it won. It beat two champion human players at exactly the same game, without any tricks, phrasing its responses as questions to clues posed to it in natural language.
Secondly, a working neural-net model of a brain as large as (or larger than) a cat’s brain was run on an array of computer chips in near real-time, with 1 billion spiking neurons and 10 trillion individual learning synapses. This is the scale at which, given enough time, a true artificial intelligence could be created. Right now it’s only likely to have an aversion to the first day of the week and may or may not be interested in Italian food, but that’s today. Actually, it was roughly four years ago.
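To put those numbers in perspective, here’s a quick back-of-the-envelope comparison. The human-brain figures below are the commonly cited estimates (roughly 86 billion neurons and somewhere between 100 and 1,000 trillion synapses), not anything from the simulation itself:

```python
# Rough scale comparison: the cat-sized simulation vs. a human brain.
# All figures are order-of-magnitude estimates, not precise measurements.
cat_sim_neurons = 1e9       # spiking neurons in the simulation above
cat_sim_synapses = 10e12    # learning synapses in the simulation above

human_neurons = 86e9        # commonly cited estimate for a human brain
human_synapses = 150e12     # estimates range from ~100 to ~1,000 trillion

print(f"Neuron gap:  {human_neurons / cat_sim_neurons:.0f}x")    # ~86x
print(f"Synapse gap: {human_synapses / cat_sim_synapses:.0f}x")  # ~15x
```

In other words, the gap between that simulation and a human-scale one is well under two orders of magnitude.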
Moore’s Law, people. Pay attention.
See, as of right now, that gimmicky multi-million-dollar toy that consistently outperformed humans at a silly little gameshow is in use, advising doctors and nurses on the best courses of treatment. That feline-sized cortex is likely to follow; maybe one day soon your Roomba will be better able to navigate your house. And, if you’re really lucky, it’ll leave dead birds on the doorstep.
I kid, really I do, but unless there is an actual, physical reason why pulling out the squishy, biological substrate of organic brains and replacing it with a decidedly less messy direct silicon equivalent is doomed to failure, it will happen.
See, I don’t subscribe to humans being innately special in terms of what the universe can produce. Yes, we’re all unique snowflakes, and that is something which mankind in general needs to learn to appreciate more, but I do not believe that there is any solid reason why a brain cannot exist on silicon.
When it does (and for the nascent examples of this, we’re already past that point), those brains are going to be given problems to think about. Those brains are going to figure out new ways of growing food, new ways of generating power, new medicines, new answers to old problems… all of the tedious drudgery of dealing with data that we don’t want to do ourselves, but that a pure computer program can’t do, will be done by a computerised brain. And that, dear readers, is where the singularity comes in.
Couple our ability to build brains to Moore’s Law, and tell me what happens. If you haven’t been paying attention, what happens is that our ability to simulate useful-sized brains increases exponentially along with our ability to process more and more data at faster and faster speeds.
A decade or so ago, it was a mouse-sized brain. Four years ago, it was a cat-sized brain. So what will we have ten years from now, roughly fifteen years after the cat-sized brain?
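A minimal sketch of that extrapolation, assuming the scale we can simulate doubles every two years or so in step with Moore’s Law (the doubling period and the human-brain figure are my assumptions here, not measured data):

```python
# Naive Moore's-Law extrapolation: how many doublings from a
# cat-scale simulation (1 billion neurons) to a human-scale one?
import math

cat_scale_neurons = 1e9
human_scale_neurons = 86e9   # commonly cited estimate
doubling_period_years = 2.0  # assumed, per the classic Moore's Law cadence

doublings = math.log2(human_scale_neurons / cat_scale_neurons)
years = doublings * doubling_period_years
print(f"{doublings:.1f} doublings, about {years:.0f} years after the cat-scale run")
# ~6.4 doublings, about 13 years, all else being equal
```

On that admittedly naive extrapolation, a human-scale simulation lands comfortably inside the fifteen-year window.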
At some point, the technology will mature. We’ll get lower-power CPUs that scale to tens of millions of discrete units, that can run algorithms that better approximate neurons at faster speeds, and that will be hooked up to natural-language processing software suites that better mimic language. And we’re going to ask ourselves just what that means, when our computers blithely start informing us how they feel, and chatting to us to keep us company as we let them work on the problems that used to stump us.
But that’s not all, because I sincerely believe that in our race for ever-greater abilities to process and store data, the next logical step will be taken, and somebody will invent self-assembling computing and storage machinery. These machines, often called von Neumann machines, are the ultimate in computing assembly. Sometimes they are macro-scale; other times they are nano-scale. They are often, and quite rightly, seen as dangerous in the extreme, but they will be invented, because what could possibly be better and cheaper than machinery you don’t need to employ anyone to build? Or test? Or design?
At some point, I find it quite likely that (hopefully with appropriate constraints to avoid a grey goo scenario where the little blighters don’t understand that dismantling humans and their biosphere is a no-no) self-assembling, self-repairing, self-designing and self-improving computing will come to dominate.
At that point, it’s a hop, skip and jump to dismantling our moon and turning it into a gigantic matrioshka brain. Once there, it will be large enough and powerful enough to hold true, accurate, working images of human-level brains.
And I don’t see a reason why it would not.
If we get those powerful pattern-recognizers, if they improve our technology for us, if we use that technology to build such a computronium-level megastructure, I don’t see many things preventing our civilization from going post-physical soon after.
And that, dear readers, is my very definition of a technological singularity.