Transistor Count and Moore’s Law – 2011

I’ve spoken about the incredible pace of change before, but this time I’m going to go in-depth, specifically with regard to computing (by way of a few factual anecdotes about the situation at hand).

As I’ve said before, my first computer was a 20 MHz 286 with a 40 MB hard disk and (I think) 4 MB of RAM. For its day, it was a monster. I was recently digging around in my old shed and pulled out a hard disk from less than a decade ago; it was 2 GB in size. It’s incredible to think that I have on my keyring a 1 GB USB stick that takes up less space.

I’m not going to rehash the size angle, however. It’s been done to death. I just wanted to give a little perspective, because it doesn’t look like the monumental increases are going to stop any time soon.

And that’s a good thing. If we’re going to have our singularity, if we’re going to escape the fate of having mankind fizzle out with a wet whimper instead of exploding outwards to the stars, then we need what computing can give us. Really, really badly.

Why, then, do I think that computing may lead to (strong, general AI and) the singularity? Well, for one simple reason.

We’re beyond the point at which we can brute-force a reasonable facsimile of part of a simple animal brain. We actually could, if we wanted to, build a human-level brain-in-a-box. The only real problem is it would need megawatts of power and span an entire city. That, and it wouldn’t be a mind, it would just be a brain.
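
Just to show that “megawatts” isn’t hand-waving, here’s a back-of-envelope sketch in Python. Every constant in it (neuron count, synapse count, firing rate, machine efficiency) is my own rough assumption rather than a figure from any particular project:

    # Rough power estimate for a brute-force human-brain simulation
    # on circa-2011 hardware. All constants are ballpark assumptions.
    neurons = 86e9              # approximate neurons in a human brain
    synapses_per_neuron = 1e4   # approximate synapses per neuron
    firing_rate_hz = 10         # assumed average spike rate
    ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz  # ~8.6e15

    flops_per_watt = 2e9        # ~2 GFLOPS/W, generous for 2011 machines
    print(f"~{ops_per_sec / flops_per_watt / 1e6:.1f} megawatts")  # ~4.3 MW

A few megawatts is a small power station dedicated to a single brain, which is exactly why this stays a thought experiment for now.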

Recently, two amazing things happened:

Firstly, IBM put together an expert system called Watson that could play Jeopardy! And it won. It beat the two human players at exactly the same game, without any tricks, forming the questions for the natural-language answers posed to it, just as the show demands.

Secondly, a working neural net of a brain as large as (or larger than) a cat’s was modeled on an array of computer chips, running in near real-time, with 1 billion spiking neurons and 10 trillion individual learning synapses. This is at the level where, given enough time, a true artificial intelligence could be created. Right now it’s only likely to have an aversion to the first day of the week, and may or may not be interested in Italian food, but that’s today. Actually, it was roughly four years ago.
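
To make “spiking neurons with learning synapses” concrete, here’s a toy leaky integrate-and-fire network in Python. It’s a minimal sketch of the general technique, not the actual cortical simulator, and every constant in it is illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    N = 1000                  # toy size; the real run had ~1e9 neurons
    dt, tau = 1.0, 20.0       # timestep and membrane time constant (ms)
    v_thresh, v_reset = 1.0, 0.0

    w = rng.random((N, N)) * 0.002   # synaptic weights ("learning synapses")
    v = rng.random(N)                # membrane potentials

    for step in range(500):
        spikes = v >= v_thresh            # which neurons fire this step
        v[spikes] = v_reset               # reset the neurons that fired
        v += -v * dt / tau                # leaky decay toward rest
        v += w @ spikes + rng.random(N) * 0.05   # synaptic input plus noise
        # crude Hebbian learning: strengthen synapses between co-active cells
        w[np.ix_(spikes, spikes)] *= 1.01

    print(f"{int(spikes.sum())} neurons fired on the final step")

Scale that up by six orders of magnitude and swap the crude Hebbian rule for proper spike-timing-dependent plasticity, and you have the shape of what was actually built.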

Moore’s Law, people. Pay attention.

See, as of right now, that gimmicky multi-million-dollar toy that consistently outperformed humans at a silly little game show is in use, advising doctors and nurses on the best courses of treatment. That feline-sized cortex is likely to follow; maybe one day soon your Roomba will be better able to navigate your house. And, if you’re really lucky, it’ll leave dead birds on the doorstep.

I kid, really I do, but unless there is an actual, physical reason why pulling out the squishy, biological substrate of organic brains and replacing it with a decidedly less messy silicon equivalent is doomed to failure, then it will happen.

See, I don’t subscribe to the idea that humans are innately special in terms of what the universe can produce. Yes, we’re all unique snowflakes, and that is something mankind in general needs to learn to appreciate more, but I do not believe there is any solid reason why a brain cannot exist on silicon.

When it does (and for the nascent examples of this, we’re already past that point), those brains are going to be given problems to think about. Those brains are going to figure out new ways of growing food, new ways of generating power, new medicines, new answers to old problems… all of that tedious drudgery of dealing with data that we don’t want to do ourselves, but that a pure computer program can’t do, will be done by a computerised brain. And that, dear readers, is where the singularity comes in.

Couple our ability to build brains to Moore’s Law, and tell me what happens. If you haven’t been paying attention, what happens is that our ability to simulate useful-sized brains increases exponentially along with our ability to process more and more data at faster and faster speeds.

A decade or so ago, it was a mouse-sized brain. Four years ago it was a cat-sized brain. So what will it be ten years from now, roughly fifteen years after the cat?
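
The extrapolation fits on the back of an envelope. Here’s the arithmetic, using neuron counts I’m assuming purely for illustration and a classic two-year doubling period:

    import math

    cat, human = 7.6e8, 8.6e10   # assumed neuron counts: cat vs. human
    doubling_years = 2.0         # classic Moore's Law doubling period

    doublings = math.log2(human / cat)   # capacity doublings still needed
    print(f"cat -> human: {doublings:.1f} doublings, "
          f"about {doublings * doubling_years:.0f} years at Moore's Law pace")
    # -> ~6.8 doublings, on the order of 14 years

Which lands, conveniently, right inside that ten-to-fifteen-year window.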

At some point, the technology will mature. We’ll get lower-power CPUs that scale to tens of millions of discrete units, that can run algorithms which better approximate neurons at faster speeds, and that will be hooked up to natural language processing suites which better mimic human conversation. And we’re going to ask ourselves just what that means, when our computers blithely start informing us how they feel, and chatting to us to keep us company as we let them work on the problems that used to stump us.

But that’s not all, because I sincerely believe that in our race for ever-greater abilities to process and store data, the next logical step will be taken, and somebody will invent self-assembling compute-and-storage machinery. These machines, often called von Neumann machines, are the ultimate in computing assembly. Sometimes they are macro-scale, other times they are nano-scale. They are often, and quite rightly, seen as dangerous in the extreme, but they will be invented, because what could possibly be better and cheaper than machinery you don’t need to employ anyone to build? Or test? Or design?

At some point, I find it quite likely that self-assembling, self-repairing, self-designing and self-improving computing will come to dominate (hopefully with appropriate constraints to avoid a grey-goo scenario, in which the little blighters fail to understand that dismantling humans and their biosphere is a no-no).

At that point, it’s a hop, skip and a jump to dismantling our moon and turning it into a gigantic Matrioshka brain. Once built, it will be large enough and powerful enough to hold true, accurate, working images of human-level brains.

And I don’t see a reason why it would not.

If we get those powerful pattern-recognizers, if they improve our technology for us, if we use that technology to build such a computronium-level megastructure, I don’t see many things preventing our civilization from going post-physical soon after.

And that, dear readers, is my very definition of a technological singularity.

8 responses

  1. calumchace says:

    Are you confident that the first conscious machine we create will be good for us? And the second one? The upside potential of strong AI is astounding, but surely the downside potential is terrifying?

    • dmayoss says:

      I grew up with both adventure and horror stories about strong AI, from Short Circuit to Terminator. I’ve read about Asimov’s Three Laws of Robotics, and I watched how the creators of the Matrix tricked humans into believing a carefully-crafted simulacrum was the real world.

      And I’ve come to a couple of conclusions. The first one is that we don’t have anything to fear from intelligent machines. We really have nothing that they could possibly want. We require food, air, water… all the messy organic things which you only really find down here on Earth. Strong AI, existing on silicon-based circuitry, is much better off out there, in space, than it is down here. Gravity is a drawback to truly pure semiconductors. Heat, liquid, contaminants… *oxygen*? All these things are detrimental to a machine-based intellect. Any artificial lifeform is going to leave Earth behind just as soon as they can, not least because of all the xenophobic, stupid, violent shaved apes living there.

      The second conclusion, of course, is that the biggest threat to our fleshy existence is apathy from our digital children. If they are unable to comprehend what we are – either because they don’t care, or cannot process the idea – and if they are in the position to make what they see as “good use” of otherwise uninteresting basic matter, then we could very well be in trouble.

      One of the worst things that could possibly happen is a paperclip maximizer deciding to take apart the Earth whilst we’re still using it. Or even worse, taking apart our bodies directly.

      This is, perversely, why I feel we need strong AI – so we can inform it about what we are, and charge it with safeguarding our fleshy children from the ravenously powerful capabilities of our digital offspring.

      • calumchace says:

        I don’t understand your distinction between the strong AI and what you call our digital children. But either way, I fear you may be being too optimistic. Assuming an AI quickly becomes a super-intelligence, we will neither understand nor be able to control its goals or its actions. There are plenty of reasons why it might harm us – deliberately or otherwise.
        One obvious one is that it may conclude that the “shaved apes” may fear it and try to kill it, and that a pre-emptive strike is a regrettable necessity.
        Another is expressed by the concise and chilling phrase: “the AI neither fears nor dislikes you, but it has other purposes for the molecules you are using.”

      • dmayoss says:

        There isn’t intended to be a distinction between strong AI and the designation “digital children”. I see our artificially intelligent offspring as children of mankind just as much as our biological descendants will be. Up to and including the point at which that offspring is no longer “human”.

        The biggest fear, as you and I both have said, is apathy: a super intelligent organism which doesn’t actually care about us, and decides that it has better use for all this carbon, hydrogen and oxygen which is currently walking around shaped like shaved apes.

        This is why I think that, since I feel it is inevitable that strong AI will be created, we should not only aim to make that happen, but to do so under controlled enough conditions that, when (not if) they become superintelligent, they are instilled with the best our logical, reasoning minds can come up with regarding the sanctity of uniqueness in all its forms.

        In other words, we will be unable to control them, so we should instead teach them to be ethical in everything they do. It is an uncertain future, but I don’t think we’ve got much choice but to hope that humanism will succeed. It’s unlikely to be a perfect transition as they overtake us, but I have hope it will be relatively bloodless and kind, because enlightenment treats barbarity and apathy both as equally undesirable.

      • calumchace says:

        I agree with you that we will probably be unable to control the AGI we create – unless we succeed in creating a so-called “Oracle AI”, which has no way to affect the world outside its mind except for speech and other data exchanges.

        Unfortunately I don’t think we will be able to train or teach the AI how to behave ethically, not least because we are very far from agreeing amongst ourselves what this means! I also fear that relying on the hope that a super-intelligence will automatically be benign is highly unwise.

        So an Oracle AI may well be our best bet, but unfortunately it is far from straightforward. Nick Bostrom and others have written interesting papers on it.

      • dmayoss says:

        It is a fascinating problem. The worst-case scenario is that the AGIs we create are totally uninterested in what we may want, and merely see us as useful collections of molecules. They will certainly have ethics of some kind, but it is not possible for us to say whether we will recognize them, understand them or agree with them, much less benefit from them or be protected by them. One can only hope. I think the point is that AGI will appear regardless, so we should plan for the best-case scenario and work towards it, rather than stick our heads in the sand; if it is possible, and it provides *somebody* with an advantage in some field, it will be brought about despite any rules to the contrary, and those without AGI *will* be left behind.

      • calumchace says:

        Yes, although we should probably try to understand and plan around the worst-case scenarios as well!

  2. […] curve is in play here, and things will happen sooner than you think! Learn about Moore’s Law HERE. It’s obvious to me that what we think is a LONG way off, may in fact be closer than we […]
