The history of modern computing is both incredibly short and incredibly convoluted.

Silicon computing machines, the big beige boxes most of us would recognize as computers, have only been around since roughly the late 1970s, and yet they are arguably the most pervasive invention to enter our lives since Prometheus stole fire from the gods.

Sure, you could buy computers before then, but it wasn’t until the 1980s that the modern silicon-based general-purpose computer entered our working and home lives en masse. I’m not going to diss earlier creations like the Babbage engine or the Enigma machine; they hold monumentally impressive places in history, but there are better posts elsewhere discussing them.

But on to computers in the home and business: they started life as huge, clunky dinosaurs that took up whole rooms. Their hard disks were the size of a large suitcase, weighed tens of pounds and stored a whopping 5 megabytes on huge platters bigger than an old vinyl LP record. Before they went general-purpose, they were programmed in laboriously slow fashion, often from punched cards, by whole teams of people, and then spat out data in the same fashion. In fact, “computer” used to be a job description; that is the position in a company these machines replaced. Computers computed. Numbers. Lots of them. Lots and lots of them (hence the name).

Eventually, data entry moved to the keyboard (and then the mouse, and then the touch screen), storage grew through kilo-, mega-, giga- and then tera- and petabyte levels, and the number of people who wanted access rose from a small team wanting one thing at a time to hundreds wanting all sorts of things all the time.

To solve that problem, computers (real computers, not the toasters sold by IBM running Microsoft) became multiuser… and then a new trick was developed: virtualization.

Virtualization, at its core, means taking one big box (big in resources, not necessarily any longer big in size) and presenting it as a discrete set of smaller boxes, where users of box A cannot touch the resources given to users of box B.

This makes administration of each box simpler, and compartmentalizes the issues facing a mammoth machine where the whole thing is accessible by (potentially) all users.

The good news is that this trick can be done by anybody with a home computer. The bad news is that we don’t really have a good way of baking this into the boxes we buy at a level low enough for all users to make use of – not that most people care.

So, why would you want virtualization? Well, with all the talk about viruses these days, here’s one reason: a box just for browsing, and another box for paying your bills. Done properly, even if you manage to infect one box with a virus, the rest of the boxes on your system (and, critically, the big box containing them all) stay out of its reach.

Or a box containing an important but wonky program may implode without taking down another box containing an equally important, different program (and, again, without taking the whole system down either).

A good multiuser operating system has some of these advantages, but not all. For me, though, virtualization means I can slice my server machine into logically separate parts: login/terminal services in one, a web server in another, a database in another, a home media server in another, and so on. If one gets penetrated, only the data in that one is directly exposed, and because the data I care about lives somewhere else, ripping out and replacing a compromised box is just a matter of pushing a few buttons. Everything else I care about stays in place, secure and untouched (unless an attacker manages to infiltrate my other little boxes, but that’s a bigger issue in general).

It also makes management of my home server simpler: backing up a service is easy, and I can reinstantiate it from a backup, or even clone it at the push of a button into a new, clean virtual machine, all without trashing my main box, which doesn’t have a screen or keyboard attached.
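
As a rough sketch of what “the push of a button” means in practice, here is how that backup-and-clone workflow looks on the ZFS-backed setup I come back to at the end of this post, assuming each box lives on its own ZFS dataset. The dataset and host names here are invented purely for illustration:

    # Take a point-in-time snapshot of the dataset one boxed-up service lives on.
    zfs snapshot zroot/boxes/web@clean

    # Back it up by streaming that snapshot to another machine
    # (assuming a suitable parent dataset already exists on the other end)...
    zfs send zroot/boxes/web@clean | ssh backupbox zfs receive backup/boxes/web

    # ...or stamp out a fresh, untouched copy of the box as a writable clone.
    zfs clone zroot/boxes/web@clean zroot/boxes/web-scratch

Rolling a misbehaving box back to that clean snapshot (zfs rollback) is just as quick.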

One way of doing this is with a program which pretends to be a whole computer: VirtualBox, for example, or VMware Player, or Virtual Server (from Microsoft). This is very flexible, and can be quite fast, but it’s still a resource hog, because you’re duplicating pieces which already exist (the kernel, the hardware, the base operating system and so on), and because it’s pretending to be a whole computer, part of the program has to redo in software all the fiddly bits your real computer already does in hardware. This also makes it slower.
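
To make that concrete, here is roughly what standing up one of those pretend whole computers looks like from VirtualBox’s command-line tool. The machine name, OS type and sizes below are placeholders, and a real install would still need a virtual disk and installer attached; the point is that every one of these settings is hardware being imitated in software:

    # Create and register a new pretend computer (name and OS type are placeholders).
    VBoxManage createvm --name billpay --ostype FreeBSD_64 --register

    # Carve out its own slice of memory and CPUs, on top of what the host already runs.
    VBoxManage modifyvm billpay --memory 1024 --cpus 1

    # Boot it with no window attached (it won't get far without a disk and installer).
    VBoxManage startvm billpay --type headless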

The other way – which I am growing more and more partial to, at least when the workload is well known – is to run your programs in what Solaris calls a zone, FreeBSD calls a jail or, in general, a container.

A container is a very thin box – thin, but effective – where the already-existing kernel and services of the host computer are simply made available, in a controlled manner, to a fresh copy of the userland; no second kernel is run or emulated at all. This makes any such virtual machine run at essentially host speed, because there’s no pretending about how the CPU works or where/what it is, and duplicated programs consume very little extra memory.
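
On FreeBSD, for example, one of those thin boxes can be described in a few lines of /etc/jail.conf. Everything below (names, paths, addresses, the network interface) is made up for illustration, and it assumes a copy of the FreeBSD userland has already been installed under the path shown:

    # /etc/jail.conf -- an invented example, not my actual configuration
    web {
        host.hostname = "web.home.lan";          # what the box calls itself
        interface = "em0";                       # host NIC to put its address on (name assumed)
        ip4.addr = 192.168.1.10;                 # the box's own address
        path = "/usr/local/jails/web";           # its userland lives here; the kernel is the host's
        exec.start = "/bin/sh /etc/rc";          # normal startup, run inside the box
        exec.stop = "/bin/sh /etc/rc.shutdown";
        mount.devfs;                             # give it a /dev of its own
    }

With jail_enable="YES" in /etc/rc.conf, running service jail start web brings that box up, and jls lists it alongside any others, all of them sharing the one host kernel.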

On my home server at one point, I ran five virtual machines (plus the host), each with its own IP address and its own programs. When I first switched to FreeBSD, I abandoned that… but I’m heading back that way again now that I’m more comfortable with BSD, thanks to the power of ZFS and jails.

I’ll cover where I’ve been and how I got here in future posts.
