So assume (see my last nanotech post, Nanotech IS distinguishable from magic) that we’ll find a way to build and power nanobots.
The medical nanobots in my novel Small Miracles tap the energy sources that the patient’s own body provides. That is, they can metabolize glycerol and glucose, just as the cells in our bodies do.
Now what?
The good news is that being cell-sized, such bots can navigate the circulatory system, foraging for glucose as they go. But being cell-sized, we’ll also need a lot of such devices to accomplish anything useful in less than geological time. How will we control them all?
Maybe it’s best that we not control the nanobots—not, anyway, in real time. Instead, maybe we’ll give each bot an onboard computer. Program the little guy to know what to do. As in, oversimplifying just a tad:
- Drift through the bloodstream
- Monitor your plaque sensors
- When you find plaque on a blood-vessel wall
  - Grab hold
  - Nip the plaque into tiny pieces
- Repeat
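For the programmers in the audience, the per-bot program above can be sketched as a toy loop. Everything here is illustrative: the vessel "wall" is just a list of segments (True marks plaque), and a real bot's sensors and cutters would be molecular machinery, not function calls.

```python
# Toy, single-bot version of the program above. The vessel wall is a
# list of segments; True marks a plaque deposit. All names here are
# invented for illustration.

def run_nanobot(wall):
    """Drift along the wall; wherever the plaque sensor fires, grab hold
    and nip the plaque away. Returns the number of sites cleared."""
    cleared = 0
    for i, has_plaque in enumerate(wall):  # drift through the bloodstream
        if has_plaque:                     # monitor your plaque sensors
            wall[i] = False                # grab hold; nip plaque to pieces
            cleared += 1
    return cleared                         # repeat on the next pass

wall = [False, True, False, True, True, False]
print(run_nanobot(wall))  # -> 3
print(any(wall))          # -> False: plaque gone
```

Multiply that loop by a few billion bots running independently and you have distributed control with no central controller at all.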
Such distributed control presupposes really tiny computers. And what about electricity to power them? To which I say: yes, and not necessary.
Yes, the computers must be really small—and they can be. And not necessary, because computers needn’t be electrical.
We’ve become accustomed to data storage in integrated circuits and on magnetic disks. But nothing says that a bit of data—a zero or a one—must be encoded as an electric quantity (like electrons trapped for a time in a charge well) or a magnetic quantity (like the polarization of a magnetic domain). Who remembers punch cards and paper tape? Really early computers used the presence or absence of pressure waves in a tank of mercury. And earlier still …
Right. The abacus.
Anything that can unambiguously represent two values—while resisting, just a wee bit, randomly flipping from the state you want retained into the opposite state—can encode binary data. The position of sliding beads or rods. Or in an irreducible (I believe) minimum for any time soon, the state of individual bi-stable molecules. The medical nanobots in Small Miracles use bi-stable memory molecules.
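That "resisting, just a wee bit" clause is the whole trick. As a toy model (entirely invented for this post; a real memory molecule is governed by quantum mechanics, not if-statements), think of a bi-stable element as a two-state system separated by an energy barrier: small thermal kicks don't flip it, a deliberate write does.

```python
# Toy model of a bi-stable storage element: it holds 0 or 1 and resists
# flipping unless the applied "push" exceeds an energy barrier.
# Purely illustrative; the barrier value is arbitrary.

class BistableBit:
    BARRIER = 1.0  # notional energy barrier between the two states

    def __init__(self, state=0):
        self.state = state

    def push(self, energy, target):
        """Attempt to flip the bit to `target`; succeeds only if the
        push clears the barrier. Returns the resulting state."""
        if energy > self.BARRIER:
            self.state = target
        return self.state

bit = BistableBit(0)
print(bit.push(0.5, 1))  # -> 0 (weak thermal kick; the bit holds)
print(bit.push(1.5, 1))  # -> 1 (deliberate write succeeds)
```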
That takes care of memory. What about a processing unit to execute instructions from, and manipulate data in, that memory? As it happens, the earliest digital computers were mechanical, designed by Charles Babbage. (Of course, so soon after Steampunk Month, everyone knows that. Right?)
Are mechanical nanocomputers crazy talk? True, we don’t have them—yet. But consider this University of Wisconsin proposal. Notice that they got funding from the Defense Advanced Research Projects Agency, the fine folks who brought us the Internet. Or Google away.
So how will we control nanobots? Bit by bit.
Edward M. Lerner worked in high tech for thirty years, as everything from engineer to senior vice president. He writes near-future techno-thrillers, most recently Fools’ Experiments and Small Miracles, and far-future space epics like the Fleet of Worlds series with colleague Larry Niven. Ed blogs regularly at SF and Nonsense.
In his proof of principle for the physical possibility of nanotechnology in his book “Nanosystems”, Eric Drexler proposed a design for mechanical nanocomputers that worked something like Babbage Engines. They had gears made of rings of atoms attached to shafts made of long chain molecules. Sounds clunky, but it turns out (pun intended) that the shafts could spin millions of times per second, allowing the computers to execute millions of instructions per second.
The trick here, as in circuit design, is less how to represent the values than how to store them, fetch them, and on a good day reliably error check that what was written last was what was read. Small Miracles dishes up some smooth handwavium as atomic memory, and there are lots of papers playing with the idea of using individual atoms as binary bits, or trinary twits(?), or some other grand scheme to hold a value. The trick, though, is in the infrastructure that permits the translation of a bit address into a bit value, requiring in VLSI and finer implementations some number of gates and logic elements to take a value from a memory bus and store it, then to find it and fetch it again. If all a memory chip had to do was hold bits, memory chips would be much smaller. Writing them, checking them, and reading them is where the real value is.
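kcarlin's point—that addressing, writing, and checking dwarf mere bit storage—can be sketched in miniature. The sizes and the single-parity-bit scheme below are invented for illustration; the bits themselves are one line, and everything else is the infrastructure:

```python
# Tiny memory array plus the surrounding infrastructure: an address
# decoder, a write path, and a read path that error-checks each word
# against a stored parity bit. Sizes and scheme are illustrative only.

class CheckedMemory:
    def __init__(self, n_words):
        self.words = [0] * n_words   # the bits themselves: the easy part
        self.parity = [0] * n_words  # one parity bit per stored word

    def _decode(self, addr):
        """Translate a word address into an array index, with bounds check."""
        if not 0 <= addr < len(self.words):
            raise ValueError("address out of range")
        return addr

    def write(self, addr, value):
        i = self._decode(addr)
        self.words[i] = value
        self.parity[i] = bin(value).count("1") % 2  # record parity

    def read(self, addr):
        i = self._decode(addr)
        value = self.words[i]
        if bin(value).count("1") % 2 != self.parity[i]:
            raise IOError("parity error: stored word was corrupted")
        return value  # what was written is what was read

mem = CheckedMemory(16)
mem.write(3, 0b1011)
print(mem.read(3) == 0b1011)  # -> True
mem.words[3] ^= 0b0100        # simulate one flipped bit in storage
# mem.read(3) would now raise a parity error instead of returning junk
```

Single-bit parity only detects an odd number of flipped bits, of course; real memories spend still more logic on correcting codes, which only strengthens kcarlin's point.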
In a world with natural and man-made EMP risks, not to mention radio-frequency interference galore, mechanical nano-device scale computing solutions may become the tool of choice for mission critical systems.
Just as the aging COBOLers got one last big fling for Y2K, assembly language developers may spend their golden years knocking bits about to chop up cancers and promote nerve growth. Which can never happen now given Small Miracles’ portrayal of the FDA.
Oh well, maybe another timeline.
Quite frankly, it only takes a few moments’ reading of the ACM Forum on Risks to the Public in Computers and Related Systems (known informally as comp.risks) to leave one with a feeling of utter and complete horror that any of the geniuses who have given us the Internet, Windows, Y2K, or any of the other marvels of computing technology that currently surround us are going to have anything to do with designing and programming nanobots.
HRH’s concerns about ‘grey goo’ are, in my opinion, not at all unwarranted. I’m fucking terrified.
SpeakerToManagers: (Great handle, that.) Thanks for the Drexler reference. I read that book long ago, but my memories of the chapter on mechanical computers weren’t fresh enough to cite in the original post.
kcarlin: Of course you’re correct that a complete computer requires more than an ALU and a memory array. There’s also control logic, clocking logic, bus structures, error-detection logic, I/O ports … Rightly or wrongly, that was a level of detail I thought wouldn’t interest the general reader of the blog. But such additional logic *can* be built at the molecular level.
(Geeky aside: If a nanocomputer’s implementation involves the popular architectural technique called microcoding, there’s a language issue in that micro as a prefix is bigger than nano. Dare we speak of femtocoding?)
NomadUK: Yup, the integrity of the software in nanobots will be an issue. A BIG issue. But while I see grounds for concern, the grey-goo scenario isn’t among them. Designing self-replicating nanobots will be a very hard problem. And that buggy software you rightly worry about will offer vulnerabilities that rapidly evolving microbes will exploit to compete.
“And that buggy software you rightly worry about will offer vulnerabilities that rapidly evolving microbes will exploit to compete.”
You’re joking, right?
NomadUK: No, not joking — but I certainly could have stated my point more clearly.
Of course, microbes won’t read or manipulate the code in the nanobots. But bad code leads to vulnerabilities and suboptimal behaviors.
To the degree nanobots have vulnerabilities and suboptimal behaviors (not necessarily limited to software bugs), microbes sharing that environment will evolve accordingly. Just as microbes already evolve immunities to antibiotics, adapt to changes in the available food supply (e.g., switching which sugar they metabolize based on availability), and develop the ability to resist phages.
IIRC, Drexler’s term for abacus-like nanocomputers was “rod logic”.