Doug Clow on the Whole Brain Emulation roadmap

[A guest post from Doug Clow. This was a comment on this article about Bostrom and Sandberg's Whole Brain Emulation: A Roadmap, but given its length and substance I am, with permission, putting it here as a new blog post.]

I too am short of time, but have given this paper a quick run through. Here are some unstructured and unedited quick notes I made while I was at it. Apologies for brevity and errors — I almost certainly missed some of their points and have misrepresented parts of their case.

It does seem to be a serious and reasonably well-informed piece of work on speculative science and technology. Emphasis on the speculative, though — which they acknowledge.

The distinction between emulating a brain generically (which I reckon is probably feasible, eventually), emulating a specific person’s brain (which I reckon is a lot harder), and emulating a specific dead person’s brain (which I reckon is probably not possible) is a crucial one. They do make this point and spell it out in Table 1 on p11, and rightly say it’s very hard.

p8 “An important hypothesis for WBE is that in order to emulate the brain we do not need to understand the whole system, but rather we just need a database containing all necessary low‐level information about the brain and knowledge of the local update rules that change brain states from moment to moment.”

I agree entirely. Without this the ambitious bit of the enterprise fails. (They make the case, correctly, that progress down these lines is useful even if it turns out the big project can’t be done.) I suspect that this hypothesis may be true, but we certainly need to know a lot more about how the whole system works in order to work out what the necessary low-level information and update rules are. And in fact we’ll make interesting scientific progress – as suggested here – by running emulations of bits of the brain we think we might understand and seeing if that produces emergent properties that look like what the brain does. Actually, they say this on p15 (“WBE appears to be a way of testing many of these assumptions experimentally”) – I’d put it a bit more strongly than that.
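To make the hypothesis concrete, here’s a minimal sketch of what “database plus local update rules” means computationally: the emulator never needs a global theory of mind, just per-component state and a rule that looks only at neighbours. The names (Component, local_update) and structure are my own illustration, not anything from the roadmap.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    state: dict                                      # low-level state scanned from the brain
    neighbours: list = field(default_factory=list)   # indices of locally connected components

def local_update(component, neighbour_states, dt):
    """Hypothetical local rule: the new state depends only on this component's own
    state and its neighbours' states over a short time step dt."""
    # ...whatever biophysics the chosen emulation level requires would go here...
    return dict(component.state)

def emulate(components, dt, steps):
    for _ in range(steps):
        snapshot = [c.state for c in components]              # read all old states first
        for c in components:
            neighbour_states = [snapshot[i] for i in c.neighbours]
            c.state = local_update(c, neighbour_states, dt)   # then write the new states
```

The point of the sketch is only that the update rule is local; whether any such rule exists at a tractable level of description is exactly the open question.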

Table 2 on levels of emulation makes sense. My gut instinct (note evidence base) is that we will need at least level 8 (states of protein complexes – i.e. what shape conformations the (important) proteins are in) to do WBE, and quite possibly higher ones (though I doubt the quantum level, 11, is needed, however much Roger Penrose would disagree). Proteins are the actually-existing nanobots that make our cells work. The 3D shape of proteins is critical to their role. Many proteins change shape – and hence what they do or don’t do – between a smallish fixed number of conformations, and we already know that this can be hugely important to brain function at the gross level. (E.g. transmissible spongiform encephalopathies – mad cow and all that – are essentially caused by prion proteins in the brain switching from the ordinary shape to the disease-causing one.)

The whole approach is based on scanning an existing brain, in sufficient detail that you can then implement an emulation. I think that’s possibly useful, but I think a more likely successful route to a simulated (!) intelligence will be to grow it, rather than to bring it into existence fully-formed. By growing, I mean some process akin to the developmental process by which humans come to consciousness: an interaction between an environment and a substrate that can develop in the light of feedback from that environment. But based on their approach, their analysis of technological capabilities needed seems plausible.

The one that leaps out as really, really hard (to the point of impossibility in my mind) is the scanning component. There is the unknown of whether the thing is doable at all (what they call scale separation), which is a biggy, but falsifiable by trying out experiments in this direction. They talk about electron microscopy as being the only technology which offers sufficient resolution. They say that the neuronal/synaptic level would only require trivial increases in microscopy resolution. That’s missing the point. You just can’t scan enough of a brain with the sorts of microscopy that work at that resolution.

(This is leaving aside the entirely non-trivial question David addressed of whether it’s possible or not to extrapolate from existing technologies to ones that would have sufficient resolution for tiny prepared samples.)

Almost all forms of microscopy that could conceivably come close to being useful here give you an image of an exposed surface. You’re going to need to chop the brain up into fragments that are thin enough to expose every single synapse, at a minimum. That’s not feasible without destroying at least half of what you’re trying to analyse. When you slice something, you basically smash up a thin column of stuff in the path of the knife (or laser beam, or whatever). And even if you invented some magical way of preparing the samples without that mechanical damage, you’d still have to pull the network of synapses apart in order to expose the surfaces to microscopy.
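To give a rough sense of the scale involved (my own back-of-the-envelope numbers, not the paper’s – the 10 nm voxel and 50 nm slice figures are illustrative assumptions):

```python
# Rough back-of-the-envelope for why whole-brain microscopy is daunting.
# All numbers here are illustrative assumptions, not figures from the roadmap.

brain_volume_m3 = 1.4e-3     # ~1.4 litres of brain
voxel_size_m = 10e-9         # assume ~10 nm isotropic voxels for synapse-level detail
slice_thickness_m = 50e-9    # assume ~50 nm sections for surface imaging
brain_depth_m = 0.1          # ~10 cm of tissue to section, order of magnitude

voxels = brain_volume_m3 / voxel_size_m**3
slices = brain_depth_m / slice_thickness_m

print(f"voxels: {voxels:.1e}")   # ~1.4e21 voxels -> zettabyte-scale raw data at 1 byte/voxel
print(f"slices: {slices:.1e}")   # ~2e6 ultrathin sections, every one cut and imaged without error
```

Even if the assumed numbers are off by an order of magnitude either way, the conclusion is the same: this is not a small extrapolation of current sample preparation and imaging.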

Sure, you can do it for very small, thin organisms (they mention C. elegans) – because they’re not much more than a couple of neurons thick. But human brains are a lot thicker than that. And anyway, C. elegans was (I strongly suspect) done by scanning loads of individuals and aggregating the data. That automatically shuts you out of the big-ticket replicating-a-person bit.

Oh, and it’s worth noting that all of this, to be remotely possible, is really quite spectacularly destructive. Your brain is not going to be doing any thinking once this process is done with. Which is another reason why I think growing a simulated/emulated brain is a better research plan.

There are some imaging techniques which don’t require the slice-and-dice bit: MRI is a better possibility, at least superficially. And this is probably something I could dig into more later, since I do know quite a lot about the physics/chemistry behind the technique. (Part of my PhD was developing a teaching simulation of an NMR spectrometer, which is a simpler thing than an MRI machine.) But off the top of my head it really doesn’t seem likely for all sorts of reasons — resolution, in multiple senses (you’re going to need a radio sensor with finer resolution than is theoretically possible). If you want, nudge me and I’ll try to find time to spell this out more and may even dredge up enough of the maths to do sums on it.

Their ideas on nanodisassembly seem like nonsense. You can’t build Drexler-type nanobots: the physics/chemistry just doesn’t work like that at that scale. Think proteins, not a teeny version of Robosapien. They say (p51) “Given that no detailed proposal for a nanodisassembler has been made it is hard to evaluate the chances of nanodisassembly”. I don’t think it’s hard to evaluate: the chances are negligible to nil.

Chemical analysis – this is really not going to happen. I’d just been thinking about neuronal connections. Blimey, they are really stretching the bounds of what’s even theoretically possible here. There are several techniques they mention that I don’t know about, but I do know about some, and SFAICT they all suffer from the need-to-break-the-brain-up problem, only worse. They mention dyes – but dyes are generally pretty large in chemical terms and will almost certainly destroy information if you perfuse a brain with them.

I’m not paying any attention to the information processing stuff. I could do, and the challenge is large, but (a) my seat-of-the-pants feeling is that Moore’s Law can do more than enough here, and (b) lots of people – ciphergoth included! – are at least as capable as me of doing the detailed scrutiny here.
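For what it’s worth, here’s the sort of seat-of-the-pants sum I have in mind – every number below is my own illustrative assumption (synapse count, update rate, operations per update, current machine size), not anything taken from the roadmap, and it deliberately ignores molecular-level detail:

```python
import math

# Toy compute estimate for spiking-level emulation only; all figures are assumptions.
synapses = 1e15            # order-of-magnitude synapse count for a human brain
update_hz = 1e3            # assume ~1 kHz state updates
flops_per_update = 10      # assume ~10 floating-point ops per synapse per update

required_flops = synapses * update_hz * flops_per_update   # ~1e19 FLOP/s

current_flops = 2e15       # roughly peta-scale supercomputers around the time of writing
doublings = math.log2(required_flops / current_flops)      # ~12 doublings
print(f"~{doublings:.0f} Moore's-law doublings short")     # i.e. a couple of decades, if trends hold
```

If the molecular levels turn out to matter (see below), the estimate blows up by many orders of magnitude, which is why I flag protein conformations as the implementation-side risk.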

Likewise the image processing and scan interpretation bit, and the neural simulation component, and so on – that doesn’t strike me as the really theoretically hard part. It might turn out to be practically impossible, of course, but it doesn’t ping my bogosity meter the way the scanning part does.

Ah, actually, one thing on the implementation side that might scale up to be infeasible is if you need to get serious about shape variability of proteins. Calculating possible conformations of proteins is (currently) a classic application requiring Grid-computing-scale resources: it’s at the edge of what we can do with current computing power. And that’s simply twisting a single smallish protein around. If we have to do that for a large proportion of the proteins in a brain, it’s easily above what’s going to be computationally feasible this side of the singularity. But I suspect we’ll be able to get by with a lot less detail, though not nothing — e.g. for TSE prion proteins, we might not have to do the whole calculation of possible conformations for each protein, and how that’s affected by its neighbours: we might well be able to just model each one as a location in space and a one-bit state variable indicating whether it’s in the normal or the TSE conformation. My guess is that you’d need a bit more than that, but not a lot. But that really is a guess that wants empirical testing.
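To show what I mean by “location plus a one-bit state variable”, here’s a toy sketch – entirely my own, with made-up contact radius and conversion probability – of prion-like misfolding spreading between neighbouring proteins:

```python
import random

class Prion:
    """A protein reduced to a position and a one-bit conformation flag."""
    def __init__(self, x, y, z, misfolded=False):
        self.pos = (x, y, z)
        self.misfolded = misfolded     # the one-bit state variable

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def step(proteins, contact_radius=1.0, p_convert=0.01):
    """One update: a misfolded protein may convert normal neighbours within contact_radius."""
    newly_misfolded = []
    for p in proteins:
        if p.misfolded:
            continue
        for q in proteins:
            if q.misfolded and distance(p.pos, q.pos) < contact_radius:
                if random.random() < p_convert:
                    newly_misfolded.append(p)
                    break
    for p in newly_misfolded:
        p.misfolded = True

# e.g. a line of proteins seeded with one misfolded at the origin
proteins = [Prion(i, 0, 0, misfolded=(i == 0)) for i in range(10)]
for _ in range(100):
    step(proteins)
```

The open empirical question is whether a caricature this coarse captures enough of the chemistry to reproduce the gross behaviour; my guess is you’d need a bit more, but not the full conformational calculation.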


Comments

29.07.2010 5:35 Pore G

Thanks for tackling some of the issues Doug.

Disagree that prion protein in CJD shows that protein conformation is crucial for scale separation. The prion protein causes native proteins to get out of their own proper folding equilibria.

You’ll find much better examples of why proteins are important by looking at things like subcellular protein concentration differences (i.e. local ribosome synthesis) and the protein composition at post-synaptic densities.

The key is not whether these different proteins, and their folding state, matter. Of course they do. And of course individual amino acids do, and of course the atoms that make them up do. The key question is whether one can model neuron classes, or neuron subclasses, as one distinct type, and accept that you are missing some variability between neurons, but still profitably predict the variation between individual brains. That is scale separation.
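In concrete (if toy) terms, scale separation is the bet that something like the following is enough – the class names and parameters here are purely illustrative, not a claim about real neuron types:

```python
# Toy illustration of scale separation: neurons are instances of a small number of
# classes sharing one parameter set, rather than each carrying its own molecular model.
NEURON_CLASSES = {
    "pyramidal":   {"tau_m_ms": 20.0, "threshold_mV": -50.0},   # hypothetical class parameters
    "interneuron": {"tau_m_ms": 10.0, "threshold_mV": -55.0},
}

class Neuron:
    def __init__(self, cls):
        self.params = NEURON_CLASSES[cls]   # shared, class-level description
        self.v_mV = -65.0                   # only the per-neuron state is individual

# If class-level parameters capture the variation that matters for behaviour, the model
# "separates" from the molecular scale; if not, more per-neuron detail has to be added.
```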

Agree that nanoassembly doesn’t make so much sense. And it might not in the future either. But on the meta level I am willing to accept that people betting against the future advances of science have not had a very good track record.

On the imaging side, you seem correct in noting that the best approach would probably be to slice up the brain into little slices and image those individually. You say:

“That’s not feasible without destroying at least half of what you’re trying to analyse. When you slice something, you basically smash up a thin column of stuff in the path of the knife (or laser beam, or whatever). And even if you invented some magical way of preparing the samples without that mechanical damage, you’d still have to pull the network of synapses apart in order to expose the surfaces to microscopy.”

There will presumably be some mechanical damage. But one could imagine some kind of knife / laser beam that would image as it slices, in the z direction, to capture that information before it is destroyed. This could then be reconstructed later.

Moreover, who’s to say that the slight damage from the small laser beam will be too much damage? Especially if the laser beam could avoid crucial locations such as synapses or dendritic arborizations, which is certainly technically feasible, then you should be fine.

You note that simulating a brain may be easier, and a “better plan.” But there are large ethical and possibly existential risks to this, whereas simply restoring a dead brain to a live one is something that humanity could conceivably be more willing to cope with. Agree that recreating / emulating one individual dead brain should be much more difficult.

They did talk about EM being really close to the resolution necessary. But you may be interested in reading about a new laser that might be able to go even “deeper”, http://home.slac.stanford.edu/pressreleases/2010/20100630.htm .

29.07.2010 6:32 Paul Crowley

“people betting against the future advances of science have not had a very good track record.”

Do we know this? Perhaps the examples of people betting against and being wrong are just more famous than the perhaps more numerous examples of them being right.

30.07.2010 0:14 Luke Parrish

Slicing for cryopreservation purposes is easier than slicing for scanning purposes as the slice can be around a millimeter thick. UV lasers are ~250 nm wide, so only ~1/4000th of the tissue would be intersected.
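Taking those two figures at face value (the ~250 nm cut width and ~1 mm slice thickness are the numbers above, not anything I’ve verified), the fraction follows directly:

```python
# Sanity check of the ~1/4000 figure, using the numbers quoted above.
cut_width_m = 250e-9       # ~250 nm laser cut width
slice_thickness_m = 1e-3   # ~1 mm slices
print(cut_width_m / slice_thickness_m)   # 0.00025, i.e. ~1/4000 of the tissue lies in the cut path
```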

Once that is achieved, the three methods of revival would be destructive scan, nondestructive scan, and biological reconstruction. The latter could be facilitated by selectively applied printed magnetics and glues that permit the tissue to join at the original spots and seal shut blood vessels. Stem cells, prosthetic neurons and dendrites, etc. could all be sprayed into place during the reanimation process.
