Doug Clow on the Whole Brain Emulation roadmap
[A guest post from Doug Clow. This was a comment on this article about Bostrom and Sandberg’s Whole Brain Emulation: a Roadmap, but given its length and substance I am, with permission, putting it here as a new blog post.]
I too am short of time, but have given this paper a quick run-through. Here are some unstructured and unedited quick notes I made while I was at it. Apologies for brevity and errors — I almost certainly missed some of their points and have misrepresented parts of their case.
It does seem to be a serious and reasonably well-informed piece of work on speculative science and technology. Emphasis on the speculative, though — which they acknowledge.
The distinction between emulating a brain generically (which I reckon is probably feasible, eventually), emulating a specific person’s brain (which I reckon is a lot harder), and emulating a specific dead person’s brain (which I reckon is probably not possible) is a crucial one. They do make this point and spell it out in Table 1 on p11, and rightly say it’s very hard.
p8 “An important hypothesis for WBE is that in order to emulate the brain we do not need to understand the whole system, but rather we just need a database containing all necessary low‐level information about the brain and knowledge of the local update rules that change brain states from moment to moment.”
I agree entirely. Without this the ambitious bit of the enterprise fails. (They make the case, correctly, that progress down these lines is useful even if it turns out the big project can’t be done.) I suspect that this hypothesis may be true, but we certainly need to know a lot more about how the whole system works in order to work out what the necessary low-level information and update rules are. And in fact we’ll make interesting scientific progress – as suggested here – by running emulations of bits of the brain we think we might understand and seeing whether that produces emergent properties that look like what the brain does. They do say as much on p15 (“WBE appears to be a way of testing many of these assumptions experimentally”), but I’d put it a bit more strongly than that.
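To make that “database plus local update rules” hypothesis concrete, here’s a toy sketch of my own (nothing like this appears in the roadmap; the unit states, wiring and rule are all invented): an emulator that steps a stored low-level state forward using nothing but each unit’s own state and that of its neighbours.

```python
# A toy sketch (not from the roadmap) of the "database + local update rules" idea:
# the emulator only needs each unit's stored state plus a rule for updating it from
# its neighbours, not a global theory of what the network as a whole is doing.

import random

def build_state_database(n_units, n_neighbours=4, seed=0):
    """Invented stand-in for the scanned data: per-unit state plus local wiring."""
    rng = random.Random(seed)
    return {
        i: {
            "state": rng.random(),  # a made-up low-level quantity, e.g. an activation level
            "neighbours": rng.sample([j for j in range(n_units) if j != i], n_neighbours),
        }
        for i in range(n_units)
    }

def local_update(unit, database, leak=0.9, coupling=0.1):
    """The 'local rule': the next state depends only on this unit and its neighbours."""
    neighbour_mean = sum(database[j]["state"] for j in unit["neighbours"]) / len(unit["neighbours"])
    return leak * unit["state"] + coupling * neighbour_mean

def step(database):
    """Advance the whole emulation one tick by applying the local rule everywhere."""
    new_states = {i: local_update(unit, database) for i, unit in database.items()}
    for i, state in new_states.items():
        database[i]["state"] = state

database = build_state_database(n_units=100)
for _ in range(50):
    step(database)
```

The point is just that the loop never needs a global account of what the network is doing; everything the emulator “understands” is packed into the scanned database and the local rule.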
Table 2 on levels of emulation makes sense. My gut instinct (note evidence base) is that we will need at least level 8 (states of protein complexes – i.e. what shape conformations the (important) proteins are in) to do WBE, and quite possibly higher ones (though I doubt the quantum level, 11, is needed; Roger Penrose would disagree). Proteins are the actually-existing nanobots that make our cells work. The 3D shape of proteins is critical to their role. Many proteins switch shape – and hence change what they do or don’t do – between a smallish fixed number of conformations, and we already know that this can be hugely important to brain function at the gross level. (E.g. transmissible spongiform encephalopathies – mad cow and all that – are essentially caused by prion proteins in the brain switching from the ordinary shape to the disease-causing one.)
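As a back-of-envelope illustration of what tracking “level 8” state would mean (my own cartoon, not the paper’s, and the transition probabilities are made up), each protein complex is recorded only as which member of a small fixed set of conformations it currently occupies, rather than in full atomic detail: here a prion-style normal/misfolded pair where the misfolded state is absorbing.

```python
# A cartoon of "level 8" state (my own, not the paper's): each protein complex is
# tracked as one of a small fixed set of conformations, with made-up per-step
# transition probabilities. The misfolded (prion-like) state is absorbing here,
# which is what lets a molecular-level switch matter at the gross level.

import random

TRANSITIONS = {
    "normal":    {"normal": 0.999, "misfolded": 0.001},
    "misfolded": {"normal": 0.0,   "misfolded": 1.0},
}

def step_conformation(state, rng):
    """Pick the next conformation from the current one's transition probabilities."""
    r = rng.random()
    cumulative = 0.0
    for next_state, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return next_state
    return state

rng = random.Random(1)
population = ["normal"] * 1_000          # a made-up population of one protein species
for _ in range(500):                     # 500 emulation ticks
    population = [step_conformation(s, rng) for s in population]

print(sum(s == "misfolded" for s in population), "of", len(population), "misfolded")
```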
The whole approach is based on scanning an existing brain in sufficient detail that you can then implement an emulation. I think that’s possibly useful, but I think a more likely successful route to a simulated (!) intelligence will be to grow it, rather than to bring it into existence fully-formed. By growing, I mean some process akin to the developmental process by which humans come to consciousness: an interaction between an environment and a substrate that can develop in the light of feedback from that environment. But granted their scan-based approach, their analysis of the technological capabilities needed seems plausible.
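For what I mean by “growing”, here’s a deliberately crude cartoon (entirely my own construction, with an invented environment and target): start from an unformed substrate and let repeated feedback from an environment shape its behaviour, rather than instantiating a finished system from a scan.

```python
# A deliberately crude cartoon of "growing" rather than scanning (entirely my own
# construction): a substrate starts unformed and is shaped by nothing except
# repeated feedback from an invented environment.

import random

def environment_feedback(behaviour, target=0.73):
    """Invented environment: feedback is better the closer the behaviour is to a target."""
    return -abs(behaviour - target)

def grow(steps=5_000, step_size=0.05, seed=0):
    """Hill-climbing 'development': keep whichever small changes the environment rewards."""
    rng = random.Random(seed)
    behaviour = rng.random()                      # the initially unformed substrate
    best = environment_feedback(behaviour)
    for _ in range(steps):
        candidate = behaviour + rng.uniform(-step_size, step_size)
        feedback = environment_feedback(candidate)
        if feedback > best:                       # environmental feedback drives development
            behaviour, best = candidate, feedback
    return behaviour

print(grow())   # ends up near the environment's target without ever being scanned in
```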
Of the capabilities they list, the one that leaps out as really, really hard (to the point of impossibility, in my mind) is the scanning component. There is also the unknown of whether the thing is doable at all (what they call scale separation), which is a biggie, but falsifiable by trying out experiments in this direction.