In other news, I’m off to the Bay Area on Thursday! I’ll be at the CfAR minicamp for the first week, then I have a week free in the Bay Area, from Sunday 29 July to Sunday 5 August. Do you want to meet up? Do you know someone I should meet? Is there something I should go to in that time? What are your transatlantic travel top tips? Thanks, and possibly hope to see you soon!
Though I haven’t posted in a long time, I haven’t forgotten this blog. Every couple of weeks or so I come by and clear out all the comment spam, which gets in despite the CAPTCHAs. Spam comments are the cobwebs of the blog world, I know, but I promise: the place isn’t covered in cobwebs because it’s been abandoned, but because the spiders work fast!
Luke Parrish points me to what is clearly by far the most serious critique of cryonics ever written: a 57-page treatment by Evelina Martinenaite and Juliette Tavenier, presented as a 3rd semester project at Roskilde University in Denmark supervised by Ole Andersen. I want to congratulate them both on raising the bar for cryonics criticism by a factor of about ten thousand. In 1994 Ralph Merkle wrote:
Interestingly (and somewhat to the author’s surprise) there are no published technical articles on cryonics that claim it won’t work.
After 44 years of cryonics, that has finally changed.
December 22nd, 2010
Evelina Martinenaite, Juliette Tavenier
Abstract: The preservation of cells, tissues and organs by cryopreservation is a promising technology nowadays. However, the primary purpose of this science has been diverted to a doubtful technology, cryonics. Cryopreservation techniques are now being adapted with the aim of preserving people’s bodies after death in hope that in the future, medicine will be able to revive them. In this report we analyze both scientific and social issues involved with this technology. We first studied the events taking place in the cells during regular freezing. Various research experiments show that freezing causes damage to the cells. Therefore, vitrification presented by cryonics companies as an alternative, seems to be reasonable. We also looked at all the difficulties of this procedure and at the injuries that such a treatment could cause to the human body. Studies show that the vitrification procedure suppresses the injuries related to freezing but the use of cryoprotectants, although necessary, is toxic to the cells. Organs, such as kidneys, are the largest entities ever vitrified and thawed with success. By analyzing all present scientific data, we conclude that there is a limit to the size of living matter that can be cryonised effectively; therefore we conclude that it is not possible to cryonize an entire human body with the current technology without causing severe damage to it.
A brief response: Yes, cryonic preservation causes all sorts of severe damage far beyond our current ability to overcome; all of the damage discussed in this paper is well understood and widely discussed by cryonics practitioners. But the paper doesn’t quite engage with the central contention of cryonics: that so long as the information that makes up memory and personality is preserved, future technology may find a way to repair the damage caused by cryopreservation. Two distinct paths to this end are widely talked about: molecular nanotechnology, and scanning/WBE. As far as I can tell, the paper makes no argument that human cryopreservation causes information-theoretic death, and neither of these repair options is discussed at all. In fact it observes little more than the well-understood fact that reanimation is not feasible with current technology. As a result, this paper, while it is vastly ahead of the arguments made by other critics of cryonics, is some way behind the arguments already considered and answered by cryonics advocates.
Radio 4’s weekly obituary program devoted five minutes to the death of cryonics founder Robert Ettinger, and spoke to longstanding cryonics foe Professor David Pegg of the University of York. Here’s a transcript of his broadcast remarks:
Pegg: There are three areas of damage (if you like) which have to be undone for this process to work: one is to bring the dead back to life, another is to cure the thing which caused them to be dead, and the third thing is that the process of preservation should not inflict any damage. It is possible to achieve effective cryopreservation of single-celled systems. What is not possible is to adapt that to multi-cellular systems. So although it might be advantageous to be able to cryopreserve kidneys for example, for transplantation, that is not possible. And the reason that it’s not possible is that the ice, which is innocuous to the cells in themselves, destroys the structure and the kidney will no longer function.
Interviewer: Well now, Robert Ettinger’s supporters would say you’re just being very very shortsighted, you’ve just got your head in the sand, because of course there are problems at the moment, but in the future, science will overcome all of these problems.
Pegg: Yes, well that’s an item of faith. it’s not a question of — a scientific question. The fact is of course that we can’t predict the future, and we don’t pretend to do so. But nor can they.
Pegg (later in the programme): We are making progress, but we haven’t got the problem licked just yet. If that problem could be solved, then it would perhaps be reasonable to ask the question whether this should lead to the cryonics endeavour in practice. But at the moment, it’s frankly just premature.
Two technical points, both from Alcor’s cryonics myths page:
- Cryonics providers have used vitrification to eliminate ice crystal formation for a decade now, and even before that other cryoprotectant techniques greatly reduced freezing damage.
- While cryopreservation of kidneys is not yet advanced enough to be used for human transplants, it was demonstrated in rabbits in 2005. One of the rabbits tested survived for 48 days with the cryopreserved kidney as its sole kidney, before being euthanized for histological follow-up.
Perhaps Pegg was taking shortcuts because of the time pressure of radio; sadly, however, he has not taken the time to present a technically accurate argument in any forum. I once again urge him to do so.
In further correspondence with Doug, he points out this rather odd sentence in Ben Best’s Molecular Mobility at Low Temperature:
Diffusion in a vitrified cryonics patient would presumably not be due to concentration gradients because there should be not concentration gradients.
Surely there can be no concentration gradients only if the whole sample is homogeneous? I’m pretty sure standard cryonics practice doesn’t involve putting the brain in a blender a la Britannia Hospital. I’ll email Best and ask for clarification.
[This is another guest post by Doug Clow — thanks Doug! I asked a question on LongeCity: is the 29 kJ/mol figure for the activation energy of the “catalase reaction” given in How Cold Is Cold Enough correct? Doug was kind enough to give a detailed answer, and permission to edit it a little and reproduce here.]
I did do a chemistry degree, with a lot of biochemistry in it. As a postgraduate student I even attended a research seminar by another postgrad who was investigating catalase analogues, which almost certainly touched directly on the question. But that was long ago and I haven’t done this stuff in anger for decades.
Alas, I don’t have good data books to hand and can’t answer the direct question (“What is the activation energy of the decomposition of hydrogen peroxide when catalysed by catalase”) authoritatively.
Partly, it’s because there isn’t a single answer, and anyone who tells you there is is fibbing. There are shedloads of different catalases (more if you include general peroxidases). They are indeed legendarily fast, and they are more or less ubiquitous in oxygen-metabolising species. The story goes that it’s as perfectly evolved an enzyme as you can hope for. It’s not a bad choice for the worst-case scenario for this context, although I wouldn’t go as far as to say it was the very worst without checking up for other very-fast enzymes in metabolic pathways and signal transduction (e.g. acetylcholinesterase is also legendarily fast). Which would be overkill.
I’d say using any value between 1 kJ/mol and 20 kJ/mol is not unreasonable, and if you pressed me for a value, I’d probably settle on 10 kJ/mol as a round value. (See e.g. http://www.ncbi.nlm.nih.gov/pubmed/8320233 which gave 10 kJ/mol for a catalase from a halophile bacterium — no reason for choice except I alighted on it quickly, or http://www.sciencedirect.com/… which found 11 kJ/mol but looks odd for several reasons.)
At the very top end, a value of 50 kJ/mol for a reaction that happens at a reasonable rate for practical experimental purposes at room temperature is fairly typical. There’s a (sorely abused) rule of thumb that says that reaction rate doubles with an increase of 10 C, which only applies under fairly restrictive conditions, one of which is that the Ea is 50 kJ/mol.
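That rule of thumb is easy to sanity-check numerically. A minimal Python sketch, assuming a 25 °C to 35 °C step and Ea = 50 kJ/mol (the exact factor depends slightly on which temperatures you pick):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_ratio(ea, t_low, t_high):
    """Ratio of Arrhenius rate constants k(t_high)/k(t_low), assuming the
    pre-exponential factor A is the same at both temperatures."""
    return math.exp((ea / R) * (1.0 / t_low - 1.0 / t_high))

# With Ea = 50 kJ/mol, a 10 C rise near room temperature
# gives a factor of about 1.9 -- roughly the "doubling" rule.
print(arrhenius_ratio(50_000.0, 298.0, 308.0))
```

The exponential factors cancel into a single `exp` of the difference of reciprocal temperatures, which is why only the step size and Ea matter, not A.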
This does, of course, yield materially different results. I tried duplicating that big table in ‘How Cold is Cold Enough’ in a toy spreadsheet, and couldn’t quite reproduce his results, but did get within an order of magnitude which is close enough for these purposes. I played around looking at his ‘Rate relative to liquid N2’ column, for different values for the activation energy.
- 50 kJ/mol -> 2.2 x 10^25 times faster at 37C than at LN2
- 20 kJ/mol -> 1.4 x 10^10 times faster
- 10 kJ/mol -> 1.2 x 10^5 times faster
- 8 kJ/mol -> 1.1 x 10^4 times faster
- 5 kJ/mol -> 340 times faster
- 2 kJ/mol -> 10 times faster
- 1 kJ/mol -> 3.2 times faster
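These relative rates follow from the exponential factor alone. A toy Python sketch, assuming T = 310 K for body temperature and T = 77 K for liquid nitrogen; like my spreadsheet attempt, it lands within an order of magnitude of the published table at the top end and matches closely at the low end:

```python
import math

R = 8.314       # gas constant, J/(mol*K)
T_BODY = 310.0  # 37 C in kelvin
T_LN2 = 77.0    # boiling point of liquid nitrogen, K

def rate_ratio(ea):
    """How many times faster an Arrhenius-law reaction with activation
    energy ea (J/mol) runs at body temperature than at LN2 temperature,
    assuming the pre-exponential factor A is unchanged by cooling."""
    return math.exp((ea / R) * (1.0 / T_LN2 - 1.0 / T_BODY))

for ea_kj in (50, 20, 10, 8, 5, 2, 1):
    print(f"{ea_kj:>2} kJ/mol -> {rate_ratio(ea_kj * 1000.0):.1e} times faster")
```

Note the assumption baked into `rate_ratio`: A is held constant across the whole temperature range, which (as discussed below) is exactly where this kind of analysis breaks down for a vitrified solid.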
For the question at hand, the choice of activation energy makes a huge difference to this analysis. But the analysis itself is likely to be wrong anyway.
A quick look at the Arrhenius equation:
k = A e^(-Ea/RT)
Let’s take a very, very simplified reaction, where one molecule of reactant hits one catalyst to produce one product. The pre-exponential factor A represents the number of collisions that occur; the bit in the exponent tells you what proportion of those collisions have energy above the activation energy for the reaction.
Now, mathematical instinct might tell you that the bit in the exponent will give you all the action, but that’s not necessarily true. For practical purposes, the rate of catalase in vivo is limited by the rate at which molecules collide, not by the proportion of the colliding molecules which have greater than the activation energy needed for the reaction. Essentially, if a molecule of hydrogen peroxide bumps into catalase, it’s breaking down. My biochemical intuition is likely to be sorely astray at LN2 temperatures, but I’d guess the same situation applies.
The pre-exponential factor A is probably the more important term: it’s the rate at which collisions occur between molecules that might react. If you have a perfectly efficient catalyst, this is the main factor affecting the rate of reaction, which makes sense, since a perfect catalyst would reduce the activation energy to a negligible value. Some enzymes, and catalase is an excellent example, have been under geological periods of selection pressure in that direction. The Arrhenius equation is a simplification that works (better than it ought to) across a lot of practically important situations. (One simplification is that the activation energy is assumed not to be temperature-dependent. It sometimes is.)
If you get a phase change to solid — vitrification at very low temperatures — then you’ll get a staggering-number-of-orders-of-magnitude change in A. Those molecules are going nowhere fast, and so are flat out not going to bump into each other. Never mind how much energy they’ve got when they do.
So I think that all is not lost for cryonics on this point.
Over the past forty years, science has built up a substantial body of experimental evidence that highlights dozens of alarming systematic failings in our capacity for reason. These errors are especially dangerous in an area as difficult to think about as the future of humanity, where deluding oneself is tempting and the “reality check” won’t arrive until too late.

2pm-4pm, Saturday 3rd July: Room 416, Fourth floor, Birkbeck College, Torrington Square, LONDON, WC1E 7HX (map)
How can we form accurate beliefs about the future in the face of these considerable obstacles?
This talk will outline ways of identifying and correcting cognitive biases, in particular the use of probability theory to quantify and manipulate uncertainty, and then apply these improved methods to try to paint a more accurate picture of what we all have to look forward to in the 21st century.
Room 416 is on the fourth floor (via the lift near reception) in the main Birkbeck College building, in Torrington Square (which is a pedestrian-only square). Torrington Square is about 10 minutes walk from either Russell Square or Goodge St tube stations.
Beforehand and afterwards, we’ll be meeting up in the pub: any time after 12.30pm, in The Marlborough Arms, 36 Torrington Place, London WC1E 7HJ (map). To find us, look out for a table where there’s a copy of the book “The Singularity Is Near” displayed.
I’ve been absolutely drowning in blog spam of late, so I’ve switched on the CAPTCHA option in ByteFlow. Sorry for the inconvenience. It supports reCAPTCHA in theory, but I couldn’t get that to work.
[A guest post from Doug Clow. This was a comment in this article on Bostrom and Sandberg’s Whole Brain Emulation: a Roadmap, but given its length and substance I am with permission putting it here as a new blog post.]
I too am short of time, but have given this paper a quick run through. Here are some unstructured and unedited quick notes I made while I was at it. Apologies for brevity and errors — I almost certainly missed some of their points and have misrepresented parts of their case.
It does seem to be a serious and reasonably well-informed piece of work on speculative science and technology. Emphasis on the speculative, though — which they acknowledge.
The distinction between emulating a brain generically (which I reckon is probably feasible, eventually), emulating a specific person’s brain (which I reckon is a lot harder), and emulating a specific dead person’s brain (which I reckon is probably not possible) is a crucial one. They do make this point and spell it out in Table 1 on p11, and rightly say it’s very hard.
p8 “An important hypothesis for WBE is that in order to emulate the brain we do not need to understand the whole system, but rather we just need a database containing all necessary low‐level information about the brain and knowledge of the local update rules that change brain states from moment to moment.”
I agree entirely. Without this the ambitious bit of the enterprise fails. (They make the case, correctly, that progress down these lines is useful even if it turns out the big project can’t be done.) I suspect that this hypothesis may be true, but we certainly need to know a lot more about how the whole system works in order to work out what the necessary low-level information and update rules are. And in fact we’ll make interesting scientific progress – as suggested here – by running emulations of bits of the brain we think we might understand and seeing if that produces emergent properties that look like what the brain does. Actually they say this on p15 “WBE appears to be a way of testing many of these assumptions experimentally” – I’d be a bit stronger than that.
Table 2 on levels of emulation makes sense. My gut instinct (note evidence base) is that we will need at least level 8 (states of protein complexes – i.e. what shape conformations the (important) proteins are in) to do WBE, and quite possibly higher ones (though I doubt the quantum level, 11, is needed, but Roger Penrose would disagree). Proteins are the actually-existing nanobots that make our cells work. The 3D shape of proteins is critical to their role. Many proteins change shape – and hence what they do or don’t do – into a smallish fixed number of conformations, and we already know that this can be hugely important to brain function at the gross level. (E.g. transmissible spongiform encephalopathies – mad cow and all that – are essentially caused by prion proteins in the brain switching from the ordinary shape to the disease-causing one.)
The whole approach is based on scanning an existing brain, in sufficient detail that you can then implement an emulation. I think that’s possibly useful, but I think a more likely successful route to a simulated (!) intelligence will be to grow it, rather than to bring it into existence fully-formed. By growing, I mean some process akin to the developmental process by which humans come to consciousness: an interaction between an environment and a substrate that can develop in the light of feedback from that environment. But based on their approach, their analysis of technological capabilities needed seems plausible.
The one that leaps out as really, really hard (to the point of impossibility in my mind) is the scanning component. There is the unknown of whether the thing is doable at all (what they call scale separation), which is a biggy, but falsifiable by trying out experiments in this direction.