Two pages on technology


Yep, it’s That Argument, and I have been following along while such folk as Daniel Dennett and John Searle had at it since 1980 or so.

On the gripping hand, I don’t want Flat Black to be Transhuman Space. Would you be happier if destructive brain scanning was too slow to outrun post-mortem changes? If the State and real Bob’s heirs-at-law saw a body on the slab and cash in the bank?


My feeling is that in a world with humans in it there won’t be an absolute decision one way or the other (or if there were, it would imply things about philosophical consensus which should change the flavour of the world). At this level of detail, it might be “few customers consider that continuity of existence, but there are some well-publicised exceptions”.

If there’s ongoing argument about it in the world, you might have some of:

  • the emulation of your mind is a distinct person that happens to have your memories. So it isn’t legally you. (But you get round this by hiding your assets in such a way that they legally belong to whoever comes along and gives the right password.)
  • the emulation of your mind is not a legal person at all. (You don’t have citizen AIs, right?)
  • body repair is a better option even for believers, until it gets so expensive as to be impractical.
  • the brain scan is considered murder unless you’re already dying (cf cryonics now).


No, but not for the reason that there are classic AIs that are denied legal personhood. The reason is that no-one makes classic AIs because they aren’t good for anything. If you’re programming a doctor, a civil engineer, or a town planner you don’t write code for sexual intrigue, you don’t write code for wanting to play soccer, and you don’t write code for political motives.

A mechanical mind could be built/programmed that emulated all the faculties and functions of a human mind, but they aren’t. Minds are made with the faculties and functions that they need for their purpose, which sometimes includes some copies of human faculties (e.g. conversational and linguistic functions for talking with people), but usually omits most of them, sometimes does the same job in a different way, and often has cognitive functions that humans just don’t have. Intelligence isn’t a single thing that is applied to all the different activities of the mind. There is no special faculty that objectively constitutes “intelligence”, “sentience”, “sapience”, or any other word that we come up with for the Self. There is no self. Just a toolbox full of monkeying routines.


My draft did say “few customers consider…”, not “no-one considers…”. But my supervisor told me that if anyone misunderstands then by definition your text is unclear. So how about this:

Brains, embodying minds, can be assembled synapse by synapse. The technology is used in making androids and to restore function after brain injuries, and could in principle make a copy of a brain that had been scanned with sufficient resolution. Few legal systems, neurologists, or customers believe that that offers a continuity of identity. Use of the method is rare.


I’m not sure what specifically “metaphysical” means here. Could you comment on the intended meaning? Maybe I can come up with a word that I would find more transparent. . . .


There is a book by Aristotle that is called τὰ μετὰ τὰ φυσικά because it was the next one on the shelf after Φυσικὴ ἀκρόασις (the “Lectures on Nature”). “Metaphysical” means “after the natural [works]”, but that doesn’t signify anything about its content.

Anyway, Aristotle’s Metaphysics deals with a rag-bag of topics such as ontology and epistemology that logical positivists assert have no field of study. It is the beginning of a raging 2,300-year That Argument about things that don’t even exist. The argument is called “metaphysics”, and has expanded to deal with, if it did not already cover them in Aristotle, the Mind-Body Problem and the Problem of Identity.

If you take a particular position on the Mind-Body Problem (physicalism) and a particular position on the problem of identity you will reach the conclusion that to have a copy of yourself made in Chicago and then shoot yourself in the head is a form of transportation*. There is even a position that leads to the conclusion that running a simulation of your brain on a computer in Chicago would count.

If you go even further than that it seems that a copy of your brain that was significantly different from your brain would still be you, provided that you were dead†. To people who accept that it seems that to have a copy of your brain made that has been edited to juvenilise senile features, and then to top yourself, is a form of immortality. Roger posits that, even though this position is absurd, a lot of people who are facing imminent death or senile dementia, having no other hope, will act on this belief because Buckley’s chance is better than none. Also that people with the necessary skills could be found to do the copying and killing‡ involved.

In Flat Black the necessary position on the Mind-Body Problem (physicalism) has been established empirically. It is demonstrated that anything immaterial about the mind has no causal effects on anything observable. What I want to say is that few people are sufficiently convinced of the proposition that similarity is identity that they will actually go through with expensive suicide under the delusion that it is transportation or rejuvenation.

Perhaps “ontological” is the word I want.

* Many explanations for how teleportation might work amount to this, “beaming” in Star Trek among them. Some time in the last decade or so I read an SF story in which the interstellar transport system did work this way. The central character was responsible for disposing of the corpses and trying to discourage people from thinking about it too much. Something went wrong, there was a delay in confirming the signal, and he delayed killing the passenger with the usual huge dose of x-rays. The delay dragged on, he had to let her out of the transport booth. Then when the signal was confirmed a few hours later he had to beat her to death with a crowbar.

† After all, the future you is going to be different from the current you anyway, and identity is continuous through time, right? (See: the Problem of Theseus’ Ship.)

‡ In the first Flat Black campaign the PCs met a very old, very rich man and the lifelike robot in which his mind emulation was running. The doctors who had been engaged would not commit the necessary murder, and the old guy chickened out of the suicide. He died after a while, and one of the PCs married the robot.

¶ Back in '83 I took a sophomore philosophy course in the philosophy of the mind. Only got a Credit, though. If you’re interested in the field I recommend The Mind’s I by Hofstadter, D. R. & Dennett, D. C., Penguin Books, 1981. ISBN 0-14-006253-X. It’s an anthology of essays and extracts from the work of people who take different views on this subject, with commentary by Dennett and Hofstadter, and a lot of it is charmingly written.


I’ve replaced the paragraph anyway, but what I meant here is that the available technology could be used to change people’s minds, add sensory and motor faculties, and create “identical” or rejuvenated copies of people’s minds. But it isn’t done very often, because it would imply changes to self that most people won’t accept because they don’t believe that it would amount to continuity of identity.

If I used android technology to build myself, say, a centaur body, the damned thing would be super awkward to inhabit because my motor and sensory homunculi don’t have enough limbs. I could cut out my primary sensory cortex and my primary motor cortex and replace them with six-limbed substitutes, hooking up the synapses as necessary to connect it all to other functions in my brain. But that comes so close to building a deluded centaur and then jumping into a meat grinder that I don’t think I’d want to.


We spent a lot of time on philosophy of mind in my first philosophy course at UC San Diego. Since then I’ve read discussions of the subject by Dennett and both Churchlands, among others; I rather like Patricia Churchland’s comment about “no spooky stuff,” which is very consistent with my outlook.

I asked about what you meant partly because “metaphysics” often means “spooky stuff.” For example, all three of J.B.S. Haldane, Nathaniel Branden (Ayn Rand’s leading disciple for many years), and C.S. Lewis offer an argument against materialism based on the idea that without spooky stuff, you can’t make sense of a claim to know anything—though only Lewis carries it as far as belief in the Holy Spirit and the immortal soul. Such a meaning seemed unlikely, given that Flat Black is a setting where physicalism is generally accepted.

On the other hand, the technical meaning of “metaphysics,” as the subject that primarily encompasses ontology, didn’t quite seem to fit either.

As an Aristotelian, I consider what you are calling “similarity” to amount to specific or generic identity, that is, being of the same kind, in the way that my copy of the GURPS Basic Set is a copy of the same books that, I presume, you have on your shelves. But I think that survival is a matter of numerical identity, and that to suppose I can survive by a copying process is to confuse different sorts of identity, as Frank Tipler did, for example, in The Physics of Immortality…

But it seems to me that the metaphysical implications of editing the brain cannot very well relate to this. The changes involved don’t seem to alter numerical identity; it’s still the same brain, after all, just subjected to surgery. “Metaphysics” may not be the right word for what is being lost. It’s more as if you proposed to take a person whose gender was female, and surgically transform their body to male—that is, to give them an anatomical configuration that they would experience as “not me.”

I think maybe it would be better not to look for a particular single adjective, but to use a phrase: “the implications for personal identity.”


The premise that it’s already established that human cognition and motivation are fully explainable in physicalistic terms seems in its own right to entail a level of philosophical consensus that our world is nowhere close to having reached.


I am not so much an Aristotelian as a Heraclitean. When you ask me which ship is really Theseus’ I answer “why are you asking?”. For some purposes the ship in the museum ceased to be the same when you replaced the first halliard. For other purposes it is the same after every original stick is gone. None of these senses is objectively the real meaning of “identity”, because there is none. It’s just a word, definable at convenience.


Exactly so. In 1995 I exchanged with Marvin Minsky some disparaging remarks about Roger Penrose, but Penrose got appointed to the Order of Merit and we didn’t. The Pope enjoys wide moral authority on the basis, ultimately, of a dualist position on ontology that would be indefensible if, as in Flat Black or Transhuman Space, the physical basis of cognitive phenomena were well understood.

But the neuroscientists are making rapid progress. Before the century is out we will either know that Cartesian souls must exist or that they explain nothing. I don’t see a way to avoid taking a position on what the outcome will be.


I see that we have fundamental disagreements on both epistemology and metaphysics. I don’t think this is a suitable forum for exploring them. But I will say that the concept according to which it’s a different ship as soon as you replace one board can’t be applied to human beings or most other living organisms; a human being that doesn’t take in new oxygen molecules and release new molecules of carbon dioxide and water will very quickly be a human corpse, and the question of its identity will be purely one of forensics.


Indeed! I think that metaphysics is pretty much vacuous.

Well, it can be so applied. It just isn’t useful or convenient to do so for many purposes. It does fine for the case where you have two signifiers and want to discuss whether they have only one referent. It’s only when the continuity of identity through time is at issue that Leibniz’s definition becomes useless and inconvenient.

Right. The courts have purposes for which one meaning of “identity” is useful and others not. But that doesn’t make the courts’ definition of “identity” right and Leibniz’s definition wrong. It’s just semantics. There is no objectively real identity. Metaphysics does not discuss the nature of identity, it discusses the semantics of “identity”.


Well, if you’re going to reject the very concept of “metaphysics,” it’s probably better not to use it in your description of what potential customers of brain rewiring are objecting to. Presumably your own cautionary attitude toward the process would not be founded on metaphysical implications, but on some other sort of concern; you could just as well suppose that most customers have similar concerns. If it wouldn’t be appropriate to call those concerns “metaphysical” in your case, I don’t see why you would call them that for other people, such as potential player characters.

The issues here, in other words, seem to be—perhaps “pedagogical”?

Your revised passage avoids that terminological pitfall, and that’s a good thing. However, it no longer spells it out that the scanning that would allow construction of a replacement or prosthetic brain necessarily destroys the original brain. That might be relevant to explaining why the process isn’t generally thought of as allowing “continuity of existence” (which I think is a better phrasing than “continuity of identity,” since one of the metaphysical questions here is whether “identity” requires continuity over time or not).

Your views, by the way, seem very much in the spirit of David Hume’s famous peroration:

If we take in our hand any volume of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames. For it can contain nothing but sophistry and illusion.


I accept that “identity” has useful meanings; I only deny that it has a unique true meaning that can be discovered by conjecture and speculation. I’m told that my derision for the content of metaphysics is not a rejection of metaphysics but a position within it.

Anyway, I have decided to spend fewer words in defending the omission of things that Flat Black is without, to save space for listing and defining the things it has. I describe the fusion-powered water-propelled shuttles rather than noting that transporters have no physical principle to work on and are dubiously transport at all. Further revisions may come in which I describe AIs that equal humans in the professional tasks they are made for while having less than 5% of the cognitive faculties and processing power needed to be a prolific hunter-gatherer.

As for destructive brain-scanning, I have decided that the possibility of non-destructive scanning makes the transhumanist technology that I don’t want used less appealing, and therefore likely more expensive and rarer.


I don’t think I was ever supposing such a thing, though. My original discussion said that “I consider what you are calling ‘similarity’ to amount to specific or generic identity, that is, being of the same kind, in the way that my copy of the GURPS Basic Set is a copy of the same books that, I presume, you have on your shelves. But I think that survival is a matter of numerical identity, and that to suppose I can survive by a copying process is to confuse different sorts of identity.”

I believe that what I am calling “numerical identity” (following the usage of Aristotle’s translators and interpreters) may be, or be close to, your conception of continuity of identity through time. That may not be the only way that “identity” can be defined, but if we are talking about whether the Bill Stoddard who has been provided with a centauroid body, and the brain wiring to represent it, or the digital system that remembers being Bill Stoddard, actually is the same person as the Bill Stoddard who is typing this, continuity of identity through time seems to be the most obviously relevant.


I don’t think I’m following your reasoning here. Are you saying that if it’s possible to scan my brain and create a digital model of it, and of me, that shares my memories and has a similar self-awareness, that option makes it less likely that people will do transhumanist things? Do you count that sort of digital modeling as a transhumanist thing? Or alternately, are you counting it as something NOT transhumanist that would deter actually transhumanist things such as, perhaps, destructive upload?

In either case, I’m not asserting that you’re wrong. I’m just asking for author clarification, because I feel as if I missed a step.


Right, and my position is that whether they are the same or not depends on the definition of “same”, different definitions being convenient and useful for different purposes. A forensic pathologist trying to determine the identity of a corpse finds one sense convenient and useful, while the deluded AI seeking control of the decedent’s assets prefers another. The question as to whether the 108-kilogram grey-haired economist sitting at my keyboard is identical with the 3-kilogram red-haired baby that my mother bore in September 1964 is best answered with “why do you need to know?”

The boy is father to the man.



I had an element in a late adventure in the first Flat Black campaign in which the PCs met an ageing and failing ex-trillionaire in the care of an AI robot who had been intended as his immortal continuation by non-destructive mind transfer. The project had failed, not for technical reasons, but because it was patently obvious to everyone involved that the robot and the old man were not the same thing, and that the old man blowing his head off wasn’t going to make them identical.

Suppose that you spend a fortune making an “identical” copy of Bill Stoddard, which then cleaned out your bank accounts, diverted the royalties of your GURPS titles, and ran off with your wife. The example would be a poor advertisement for the more nonsensical parts of transhumanism. Its hiring a hit-man to have you killed would not diminish the promoters’ feeling of chagrin.

Mostly I consider it a pointless technical tour-de-force that would lose all appeal when the novelty passed. What are such simulations for?


I might want the simulation to be available to help C with practical issues that I can cope with better than she can. Or I might want it to carry on running the last campaign of my life for however long the players remain interested. If it can be created without killing me in the process, and if it doesn’t cost more than a small percentage of my net worth, either motive might be worthwhile.