Killer robots, even AI ones, are banned in “Flat Black”

I don’t think it’s necessary to perceive those attributes, let alone ‘combine’ them (in a single grandmother neuron?), for evolution (and eventually language) to take them for granted. Each of the octopus’ arms can only talk to one shared brain. We animals all process a lot of information out in our bodies, and in the world outside (e.g. making artifacts, moving stuff around and finding it there later, causing behaviour in other organisms) but when we turn on the spot or displace a fluid (or move at all), the boundary of the region we directly control is physically manifest, whether or not we notice it. In other dimensions, a ‘unitary self’ is picked out by other properties: e.g. you are the only one with your actual memories, and as long as you live there’ll be a real universe that you’re living in.

The handy assumption of unity is as simple as not being able to count past one: individual is as individual does, regardless of the complexity or simplicity of any internal representation. In that sense, all matter has some of it. It takes language, and at least a bit of neuroanatomical knowledge, for the idea of the self not being unitary to be conceivable. (It’s like the free-will/determinism idea in that way, and both these remind me of ‘What the Tortoise Said to Achilles’, the moral of which could be: reality doesn’t care about our language games.)

1 Like

Are we talking about the unit of natural selection, or about self-perception? I don’t think it’s obvious that those two have to be the same.

The Buddha conceived of the Self not being unitary — he described it as a composite thing, made up of skandhas (aggregates), in his famous argument against it being immortal or reincarnated. He managed that without knowledge of neuroanatomy.

1 Like

Premise 2 says that you can meaningfully (epistemically) talk about mental things, operations, cognition, etc., and have them be wholly distinct from material or physical things. In other words, all of the discussion of thoughts, emotions, and mental stuff is not just reducible to physical properties. Or to put it another way, I can change the physical properties and not necessarily change the mental properties. One potential implication here is multiple realisation: different hardware (physical systems) could implement the same kinds of mental properties, and alien brains might have mental properties like our own. If, on the other hand, you are a reductionist about mental properties, then they do not really exist. Sure, they exist in that they are concepts we talk about, but ultimately we could reduce them to more fundamental things, physical laws/descriptions. We do not learn anything from them that we could not already learn from the physical laws which govern the physical states that they can be reduced to (we can explain them away). So happiness as a mental property, for instance, is the exact same thing as the particular set of neuronal and related physical states that I find myself in (when I am happy).

The non-emergentism is, I will admit, one I had issues with as well. It seems that physical emergentism and the kind envisioned by philosophers are different. Emergentism (or its opposite, reductionism) is a very slippery topic; Nagel has his own definition, but there are many others:

https://plato.stanford.edu/entries/properties-emergent/#EpiEme

Nagel is referring to epistemological emergentism: the idea that, when analysing these higher-order or emergent states, we do not learn anything new that we could not already have learned from the lower-level description. As you have pointed out, this premise and premise 2 seem to be at odds. In effect, if both are true, then weird things must result, e.g. everything must have mental properties. Personally I take issue with premise 2 especially, but with 4 as well. By the way, the kind of emergentism talked about here is much stronger than the kind found in statistical mechanics, where, if we could know the parameters of every microstate of a container of gas (the momentum of every gas molecule, not just a statistical average), we would have full knowledge of the macroscopic states of the gas.

If, by the way, you have some spare hours, the Stanford Encyclopedia of Philosophy is a very good resource. Used and maintained by actual philosophers, it seems to be the one thing they all agree on:

https://plato.stanford.edu/index.html

Sorry this reply is half formed, I have to finish getting my son ready for school.

I also think we are slightly sidetracked, if in an interesting way.

Brett: Is the purpose here to come up with a coherent reason why killer robots and AI are banned or to determine if they could even exist or something else?

I will stipulate that, and note that Hume came up with comparable ideas in the mid-18th century, when nothing remotely like neurology existed. But Buddha, at least, didn’t present this as an obvious truth that anyone could grasp. He spoke of the realization of no-unitary-self as an aspect of enlightenment, to be attained through the long practice of spiritual disciplines, preceded by patient and insanely repetitious teaching by himself and his disciples. If no-unitary-self were a natural, spontaneous experience for human beings, I don’t think we’d have any need for anything like Buddhism.

1 Like

And for some topical but light-hearted humour:

2 Likes

Well, let’s go back to thermodynamics when we talk about reductionism.

Here is a gas. Its molecules have a mean kinetic energy, from which we can define its temperature. But that’s only true because they’re all moving in different, random directions. If you had them all moving at the same speed on parallel vectors, the internal temperature of the resulting mass would be zero; what you would have would be a more or less tenuous body acting as a sort of projectile. So you can’t specify temperature solely by knowing the properties of the molecules; you have to know that they form a certain pattern, a certain order or disorder. Does that pattern count as part of the physical description?
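(A minimal numerical sketch of that point, assuming nothing beyond textbook kinetic theory; the nitrogen mass and the 500 m/s figure are just illustrative numbers. Kinetic temperature is mean kinetic energy measured in the centre-of-mass frame, so molecules that all share one velocity vector register zero kelvin however fast they go:)

```python
import numpy as np

rng = np.random.default_rng(0)
k_B = 1.380649e-23          # Boltzmann constant, J/K
m = 4.65e-26                # mass of an N2 molecule, kg
N = 100_000                 # number of molecules

# Random thermal motion: Maxwell-Boltzmann velocities at a nominal 300 K.
sigma = np.sqrt(k_B * 300.0 / m)            # per-axis velocity spread
v_random = rng.normal(0.0, sigma, (N, 3))

# The same molecules, all moving on parallel vectors at one shared speed.
v_parallel = np.zeros((N, 3))
v_parallel[:, 0] = 500.0                    # everyone at 500 m/s along x

def temperature(v):
    """Kinetic temperature from velocities in the centre-of-mass frame."""
    v_thermal = v - v.mean(axis=0)          # subtract the bulk motion
    ke = 0.5 * m * (v_thermal ** 2).sum(axis=1).mean()
    return 2.0 * ke / (3.0 * k_B)           # <KE> = (3/2) k_B T

print(f"random directions: T = {temperature(v_random):6.1f} K")
print(f"parallel vectors:  T = {temperature(v_parallel):6.1f} K")  # 0.0 K
```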

I’d also note that I can have a piece of metal, say a pot on the stove, whose molecules have a mean kinetic energy that corresponds to a temperature of 373.15 K; and that means that they are holding together in a stable pattern, with the electron gas within the metal vibrating with a certain vigor. And in the pot I can have water, whose molecules are not just vibrating but rotating, and shifting position in a fashion that gives rise to convection currents, which is a quite different property. And some of the water will have turned into steam, within which the molecules are more widely spaced, and fly about (translate), and form bubbles whose low density results in the weight of the water pushing them up, and that’s a different physical state. But all of them have the same thermal property of being at 373.15 K! Heat will not flow from one substance to another when this is true, as stated by the Zeroth Law of Thermodynamics. So then does that mean that temperature, because it can be realized in radically different physical processes, is not a physical property?

I actually think that emergent properties are reducible properties; and that in panpsychism you have consciousness treated both as a nonreducible property (since it can’t be equated to a configuration of physical entities, states, and motions) and as a nonemergent property (since it’s postulated to be present in the basic particles or whatever). What I’m having trouble with is envisioning a system where consciousness is emergent but nonreducible, or where consciousness is reducible but nonemergent.

As for Buddha and Hume, my wife C showed that to me a few days ago. I think it would have been interesting to have Epicurus take part in the discussion as well.

2 Likes

Going sideways a bit, a lot of psi powers in SF came from contemporary theories - way-out, perhaps, and never respectable, but real theories nonetheless - that human psychic potential existed and could be improved. The writers at the time got very sniffy about making sure it was called “psi”, because it was serious business, unlike “magic”. As such theories have been fairly comprehensively demolished, psi doesn’t look any more respectable than magic so it can’t distinguish itself any more.

(I’m not saying “psionic” because I believe that was originally intended to mean psi + electronic, though of course in practice the term has drifted.)

I have a range of purposes, and I strongly suspect that other participants have yet others.

There is a clear Doylist reason that in Flat Black killer robots are banned and human-like AI is vanishingly rare. I think that players want to play human characters who occupy certain roles in the setting and story, confronting human NPCs in other roles. If I allowed AIs to displace humans from those roles the setting and action in it would turn into Nineties and Noughties radical hard SF instead of the Vancean rationalised planetary romance that I am aiming for. Glinnes Hulden is a retired space marine, Glawen Clattuc a police detective and park ranger; how could Flat Black be Vancean if these roles were taken over by killer robots and a ubiquitous surveillance network? The challenge, then, is to devise Watsonian reasons to rationalise the Doylist decision, and to ensure that they neither do violence to the suspensibility of disbelief, nor imply undesirable things about other aspects of the setting.

I have rationalisations that I think are satisfactory. My main purpose in this conversation is to explain my Watsonian reasons for the absence of robots and Minds from the field of view of PCs in Flat Black, and to have a bunch of intelligent and thoughtful gamers check that those reasons are coherent, plausible enough, consilient with the rest of the setting, and do not imply nasty logical consequences, all to a “good enough for gaming” standard.

In addition I have an ulterior purpose, which is that my clinical psychologist suggests that my bucolic and rather isolated lifestyle does not afford enough grist for my intellectual mill, and urges me to engage more in sustained intellectual discussion with peers who share my interests. This is me trying to do that.

However, I don’t feel that I can impose on the generosity of @RogerBW and other participants to drudge either for my setting design and description or for my occupational therapy. I want these conversations to be interesting and rewarding for everyone who takes part — ideally, for everyone who even reads them. So I’m happy for the conversation to follow wherever it leads itself — other than to the point of calling me a fascist, of course.

1 Like

Okay, noted.

We should discuss away then and ultimately see if we can be the online discussion equivalent of ergodic-like systems far from equilibrium governed by the fluctuation theorem rather than Godwin’s Law.

That’s all cool. I personally also feel a lack of people to discuss philosophical and other knotty questions with; online conversation is a pleasure.

I’m willing to believe that general purpose AI is unattainable in a possible future. I did some research earlier this year and found that a top-end supercomputer a decade ago was just good enough to emulate one cortical column in the brain of a rat. Human cortical columns are denser, and we have around a million of them. So emulating a human brain is going to take quite a few steps of Moore’s Law—and apparently it’s running into physical limits of the current technology. I’m willing to postulate that any breakthroughs don’t give us enough added capability to make self-aware computers or human brain uploads affordable, or maybe even possible.
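(The back-of-envelope version of that extrapolation, taking the figures above at face value; the density factor is a guess, and real scaling would of course not be this clean:)

```python
import math

# Premises from the paragraph above (rough, not measured):
human_columns = 1_000_000    # ~a million cortical columns in a human brain
density_factor = 2           # guess: human columns denser than a rat's
# Baseline: one top-end supercomputer ~ one rat cortical column.

needed = human_columns * density_factor   # ~2,000,000x more compute
doublings = math.log2(needed)             # ~21 doublings of capacity
years = doublings * 1.5                   # at one doubling per ~18 months
print(f"~{doublings:.0f} doublings, ~{years:.0f} years if Moore's Law held")
```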

You might have research labs with huge budgets doing occasional whole brain emulations to try to gain insight into the mental processes of an octopus, or a raven, or something else with significantly different brain architecture. But that would be a lot different from having faithful AI companions as consumer items, or even business assets.

When it comes to that I prefer the example of statistical mechanics, which is of course closely related.

Consider a set of molecules constituting a body of gas. Each of those molecules has a mass, a position vector, a velocity or momentum vector, an angular momentum vector, a chemical species etc. etc. None of them has a pressure, a volume, or a temperature. The molecule-by-molecule description of the gas does not contain values for the pressure, volume, or temperature of the gas. But it does imply them. You can in principle calculate those values as specialised statistics of the distribution of values of position, mass, and velocity.
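(To make that concrete, here is a sketch using nothing but textbook kinetic theory; the argon mass and the litre of gas are illustrative assumptions. Given only per-molecule masses and velocities plus the container volume, the values that no single molecule possesses fall out as statistics:)

```python
import numpy as np

rng = np.random.default_rng(1)
k_B = 1.380649e-23           # Boltzmann constant, J/K
m = 6.63e-26                 # argon atom mass, kg (illustrative)
N, V = 1_000_000, 1.0e-3     # a million molecules in one litre

# The molecule-by-molecule description: just velocity vectors, sampled
# from a Maxwell-Boltzmann distribution at a nominal 300 K.
sigma = np.sqrt(k_B * 300.0 / m)
v = rng.normal(0.0, sigma, (N, 3))

# No molecule has a temperature or a pressure; the ensemble has both:
T = m * (v ** 2).sum(axis=1).mean() / (3.0 * k_B)  # <KE> = (3/2) k_B T
P = (N / V) * m * (v[:, 0] ** 2).mean()            # momentum flux on a wall

print(f"T = {T:.1f} K, P = {P:.3e} Pa")
print(f"PV / (N k_B T) = {P * V / (N * k_B * T):.4f}")  # ~1: ideal gas law
```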

In principle you could calculate all the physics of that gas from the dynamics of the individual molecules that make it up¹. The Newtonian dynamics of the molecules in the gas are fully sufficient to cause all its phenomena, such as exerting a force on the walls of its container, its buoyancy in water and the buoyancy of a helium balloon within it, the lift and drag of an airfoil passing through it, the freezing of ice cream or scorching of pizza placed within it, the sensation of a specific sound transmitted through it to a specific microphone. There is nothing to the physics of that body of gas that is not explicable in principle and caused in fact by the dynamics of its molecules. There are no supervening or wholistic causes that reach down through layers of abstraction to cause any of those molecules to do anything other than what individual dynamics cause them to do.

However, the body of gas does have properties that none of its molecules possess: pressure, volume, temperature, a speed of sound². These are what I call “emergent properties” in what I consider to be the straightforward meaning of the term, but I note that philosophers have defined such things to be “not truly emergent”, for clarity and convenience. We could calculate the pressure, volume, density, and temperature of a particular parcel of gas from information about its molecules, but we could never confirm by particular calculations, only repeatedly fail to falsify, that it had a speed of sound. What’s more, no detailed consideration of the molecular dynamics of a particular parcel of gas could derive universal generalisations about all gases.

But we can step away from the specific details of a specific parcel of gas. We can consider the Newtonian physics of molecules in general, and derive generalisations about pressure, volume, density, and temperature that apply to any parcel of gas. From general principles of Newtonian dynamics we can deduce the Universal Gas Law, Bernoulli’s Equation, the speed of sound in a gas, the specific heat capacity of a gas. The emergent properties, or statistical properties if you prefer to say that they are Not Truly Emergent, have ironclad regularities to their physics that apply regardless of the particulars of the distribution of molecular properties, and that can be worked with confidently and precisely without having to know the position and velocity of even one molecule, without having to predict a single elastic collision. We can study the Carnot Cycle, design airfoils and rocket nozzles, turbines and piston engines, derive laws of acoustics and so forth without having to consider molecules any further. Once you’ve got the Universal Gas Law you can charge off into aerodynamics, thermodynamics, and acoustics by treating pressure, volume, and temperature as things subject to their own laws, and pay the Newtonian dynamics of molecules no further heed. What’s more, you can discover those laws empirically and independently of their derivation from dynamics: Boyle’s Law, Charles’ Law, and Gay-Lussac’s Law were all discovered empirically before the molecular theory of gases was developed, and together they imply the Universal Gas Law.
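(A sketch of that last implication, for anyone who wants the two-line version; strictly, Avogadro’s hypothesis is also needed to identify the constant of proportionality as nR for an arbitrary quantity of gas:)

```latex
% Each empirical law holds one variable fixed for a fixed quantity of gas:
%   Boyle:       PV = k_1    (T fixed)
%   Charles:     V  = k_2 T  (P fixed)
%   Gay-Lussac:  P  = k_3 T  (V fixed)
% Chaining a Charles step and a Boyle step between any two states gives
\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2}
\;\Longrightarrow\; \frac{PV}{T} = \text{const}
\;\Longrightarrow\; PV = nRT
```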

Anyway: pressure and temperature are in no way independent of the positions and motions of molecules. If we already knew the positions and motions of all the molecules in a parcel of gas, measuring its pressure and temperature would not tell us anything that we could not have worked out from information that we already had. But when we step away from the particular to the general, forming the concepts of pressure and temperature allows us to deduce or empirically investigate general propositions about all nearly-ideal gases that no amount of information about particular gases could have told us. Armed with the Universal Gas Law and Bernoulli’s Equation you can go on to do further science and engineering that are completely intractable if you do not use the abstractions “temperature” and “pressure”. Gas dynamics is completely consistent with and explained by Newtonian dynamics: it deals with pressure and temperature that Newtonian dynamics knows nothing of, not in supervention of Newtonian dynamics but because Newtonian dynamics implies that it must.

Is this what philosophers would mean if they said that pressure and temperature are real? Is it what they would mean if they said they weren’t? Is this what they mean when they say that the Universal Gas Law supervenes on the dynamics of molecules, or what they mean when they say it doesn’t? I find their jargon so confused that I can’t tell.

So anyway. In Flat Black there is no spooky stuff. No Cartesian dualism, no vital principle. Everything is physical³. Just as in the extended example above, “wholistic” principles at high levels of abstraction do not reach down into lower levels of abstraction to make particles do anything; particles just follow particle physics, and the higher-level physics, chemistry, biology, neuroscience, psychology, and economics are all summaries of what particles do for particle reasons. When a sophomore philosopher says “I think, therefore I am”, that event is completely but not usefully explicable by fundamental-particle physics. A psychological explanation is more helpful than, completely consistent with, and just as correct as the particle physics.

Chemistry does what quantum mechanics implies that it must. Biochemistry does what chemistry implies that it must. Neurophysiology does what biochemistry implies that it must. Neuropsych does what neurophysiology implies it must. Psychology does what neuropsych implies it must. Sociology and economics do what psychology implies they must. It’s a single solid block of causal consistency, but each level is approachable on its own terms and can be applied without calculation from prior principles, as gas laws were discovered before molecular dynamics and can be applied directly.


¹ Including its deviations from the ideal gas laws.

² The distribution of chemical species, masses, positions, velocities, angular momenta, vibrational excitations etc. of its molecules also permits the calculation of other statistics. Some (such as, for instance, the proportion of molecules of each chemical species that are travelling faster than escape velocity) may be of interest in some situations but not others. Others (such as the proportion of molecules in the gas that are of chemical species whose formulas begin with “C”) are calculable but of no interest.

³ I share @whswhs’ objection to the usual formulation of materialism, which would require me to abjure belief in electrical fields, photons, and the curvature of space-time.

1 Like

(a) That’s pretty much what I was talking about. You’ll notice that, though I said “thermodynamics”, much of what I wrote about was the underlying molecular-level processes that we observe macroscopically as heat flow, temperature, and such.

(b) In terms of “reduction” and “emergence” it seems to me that thermodynamics can in principle be reduced to statistical mechanics, and that if you do the statistical mechanics, the thermodynamic relations emerge at the macroscopic level. Conversely, if caloric fluid were real and “heat flow” were not simply a handy metaphor, caloric processes could not be reduced to molecular mechanics, would not emerge from the motions of and forces between molecules, and instead would reflect the adding up of caloric quanta that existed at an ultramicroscopic level. (Not inherently absurd a priori; that’s how electricity works.) The analogy to physicalistic monism vs. panpsychism is intended. What I don’t see is how there can be a position that combines reduction with nonemergence, or emergence with irreducibility, and that’s why I think Nagel’s views are poorly considered, if I’m understanding them correctly.

(c) The in-principle reducibility of large-scale social changes to interpersonal interactions and psychological “forces” is of course the founding metaphor of Isaac Asimov’s Foundation stories.

(d) I have some aesthetic preference for science fiction without spooky stuff, both because I know better how to think about what the advanced science of such a world might be like, and because a lot of the contemporary science fiction I like tends that way. For example, I ran two campaigns in Transhuman Space because, though some of it was handwavy, it gave me a world without overt spooky stuff. Traveller-like SF with psionics and such has, well, an old-fashioned feel to me.

(e) You might like to know that your long comment showed up on our tablet just as we finished watching an episode of original Trek (speaking of old-fashioned SF!), and C read all of it and thought that it was clearly explained and possible to follow. Not easy—her science education topped off at a semester each of college chemistry and physical anthropology, and she had to ask me to be quiet while she concentrated—but fully understandable. So there’s a vote of confidence for your expository skills.

(f) Here’s a nice bit for you, quoted by Daniel Dennett from a review by William Bateson (one of the founders of genetics) of a book by T.H. Morgan, ca. 1914:

The properties of living things are in some way attached to a material basis, perhaps in some special degree to nuclear chromatin; and yet it is inconceivable that particles of chromatin or of any other substance, however complex, can possess those powers which must be assigned to our factors or gen[e]s. The supposition that particles of chromatin, indistinguishable from each other and indeed almost homogeneous under any known test, can by their material nature confer all the properties of life surpasses the range of even the most convinced materialism.

1 Like

So. Struggling to work out where to start.

It seems to me that a lot of people assume that there is a single thing that we may call “intelligence”, and that corresponds properly to (1) our subjective impression of conscious unity¹, (2) our impression that our friends and colleagues are well-ordered from “as thick as a whale omelette” to “as cunning as a fox that is professor of cunning at Oxford University”, (3) what a Man is, that we are mindful of him. I think those people are mistaken, or in other words, wrong. I do not believe that the human mind is a unity or that thought is identical with conscious experience. I do not believe that there is a single faculty of general intelligence that is applied to different tasks. Most relevantly, I do not believe that intelligence is what makes people worth caring about or trusting. I believe that we may and one day will create machines that think, plan, intend, perceive, remember, recognise, understand, mean, and tell, but that we ought not to trust any more than we trust a psychopath, and that we need not care for as much as I care for my dog (who is not bright, even for a dog).

Rather, I believe (and it is the case in Flat Black) that the human mind is a collection or conglomeration of processes and capabilities, some of which are so separate from the others as to be localisable into different bits of tissue and separately disabled by localised brain damage. As Marvin Minsky put it, “intelligence” emerges from the interplay of the many unintelligent but semi-autonomous agents that comprise the brain². Our minds, the functions of our brains in biochemical context, consist of a multitude of capabilities or faculties that are useful for different tasks, and that are switched on for, or applied (or, as Minsky puts it, recruited) to the problem (or rather, situation) at hand. Among these faculties are cool, rational ones that get respect from philosophers. But emotions and even passions are among them too, crafted presumably by evolution for some reason that seemed³ like a good idea at the time. There are also mental faculties that philosophers have often dismissed as trivial, but that turn out to be not only vital but also decidedly involved, such as vision, or the perceptual and motor capabilities to catch a ball, chase a mouse, or walk across the room without tripping over the pattern in the rug.

I believe that it would be possible in principle to understand and replicate these faculties, including the ability to love, to suffer, to feel satin, to enjoy Laphroaig, and to make an ethical judgement. “Of course a machine can think! I’m a machine, and I think. Don’t I?”⁴ In Flat Black it has been done. I also believe that it is possible to design and implement faculties that humans don’t even have. In Flat Black people have made not just true artificial intelligence but artificial human intelligence.

The thing is that for most uses you don’t need the entire gamut of human cognitive faculties. For any particular job most of them are redundant — wasteful or worse. There are in the human mind some faculties that were produced and shaped for purposes that were significant or even vital in the ancestral environment, such as chasing small game, or brawling, or keeping track of social alliances and antipathies in a band of hunters and gatherers, and that we now use for playing field sports and watching soap opera. Those and many others are not wanted in, say, an AI physician. The way to make an artificial intelligence for a particular purpose is not to understand and implement the psychometricians’ “g”, build a mind that is “fully functional”, and then apply it to a restricted task. It would end up getting bored and wanting to play hockey. Rather, the efficient and effective approach is to build an artificial mind with all and only the capacities needed for its purpose. That might very well include being better than a human at some things, or possessing capabilities that humans don’t have at all. It might equally include completely lacking some human cognitive capacities, such as the capacity for feeling anger. Or love. Or shame, guilt, responsibility. All due respect to Alan Turing, but the imitation game is bullshit. Intelligence is not any single capability, and most definitely it is not the capacity to engage in chit-chat.

Practical AIs in Flat Black are often better than humans at the things they do: an Imperial Marines infantry drone is only about as bright as a border collie, but it is a superhuman genius at skirmish tactics. But⁵ they have a bafflingly total lack of skill at or inclination towards routine human activities such as breathing, balance, bargaining, and bullshit. Why make a machine that can suffer, regret, repent? If you want something enjoyed, there are plenty of humans to enjoy it.

Philosophers of the mind ask “can we make a machine that ‘thinks’ as a human does?”. AI researchers ask “how can we make a machine that ‘thinks’ as a human does?”. SF creators ought to ask “why would we make a machine that ‘thinks’ as a human does?” My conclusion is that we wouldn’t.


¹ Either I’m not like other people (certainly possible), or the subjective impression of conscious unity is partly a result of not examining one’s experience very carefully.

² And the endocrine system, parts of the spine, etc. etc. Let’s not quibble: Minsky means that it is physical rather than being Cartesian spooky-stuff; he’s not denying that brain function is intimately involved with physiology.

³ Yes, I know. It’s teleological shorthand.

⁴ I quoted that on UseNet once, attributing it to Marvin Minsky, and was very surprised to receive as a result an e-mail from Marvin Minsky. We chatted about artificial minds and swapped disparaging opinions about Roger Penrose.

⁵ Except for individual ones that have been made uselessly human-like as some sort of stunt — which are made as one-offs, without economies of scale, and therefore at eye-watering expense, and which therefore are rare.

There is a very old story by Anthony Boucher, “Q.U.R.,” in which robots start experiencing derangements of function. It turns out that this results from their being burdened with humanlike body parts that are irrelevant to their jobs, and that only cause them cognitive stress. The invention of “Quimby’s Usuform Robots” solves the problem. . . .

I do think there is a certain element of “conscious unity” to humans. It comes about partly from the relatively high centralization of the nervous system (contrast octopuses, where over 50% of the nervous system is in the arms and sends severely redacted reports back to the main office), but more importantly from the existence of language. Language (a) gives us a medium in which to express our awareness, (b) gives us a tool for selectively focusing our attention, and (c) lets us describe our possible future actions in a way that gives rise to self-referentiality issues that we experience as “making choices.” A nonlinguistic lifeform presumably wouldn’t have those experiences. (And it’s interesting to consider how the ability to say “tiger” or “blackberry” might have given people more control over their attention in a selectively useful way.)

I have read it, though it did not as far as I recollect inform my thoughts about AI.

I do often think about sport and games and other amusements, though. It seems to me that we often see our neighbours and friends, or find ourselves, engaging for amusement in games that give exercise to physical and cognitive faculties that were shaped or granted for our survival and prosperity in the life of a member of a band of hunters and gatherers, and which itch from lack of employment in the agricultural and post-agricultural environments in which we find ourselves.

Very likely. Undoubtedly, even. But there are results from experiments and studies of people with brain injuries that suggest that humans have less conscious unity than appears on introspection. To some extent what appears to be self-consciousness is rationalisation and self-deception.

So we’re like my cat who likes me to throw his toy mouse, which he will catch out of the air, shake repeatedly, fling and catch, and occasionally fetch back for me to repeat the initial throw? It’s clear that his impulse to hunt has no proper exercise inside our apartment. . . .

Why, one might even suppose that playing RPGs makes some use of those archaic traits and impulses.

1 Like