Killer robots, even AI ones, are banned in “Flat Black”

I am not of the opinion that PCs belong on the battlefield — especially after the invention of artillery — and I seldom design adventures or campaigns that lead them there. So I came to specify that the Empire in Flat Black employs a corps of about 450,000 space marines, and ran Flat Black campaigns for three years without ever describing their combat equipment. When eventually I got around to doing so, several players and fellow-travellers were distinctly underwhelmed. As one rather savvy fellow pointed out, it’s hard to think of any problem on the high-tech battlefield for which foreseeable technology offers no better solution than a man in an armoured suit.

My reply is that in Flat Black the Empire, many other highly-developed and influential parties, and the interstellar law that they negotiated have a particularly severe hang-up about weapons that kill indiscriminately or of their own accord. A lot of those parties are not especially happy about people killing, but all of them put autonomous weapons that make their own decisions about whom and whether to kill in the same moral and legal category as nukes, poison gas, infectious weapons, and (as was becoming the case at the time) antipersonnel land mines¹. The Empire considers killer robots to be as illegal, immoral, and outrageous as poison gas, and therefore will not field an “army” of autonomous weapons. When it comes to open combat, and when the heavy gear is broken out from Imperial armouries, the Imperial Marines don their battle rattle and go along to command and control the machines.

Today I came across the following item in The Economist, a review of a book about the need to include humans in a role of meaningful oversight when future robotic weapons come into use:

¹ I suspect that we’re not too terribly far from considering naval mines in the same light as land mines.

So there are no AIs which count as ‘people’ in Flat Black?

Linda Nagata’s The Last Good Man is near-future military SF set against a backdrop of real soldiers and real pilots being squeezed out as the tech becomes better and better at doing their job. I enjoyed it a lot, though it never addresses the question of “Can any of this autonomous tech be hacked?”


Very few. One of the PCs in the first campaign ended up in a marriage-like relationship with an AI robot who had been built as an unsuccessful¹ pilot program to achieve immortality through mind emulation. He was programmed to be capable of suffering and joy, and everyone treated him as though he counted as a person. But artificial human-like minds are rare, and therefore expensive, because there isn’t really a lot of use for them, given how plentiful humans are. The artificial intelligences that are programmed are made for particular purposes that human cognition cannot perform well, and are therefore commonly very, very un-humanlike. Most aren’t capable of suffering, because why would anyone make them so?

You see, intelligence doesn’t exist. Also, there is no Self.

Human cognitive function is not a single or integrated process taking place in the mind. Studies of people with localised brain injuries, of the results of temporarily activating or de-activating particular parts of the brain (e.g. with magnetic fields), split-brain experiments, and so forth, show that the human “mind” consists of the aggregation of a number of more-or-less separate processes separately localised in the brain. The search for a common “general intelligence” that is involved in all of them seems doomed, and it is now clear that the human mind does not work by applying a general faculty of abstract intelligence to diverse problems². Rather, there is in the human brain a considerable collection of different faculties that deal with different subjects.

Those faculties are adapted to managing human life in the ancestral environment. They are good for hunting prey, keeping track of resources in an environment that changes with recognisable trends, making stuff with the hands, specialising and exchanging, managing social relationships, managing a sex life, negotiating, making, keeping, and breaking agreements, attracting the good opinion or compliance of others, conserving attention, conserving memory, running, throwing things, fighting, deterring trespasses and betrayals, conserving energy, rationalising, judging people, and deflecting blame. The part of the human mind that appears to be the Self is just one of these. Its function seems to be to present an account of our motivations and behaviour to other people. There is reason to believe that it confabulates.

If you build an artificial intelligence for any purpose other than emulating a human mind (which is a pointless stunt) most of these faculties will be useless or worse, and you will leave them out. And for a lot of practical uses AIs will require cognitive faculties that humans just never evolved.

WEIRD³s have a lot wrapped up in why humans count and other animals don’t. We keep shifting the goalposts about it, as chimps, crows, elephants, dogs⁴ etc. are shown to possess the capabilities that we have declared make us unique and important in a succession of fallings-back to unprepared positions. I hope that eventually we will be forced to concede that what matters is the capacity to suffer. But when we program AIs there will be no need to include that.

So in summary: in Flat Black a lot of the things that we expect will require true artificial intelligence capable of abstract reasoning turn out to be done better by pattern recognition and dumb expert systems. High performance on cognitive tasks turns out to be uncorrelated with self-awareness or “sapience”, or even in most practical applications with a capacity for abstract thought. Most useful AIs are highly specialised, utterly inhuman, incapable of suffering or joy, and not equipped to persuade humans that their interests matter. There’s just no reason to program them any other way.

¹ It was unsuccessful in that the uploading process was not fatal, and the original person was not resolute enough to suicide after the procedure.

² There is a lovely demonstration of this involving two problems that are abstractly the same. People find the puzzle difficult when it is posed in terms of cards that have a number on one side and a letter on the other, and usually get it wrong. When the same puzzle is posed in terms of people who might be under or over legal drinking age and whose drinks might be alcoholic or non-alcoholic, people find it easy and most get it right.

³ People whose backgrounds are Western, Educated, Industrial, Rich, and Democratic.

⁴ At the moment we’re up to “self-awareness”, defined as the ability to recognise one’s own reflection in a mirror. Elephants didn’t have it until someone thought of making the mirrors big enough; now they do. Dogs don’t have it at the moment because they don’t react to their reflection having a spot on its face, but I have seen a dog react purposefully to seeing that its reflection was about to get its tail stepped on. So there.
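The two puzzles in footnote 2 are the Wason selection task, and their abstract equivalence is easy to make concrete in code. The sketch below (my own illustration, with hypothetical card and patron values) shows that both framings instantiate one rule, “if P then Q”, and that the items to check are exactly those showing P or not-Q:

```python
# Wason selection task: verifying "if P then Q" means inspecting
# every item that shows P, or that shows not-Q. Both framings of
# the puzzle reduce to the same function with different predicates.

def must_check(faces, is_p, is_not_q):
    """Return the visible faces that must be turned over / questioned."""
    return [f for f in faces if is_p(f) or is_not_q(f)]

# Letter/number framing: "if a card has a vowel, its other side is even".
cards = ["A", "K", "4", "7"]
flips = must_check(
    cards,
    is_p=lambda f: f in "AEIOU",                      # vowel showing
    is_not_q=lambda f: f.isdigit() and int(f) % 2,    # odd number showing
)

# Drinking-age framing: "if drinking alcohol, the patron must be over 18".
patrons = ["beer", "cola", 35, 16]
checks = must_check(
    patrons,
    is_p=lambda f: f == "beer",                        # alcohol showing
    is_not_q=lambda f: isinstance(f, int) and f < 18,  # underage showing
)

print(flips)   # ['A', '7']
print(checks)  # ['beer', 16]
```

Most people wrongly flip the “4” (confirming Q) in the first framing, yet effortlessly check the sixteen-year-old in the second, even though the logic is identical, which is the point about domain-specific faculties.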


I have read over the above reply, and it seems very disorganised and ill-expressed.

I’m sorry. I seem not to be very well.

I have said to a dog of my acquaintance “dogs don’t have object permanence!”. Apparently this one does.

The more I learn about consciousness, the more likely it seems that it’s a useful trick played by the mind on itself.


I don’t quite agree with that model, or not all of its details.

As a physicalist (I would say “materialist,” but physical science’s paradigm hasn’t been “matter and motion” for a century or so), I don’t believe in a unitary inner self, but I also don’t believe in a multiple inner self. When I use the word “self” I mostly mean me as a physical entity. And I fairly clearly am a unitary being physically.

It seems to me that the explanation for our sense of inner unity must involve both our use of language to communicate, and even more, our use of language as a mechanism for expressing and channeling our (and each other’s) mental processes. Language as we use it is a fairly unitary and linear channel with small bandwidth. (The sense of self of a person whose native language was signed rather than spoken might be slightly different, though a lot of signing seems to require that the high-resolution area of the retina be focused on the signer’s hand, which has some of the same effects, even though the retina has hugely greater bandwidth.)

Given evolutionary continuity, it seems as if there would have to be nonhuman animals that had some capabilities that go into making the peculiarly human linguistically mediated self-awareness possible. Other apes, elephants, corvids, parrots, and possibly coleoids all appeal to me as candidates for uplift. (My Call of Cthulhu GM refuses to eat cephalopod flesh.) I suspect that they might have some such traits. But I wouldn’t be prepared to go from the existence of borderline cases to doing away with the conceptual distinction between “humans” and “nonhuman animals,” or to adopt a Benthamite (or Buddhist) “can they suffer?” ethics.

I do think that human beings have a general purpose problem solving function—but most of that function is language and linguistically mediated thought. It’s just fairly slow and low-bandwidth; a computer can manipulate symbol strings (in a fashion that evolved out of human symbol manipulation) much more effectively than a human being can.

All of that, though, is just philosophical chitchat. I certainly have no problem with envisioning a future where general purpose AI is not a thing, and where “AI” means what it means in the real world, at a somewhat more advanced level. The possibilities of that kind of AI already seem radical enough.

Kahneman in Thinking, Fast and Slow suggests that there are, well, fast and slow mechanisms for solving problems - the fast ones doing best in familiar situations but often confidently giving a wrong answer when confronted with unexpected data (e.g. fuzzy pattern matching), the slow ones requiring conscious effort to invoke. One suspects that early attempts to build an AI would concentrate on the latter, with speed boosts from better hardware than gooey humans, but most of the “this is a person I can talk with rather than a sophisticated expert system” signals would come from the former.


From what I’ve read, early attempts at AI actually undertook to emulate both types of thinking, confident that progress would be made quickly across the board. Then they found out that solving math problems or playing chess was a lot less difficult than climbing a flight of stairs—which seemingly no one had expected; there was an element of “if it can do things that are hard work for high school students, it can surely do things that kindergarteners have figured out.” Apparently a lot of human intelligence involves methods other than processing symbol strings.

There are a lot of problems that we can’t currently reduce to symbol strings. Take that staircase example. First, you have to recognise that you have a staircase, based on visual and/or tactile input. Then you have to estimate the height and relative position of the steps. Then you can start solving the problem of how to move your limbs to get up it, without falling.

My day job is working in a team of people who produce software for representing 3D shapes. This is mostly used for computer-aided design of machines, vehicles, buildings, and so on. I have some idea of how it could be used with visual sensors and a bunch of computing power to form 3D models of a scene, which could then be used to solve the problems of movement. None of this is done with symbol strings, because nobody has ever managed to invent a general way of representing 3D shapes like that. Everyone who comes into the field tries to, sooner or later, but nobody succeeds. If there’s a way of doing it, it is not obvious.

AI researchers don’t seem to be trying to solve that problem at present: They get far better practical results for less effort by forming statistical models of human behaviour and using those to display advertising.

The one I used to see a lot of articles about when I edited computer science journals was lifting and stacking packages. Postal organizations are interested in this, for obvious reasons. I get the impression a definitive solution has been about five years away since the first AI conference.

I don’t agree with it either. I have explained myself poorly.

I’ll be better in six to ten weeks.

The big problem, I suppose, is that the philosophers, diplomats, and lawyers can’t reach a conclusion about which AIs ought to count legally as making moral choices and being responsible for the resulting actions.

It’s not just a question of devising an operational definition, it’s mainly a question of ethics, and therefore has no answer. Humans are only allowed to kill because of a default grandfathering provision.

Furthermore, a lot of cognition isn’t problem-solving. A lot is process.

“What nervous impulses do I send down my spine right now to maximise the chance that my team will win this game of Rugby without too great a chance that I will be too badly hurt or suffer damage to my reputation?” is a problem. We can tell whether a robot has stacked parcels, walked through a carpark, won a game of Go. Loving Annie, enjoying Puccini, being moved by Brahms, appreciating Goya, and wanting ice cream must necessarily be equally physical, but for the time being we are clueless about how to explain what we actually mean when we ask whether an AI has done those things.

Very interested to read what your refined reply will be, Brett. There are definitely some sections of your answer, in particular the characterisation of the brain-mind relation and of our cognitive faculties, that I would take more or less issue with. I could table some specifics, but given that you have said you are not happy with your own reply, I am happy to wait.

There are some other interesting replies here also; RogerBW has indicated he leans towards the more subjective or transcendental-idealist (or perhaps epiphenomenal) view of consciousness. Me, I am more the resigned physicalist. Resigned in the sense that I am resigned to the fact that we are not likely to have a unifying account of the brain-mind relation using the physical laws of the universe within my lifetime.

For me the notion of general AI, as opposed to application-specific or Weak AI, is appealing, but whether it is even possible (to replicate something like a mammalian mind) comes down to a deeper understanding of the human brain-mind, which is severely lacking. Deep Learning Nets only get you so far; in and of themselves they are relatively useless, but build them with the appropriate reinforcement learning algorithms and you can do something useful. Thinking that you can just bootstrap this process to something more like general AI seems naive.

One last point: the symbol-manipulating computational metaphor of mind is for the most part dead within Cognitive Science as far as I can tell, apart from a few last gaspers. Seeing the mind as some kind of computational engine, though, is very much still alive, and the way we talk about it in Cognitive Science is still replete with terms like processing, resources, channels, bandwidth, memory stores, etc. This is not to say that computable algorithms are not good models for many aspects of cognition; they most definitely are. But these same people applying mathematical models to understand cognition and the mind would be very hesitant to say that the mind is an actual ideal Bayesian observer, for instance.


I learned about this subject from reading eliminative materialists like the Churchlands; I particularly like Patricia Churchland’s motto of “no spooky stuff” for both neuropsychology and epistemology. One thing I learned from them was the idea that most traditional philosophy envisioned the mind as a propositional engine, something that perceives that, knows that, imagines that, remembers that, desires that, and so on. And of course we express it that way when we say, for example, that the cat “thinks that” the toy mouse is on the sofa; but I’d always taken that as a metaphor—the cat has no language and doesn’t “think that” anything expressed in language (whereas language really pervades what humans do, by and large). But I came to the conclusion that the consciousness that eliminative materialism wanted to eliminate was not what I meant by “consciousness” in the first place.


My introduction to cognitive philosophy was Douglas Hofstadter’s Gödel, Escher, Bach, which I followed up with Hofstadter & Dennett The Mind’s I. Then I took an intermediate philosophy class at UNSW in 1983: Computers, Brains, and Mind — foundations of cognitive science. And later I had an extended conversation by e-mail with Marvin Minsky. I don’t like to subscribe to any -isms, because they always seem to end up with me being called on to defend something that I don’t agree with, perhaps even haven’t thought about. But I did agree with Minsky quite a lot.


-isms can be tough to live up (or down) to, but they do help clarify things. There is a fairly famous version of an -ism argument by Nagel. In Mortal Questions he queries the (at the time) very popular view of the mind-body problem. To grossly paraphrase: do you accept the following four premises?

  1. Material composition (or a commitment to materialism)
  2. Non-reductionism, the view that mental properties cannot be reduced to physical properties (perhaps due to supervenience)
  3. Realism about mental properties e.g. mental properties exist (not idealism)
  4. Non-emergentism, there are no true (causally) emergent properties of systems with sufficient complexity (like the brain)

If so (and many people did), then you come to the troubling conclusion of some kind of property dualism, commonly known as Panpsychism, where every atom in the Universe has mental properties (of some kind).

Anyway, whether there can be real (general) AI or not strongly depends on your philosophical views. I guess it really comes down to what commitments (or caveats or fiats) Flat Black as a setting makes. Brett?

I’m not sure what all of those premises mean. I’ll set aside material composition; I prefer to say “physical” rather than “material,” as my brain’s activity involves electrical potentials that are created by an exchange of virtual photons, and I’m not sure that virtual photons are “matter,” but I don’t think that’s fundamental. But both non-reductionism and non-emergentism confuse me.

Take something I think I understand: The physics of heat. On one hand, there’s the Joule/Maxwell theory, which says that heat is random molecular motion, and temperature is the mean kinetic energy of such motion; when I put my hand in hot water, the impacts of the water molecules cause the molecules of my hand to move faster (“raise its temperature”). On the other hand, there’s the caloric fluid theory, according to which heat is a substance that’s stored inside physical objects, and when I put my hand in hot water, caloric fluid flows from the water into my hand. Those seem to be the only alternatives. Having 2 and 4 be separate premises seems to suggest that there can be four fundamental alternatives; but I don’t think there can.

In Nagel’s terms, is the caloric theory non-reductive? Is it non-emergentist? And is statistical mechanics reductive and emergentist? Is it possible to describe theories of heat for all four sets of options?

It seems to me that panpsychism, like vitalism in biology or the caloric theory in thermodynamics, is just wrong. That is, I have a “no spooky stuff” view of reality. But I don’t think I have a grasp of what Nagel’s terms mean, or why he thinks there are two separate premises.

But in any case, it seems to me that a lot of classic SF has spooky stuff: The psionic powers of the Lensman books, the Force of Star Wars, and so on. But more recent SF is more likely to forgo spooky stuff, and assume that it’s all physics.


I don’t think our ‘sense’ (usually unquestioned assumption) of inner unity is mysterious, it just recognises that we’re flesh-connected, skin-bounded, mobile, oriented physical objects that both perceive and interact with other such objects as well as other kinds of objectively LESS individuated stuff. It’s a handy rule of thumb.


But to perceive all of those different aspects of your physical being, and combine them into one perceived entity, takes certain features of neural organization and information processing that not everything has, even among organisms with sophisticated nervous systems. See for example experiments on self-recognition in a mirror; being able to identify that visual image as “oneself” requires the ability to integrate a visual perception with a kinesthetic one. Or consider the octopus, which has more than 50% of its neural mass out in the tentacles, each of which processes a large volume of information, and sends only selective reports back to the brain; it has a big case of its left hand not knowing what its right hand is doing. Octopuses may well have much less sense of inner unity than humans, even though everything you say about humans is also true of them.