Introduction: Five Theses on the Limitations of AI Systems1

When personalities such as Bill Gates or Elon Musk publicly warn of the dangers of AI systems today, the technical layman is quick to connect these warnings with the dystopias of science fiction. Films such as Terminator, Blade Runner and Ex Machina have shaped the public conception of what AI systems are supposed to be: usually a robot with a humanoid shell and humanoid intelligence, alternatively a virtual software agent like the on-board computer HAL in Stanley Kubrick’s 2001: A Space Odyssey. And it is not only laymen who share this almost regulative idea of our future; many experts believe such AI systems will soon appear, systems that are on the one hand intended to be amazingly like humans while at the same time being much more intelligent than we humans are (Bostrom, 2014; Kurzweil, 2006). If we uncritically adopted this purported scenario, then we would need to fear the depicted AI systems as much as ancient man feared the world of gods (who were portrayed in a similarly human-like form). We should not forget, though, that science fiction stories are nothing more than modern legends. Of course, a critical encounter with legends has always informed the history of ideas. Legends are important to critical thinking. Legends should, however, not be the basis for determining public policy and economic decision-making.

In this article I will argue that AI systems are in no way as “heroic” or “super-intelligent” as contemporary legends suggest. Instead, I intend to describe these systems as they plainly are in their limited processing capability. This objective treatment seems long overdue, since political documents such as the US National Defense Authorization Act (US Congress, 2018) describe AI systems as “human-like” and in doing so come shockingly close to science fiction. Beyond the US Congress, too, the idea has become so ubiquitous that political entities have started to consider giving “human-like” rights to AI systems (“Robot Rights”; Gunkel, 2018). National citizenship has even been awarded to such a system (Hatmaker, 2017), and the idea of a separate AI legal entity has been raised in official documents of the European Parliament (Krempl, 2018). Attributing a human-like nature to AI systems and granting them corresponding rights can, however, damage the freedom and dignity of human beings in a very serious way. New ethical norms in dealing with the new “species” would become necessary in all areas of life. Is it wrong to kick a robot? Are we killing a robot when we deactivate it or shut off its power supply? Would cheating on a partner with a robot undermine the dignity of the human partner? A virtually inexhaustible spectrum of ethical questions would arise, with a corresponding volume of resulting regulations. The present article therefore focuses specifically on the question of whether describing AI systems as human-like is acceptable. I assemble just a few modest comparisons of the cognitive abilities of AI systems with those of human intelligence. I do not doubt that in the future AI systems will possess even more impressive computing power than today. Provided that an AI is trained on high-quality, context-sensitive input data, it could become so powerful that it will not only be useful in narrowly defined application areas but can genuinely enrich human decision processes. However, despite this hope for better performance, AI systems will never become human-like, for the following reasons inherent in their nature:

1. AI systems have little human-like information

2. AI systems cannot react in a human-like manner

3. AI systems cannot think in a human-like manner

4. AI systems have no human-like motivation

5. AI systems have no human-like autonomy

Underlying these five theses is the assumption that computer systems will continue to be produced from inorganic material and will function digitally.2

On the Classic Distinction between Reason (“Vernunft”) and Intelligence (“Intelligenz”)

Let me clarify upfront that I am building on the classic distinction between reason (“Vernunft”) and intelligence (“Intelligenz”); two terms which seem to be frequently confused in our modern world3: For the great philosophers, from classical antiquity until the late Middle Ages, human reason (“Vernunft”) is the ability to collect rational arguments, data and facts, etc., to comprehend them and to combine them in a factually logical manner (Greek: dianoia, Latin: ratio). Normal humans can learn to act with reason, for example to collect information, remember the information, retrieve it and combine it. Humans have for the most part control over this process. In contrast to reason, intelligence (“Intelligenz, Verstand”, Greek: noûs, Latin: intellectus) consists in the ability to truly understand information, to separate what is important from what is unimportant, to abstract, to think further or simply to see what is crucial in a given matter and what is not. Humans have little control over this intellectus: One cannot force oneself to understand something one does not understand4. For this reason we attribute to humans varying degrees of unique intelligence.

Now, how does intelligence manifest itself in contrast to rational reason? First of all, it is experienced by the body: Blood pressure rises in the joyful moment of understanding something or one enjoys flow when one’s own expertise corresponds to the requirements of an assignment (Csikszentmihalyi, 1991). Secondly, it is hardly possible to recapitulate point by point what exactly it was that brought about the moment of understanding or ‘enlightenment’. True understanding, which is immanent in intelligent (not mechanical) action, can only be made tangible (dingfest) to a limited extent, let alone be recapitulated in sharply delineated units of information.5 And third, it must be noted that intelligence – in contrast to reason – is referred to in classic scholarship as “noûs”; the same word as French “nous”, German “wir”, English “we”.6 This common etymological root points to the notion that intelligence is somehow linked to making a connection with the world around us: things, nature or other people, etc. Intelligence is a form of shared understanding.7  When it is impossible to build this kind of shared understanding, we are quick to say: “I cannot make a connection!”, which is intended to express that we are unable to establish this “noûs”.

This concept of “connecting to the world” seems to me central to understanding the difference between human and artificial intelligence.8 It is constitutive of the distinction between reason and intelligence: Reason parses a situation neutrally into individual pieces, objectively analyzes them without bodily reaction and makes decisions which are as clearly understandable as possible on a point-by-point basis. Intelligence, on the other hand, requires us to connect with a thing, requires it to mean something to us. The conclusion of the present article will be that AI systems can exercise reason, but cannot be intelligent. And the reason for this is ultimately that AI systems are not capable of connecting with the world in an intelligent manner; the world “does not mean anything to them”, there is no perceivable “noûs” for them.

AI Systems Have Little Human-like Information

Humans are highly sensitive, integrated body/soul systems (Damasio/Everitt/Bishop, 1996) which are in permanent resonance with their environment (Rosa, 2016). Indeed, humans are such powerful systems that some scientists now argue we can no longer avoid the theory of understanding ourselves as walking quantum computers (Wendt, 2015). Even if I cannot judge this theory’s validity here, I think we can all agree that human beings permanently use their entire body to process optical, acoustic, tactile, gustatory and olfactory information about their environment. “Our perception… is… the product of forms of synesthetic perception specific to the body”, writes Johannes Hoff. Therefore, human-like AI systems would first require a similarly empowered motor-sensory “body” that processes the environment in a similarly complete manner. However, the state of the art is still far away from achieving this. The most powerful super-computers of today can only simulate one to two percent of the neural activity of a human brain, requiring a thousand times more energy to do so than what is required by our biological system (Meier, 2017).9 A human brain has more than 80 billion neurons connected to one another through several hundred trillion synapses. In the past it was thought that humans learned by changing the effectiveness of existing synapses. This idea also served as the foundation for machine learning; accordingly AI systems are said to train “neurons”. Today, however, neuroscience tells us that humans are constantly forming new synapses between the neurons. In other words: The brain is constantly busy rewiring itself. If we are to trust technical sources, up to 40 percent of the synapses on a neuron are replaced daily. Thus a somewhat disheartened author from the field of AI recently concluded: “While it is true that today’s AI techniques reference neuroscience, they use an overly simplified neuron model, one that omits essential features of real neurons, and they are connected in ways that do not reflect the reality of our brain’s complex architecture” (p. 35 in Hawkins, 2017). The fact is that the highly diverse, nuanced and rich human processing of information about our environment is infinitely finer and more flexible than anything we can expect from machines within the foreseeable future and at viable energy costs. However, if the information collected by AI systems is different, and also processed more roughly, how is an AI system then to be “human-like”? With less data, less sensory input and less powerful and completely different processing of information about the surroundings, AI systems will probably continue to appear “awkward” to us for quite some time to come.

That said, AI systems certainly have something to contribute to this world. A large number of sensors which we humans do not have can be built into their housings. And this sensor information can be shared efficiently among AI systems by means of global networking. Depending on the configuration, AI systems can for example measure radioactivity levels, heat or humidity, determine the number of Wi-Fi networks being used in a given area or identify the furniture found in buildings in the vicinity (for example when the furniture is tagged with RFID chips operating at certain frequency bands). AI systems such as mobile robots could (to the extent officially permitted) access the socio-demographic profile of every passing pedestrian in international data markets or ascertain with relative certainty their current mood based on an optical analysis of facial expressions. Even if AI systems thus cannot connect with the living, sensory world in the same way as humans, due to a lack of bodily sensory systems and the absence of human-like lived bodies, they can network with one another and aggregate enormous amounts of data. The result is an entirely independent machine ratio that might be of significance to human life and economies, but it is absolutely not human-like.

AI Systems Cannot React in a Human-like Manner

When humans react to their environment, their intelligence in the sense of “noûs” becomes evident in a very special way. Persons normally establish a connection to the world by reacting directly and emotionally. This is important in all life situations. Take for example an emotionally charged situation in which we witness an injustice. When encountering negatively connotated conditions such as injustice, cruelty, brutality or similar qualities, our language has expressions for our directly experienced sense of negative values. For example, we would say that what we see “gives us a bellyache”, “sends shivers down our spine” or “makes our hair stand on end”. But our understanding is not only guided by such a sense of negative values. Most of that which ultimately gives our lives meaning in the form of values – sympathy, love, friendship, community, security, etc. – manifests itself to us humans in that we feel emotionally attracted to it (or repelled by it) (Scheler, 1921, 2007).

The feeling of attraction or of repulsion is decisive for how we can be with the world.

It will suffice to point out that no computer system and no AI system of any generation possesses a lived body which can experience such sensations or anything similar to them. A housing made of steel does not get stomach pains, and no shiver will run down its spine.10 A humanoid robot might be able to say that it has stomach pains. The reaction of the machine then consists in this language output. However, this simulation of a similarity to humans does not make the AI system truly human-like. People might appreciate the simulation. But simulation does not mean ‘being human-like’. It means ‘pretending to be human-like’. Scholars will agree that there is a difference of ethical import between an entity that ‘is’ like something and one that merely ‘pretends’ to be like it.

Another interesting aspect of this example of simulated bellyaches and spine shivers is that true similarity to humans is often not even desirable for AI. Epley, Waytz and Cacioppo (2007) show that many people appreciate anthropomorphisms solely because of feelings of loneliness. Wouldn’t it be nice to have a highly empathic AI friend that is at the same time so neutral that we feel comforted by the interaction? Interaction that follows the rules of empathy, but that is in reality emotionless, increasingly appears to be the ideal for a generation whose trust in humanity is at a nadir.11

In my view this lack of trust does no justice to human nature: The human ability to be empathic correlates with the activity of mirror neurons, which allow us to be highly social beings that feel with their environment and thereby are able to truly care for it (Jenson/Iacoboni, 2011). AI systems, on the other hand, do not possess mirror neurons and therefore cannot care. That said, AI systems could in the medium term make more use of sensors capable of recognizing human emotions and reactions. Computers are already able to precisely measure minute facial signs of emotion, dilation of the pupils or skin reactions. On this basis they can derive relatively reliable conclusions about how a human is feeling at the moment. Furthermore, AI systems can possess the technical skills necessary to react reasonably to such observations.12 I intentionally use the term “reasonable” here, because the computer system is not capable of actualizing a “noûs” and of emotionally connecting with the human counterpart. It can, however, calculate a rational “reason”-based model of the human counterpart and then perform certain predetermined or learned reactions. The question is whether we want to assess such machine reactions as “intelligent” in a human sense, because in truth they are just rational.
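To make the distinction concrete, the following sketch illustrates what such a rational “reason”-based model amounts to in practice. All thresholds, labels and reactions are hypothetical and grossly simplified; the point is only that the system maps measured signals to a state estimate and then retrieves a predetermined response – it calculates and looks up, it does not feel.

```python
# Illustrative sketch (hypothetical thresholds and labels) of a "reason"-based
# model of a human counterpart: sensor readings are mapped to a coarse emotion
# label, and a canned reaction is retrieved. Nothing here feels anything.

def estimate_state(pupil_dilation_mm, skin_conductance_uS, brow_furrow_score):
    """Map sensor readings to a coarse emotion label (toy rules)."""
    if brow_furrow_score > 0.7 and skin_conductance_uS > 8.0:
        return "distressed"
    if pupil_dilation_mm > 5.0:
        return "alert"
    return "neutral"

CANNED_REACTIONS = {
    "distressed": "I am sorry to hear that. Would you like to take a break?",
    "alert":      "You seem very focused right now.",
    "neutral":    "How can I help you?",
}

def react(pupil_dilation_mm, skin_conductance_uS, brow_furrow_score):
    state = estimate_state(pupil_dilation_mm, skin_conductance_uS, brow_furrow_score)
    return CANNED_REACTIONS[state]

print(react(5.4, 9.2, 0.8))  # -> a scripted expression of sympathy, not sympathy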

What makes a human “intelligent” is that s(he) is ‘one’ with the world, understanding the environment in an emotionally intelligent way. For this s(he) needs self-consciousness. S(he) needs to be able to consciously react to the surroundings in the sense of connecting. If the connection is not conscious, one would agree that it is not intelligent, but sub-conscious or merely intuitive (like an animal). In the case of AI systems, it is precisely this aspect of consciousness that is missing: The AI system lacks a conscious self through which it could connect to its human counterpart, and thus every AI reaction is, ironically, “self-less”. While this is what many AI proponents like, it is not human-like.13

AI Systems Cannot Think in a Human-like Manner

When computer systems “think”, what they are actually doing is calculating. Every computer system, including all forms of AI, is based on data which has been encoded, processed, classified in databases, structured, functionally integrated, ideally described with metadata, and perhaps linked to a standardized ontology. What is frequently described as an AI-specific functionality, say, machine learning (e.g. with deep neural networks), is a part of precisely this data processing architecture. This functionality makes it possible not only for raw data to be stored as information, but also for it to be meaningfully “represented”, and for these representations even to change and evolve. AI systems can recognize patterns and adapt to them, for example to the patterns of our language. And they can then, in combination with knowledge developed in the field of Linguistics, build (synthesize) constructed models which allow them to recognize, analyze and perform acts of speech.
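For readers who have never looked inside such a system, the following minimal sketch shows what machine “learning” boils down to: encoded inputs, numeric weights and a repeated calculation that adjusts those weights until the desired outputs appear. The data, labels and learning rate are invented for illustration; a real deep neural network is vastly larger, but the principle remains calculation, not understanding.

```python
# A minimal learning loop (a single artificial "neuron"), with invented data.
# "Learning" here means nothing more than nudging numeric weights until the
# encoded inputs map to the labels a human has provided.

# Each example: (encoded input features, human-assigned label)
training_data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 0), ((1.0, 1.0), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

for epoch in range(20):                       # repeated passes over the data
    for x, target in training_data:
        error = target - predict(x)           # how far off the calculation was
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error         # adjust the numbers, nothing more

print([predict(x) for x, _ in training_data])  # -> [0, 0, 0, 1]
```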

This aggregation can be continuous in AI-supported data processing. Stored information and representations constantly change with the inflow of new data. If we observe visualizations of such dynamic datasets live on the monitor, we could get the impression that this flow of data and changing information objects have a liveliness of their own, a liveliness I refer to as “synthetic existence”. This synthetic existence is impressive when seen at work. “It’s blinking, it’s alive!” (Gehring, 2004), might be the astonished reaction of the impressed observer of such a system. But with all due enthusiasm, we should not equate these blinking visualizations with human-like existence. It is not life itself, but rather an observed instantiation of the real phenomena of life (with many layers of logical abstraction in between). The artificial representation and its underlying reality should never be conflated.

The difference between human thinking and artificial information processing is that humans as a rule do not calculate using data with the help of some model. I know of course that in Psychology and in Economics there is a long-established tradition of modelling human decision-making in this way. We also – regrettably – still use the idea of “homo oeconomicus” to represent humans as a kind of calculating “preference-optimizer”. In Psychology we use models such as the Theory of Reasoned Action14 to explain how humans act. There are innumerable models describing human thinking. But every reasonable scientist who works with these models, which parse human decisions into boxed constructs, also knows that they represent hardly more than heuristics of human thinking. This does not make models superfluous. Heuristics are scientifically important for our understanding of ourselves, of society and of the cosmos at large. But they are not capable of completely representing or reliably predicting human behavior. Those who don’t like this view are gently reminded of the coefficients of determination and magnitudes of error associated with every statistical analysis. Only in the rarest of cases does a human in a decision-making situation minutely fragment the relevant aspects into individual components, recalculate these components with appropriate weightings, or follow identical patterns.

Instead it seems certain that humans normally perceive and remember their environment in non-summative, holistic entities, at least given that the two hemispheres of the brain work together in a healthy manner (McGilchrist, 2009). Here the right hemisphere, which is responsible for holistic perception, interacts with the left hemisphere, which structures that which has been perceived (for example through the speech center) (McGilchrist, 2009). As early as the beginning of the 20th century, Edmund Husserl used the classic concept of “noemata” to describe the holistic nature of our thinking (Husserl, 1993). Noemata allow us as humans to grasp the meaning structures (Sinngestalten) of a phenomenon; we do not create these structures from individual data items in weighted calculations, but rather we realize them, or they become intuitively real. Based on this knowledge, in Neuroscience and memory research we speak of “autonoetic consciousness” when describing human episodic memory (Baddeley/Eysenck/Anderson, 2015).15 In much of our thinking we humans actualize that which we observe as noemata to which we have given names. Following this reasoning, we could say for example that a human can share an idea of what it means to be good.16 When an incident occurs in the surroundings where someone behaves in a good way, then we humans recognize this immediately. An AI system on the other hand has no shared idea of what is good. It can be trained to recognize a sequence of events which human beings have labelled as “good” or “right” and can therefore incorporate the (learned or determined) rule that a good human stops at a red traffic light. But the AI system then recognizes only this one manifestation of what is good (to stop at the red traffic light). If someone then runs past a red light in order to rescue a child, the AI system will calculate that the act is not good, unless it has already learned exactly this sequence before (or has shared it with other distributed AI systems). People, on the other hand, are immediately capable of recognizing the idea of goodness in the rescue scenario. In other words: an AI system follows a bottom-up “pattern recognition theory of mind” (Feser, 2013). The human on the other hand uses top-down recognition of noemata (such as the idea of what is good), which cannot be expressed in terms of data points, but only as a holistic form of being. This dynamic of thinking allows our human species to deal easily with the unstructured tsunami of motor-sensory, optical, olfactory and tactile environmental stimuli. It requires no compiling or translation into data units, no pre-processing, no “training” for every single pattern, no predefined database fields, no ontologies, etc. whatsoever.
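The traffic-light example can be made explicit in a few lines. In the following hypothetical sketch, situations are encoded as feature tuples and the system has been trained on exactly one pattern labelled “good”; the encoding and the scenarios are invented, but they show why the rescue scenario falls outside the learned pattern and is therefore scored as “not good”.

```python
# Hypothetical encoding of the traffic-light example. The system "knows" only
# the single pattern that was labelled "good" during training; every unseen
# situation is judged against that pattern alone.

# Situations encoded as (stopped_at_red_light, crossed_on_red, child_in_danger)
LEARNED_GOOD_PATTERNS = {
    (True, False, False),            # "a good human stops at a red traffic light"
}

def judge(situation):
    return "good" if situation in LEARNED_GOOD_PATTERNS else "not good"

routine_stop    = (True,  False, False)
rescue_scenario = (False, True,  True)   # crossing on red to rescue a child

print(judge(routine_stop))       # -> good
print(judge(rescue_scenario))    # -> not good: this sequence was never learned
```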

Of course, I’m aware that some scientists (from a variety of disciplines) don’t like this phenomenological description of how we humans think. Thinking in noemata? No measurable and finely delineable information units that can be added up? This discomforts them in their well-structured and controlled view of the world. I can relate to this unease. After all, the dominant portion of scientists still hangs on to what theologian Johannes Hoff calls “Legobrick-metaphysics” (“Baukästchenmetaphysik”). He writes: “Just as Johannes Gutenberg ‘put together’ his printing plates using movable types, Kant’s ‘I’ also ‘synthesizes’ perceivable objects from ‘many and varied’ sensory impressions”.17 According to Hume the synthesizing ‘I’ can even be reduced to a “bundle of different conditions”18 (Hoff, 2020). This, however, appears to be a model of human thinking that is on its way out. If we follow the latest neuroscientific findings, which are embedded in an intelligent tradition of the Humanities, then the following becomes clear: “If it can be referred to as a ‘machine’ at all, then the brain is not a ‘synthetization machine’ or ‘projection machine’, but rather an ‘inhibition machine’ or ‘selection machine’. The brain does not generate the ‘mental’, but rather contracts it in interaction with other organs and corresponding environmental impressions which limit the realm of possibility of the knowable and perceivable to more or less discrete forms”.19

The difference between machine-based synthesizing and thinking as a human has ethical implications. AI systems can only process data for which they have the corresponding technical representations (i.e. for which they have been “trained”). They will again and again make the most ridiculous errors as long as they do not have a representation for all conceivable situations; especially those that are improbable. They will make incorrect judgements as soon as they are confronted with a situation which was not included in “training”. This, however, can be a fatal flaw, since all of human life is ultimately a sequence of context-sensitive, non-identical repetitions.20

Since AI systems can make errors which can be more than inconvenient, even dangerous, for the humans affected, today we find almost exclusively what are called “narrow AIs”. These AI systems are trained for a strictly defined, closed context, in which they can learn the theoretically possible data patterns and even recognize details and anticipate possible developments which humans often fail to see. This, however, once again illustrates why an AI system is not human-like at all. AI suffers from what is called “underfitting” in the open, general, non-identically repeated context of life shared by humans and groups. And it is more precise and more foresightful than a human in closed contexts with repeating patterns.

If one does not understand this difference between human thinking and the data processing performed by AI systems, the result can be an ethically problematic use of the technology. Wherever the complex individual life situation of a human being is at stake, the use of AI is problematic. One example is the question of whether a given individual is and will remain a criminal, will commit or has already committed a felony, will perform well at a given university or in a given job, etc. The non-identical characteristics of individual humans in their non-identical repetition of life situations, which they live out in non-identical contexts, are so unique that it is impossible for an AI system to grasp them. AI systems used in the midst of such general life terrain are in permanent danger of underfitting reality.

AI Systems Have no Human-like Motivation

In science fiction AI systems always become exciting when they set their own goals; for example, when the on-board computer HAL in Stanley Kubrick’s film 2001: A Space Odyssey starts to outwit the crew of the spaceship. When humans consciously set themselves goals, they do so because an action, the result of the action or a mode of being appears meaningful and/or valuable. Often such notions are however not defined objectives, but rather values which exert a certain attraction on humans and then motivate them to take action.21 Motivational research and behavioral research have been examining these mechanisms in detail for many decades now, speaking of “motives” which are formed in humans generally, contextually or on a situation-specific basis (Vallerand, 1997). A human may also have a general propensity towards a particular behavior. McClelland distinguishes for example between humans with a relatively pronounced tendency towards power, achievement or affiliation (McClelland, 2009). Such intrinsic motives take effect again and again in recurring contexts (leisure time, family, studying). They show themselves in forms of curiosity, the desire for order or a certain idealism (Reiss, 2004). There are also entirely situation-specific motives, such as the desire to win or the desire to be left alone. Psychology generally assumes that such motives shape human behavior.

Given these human motives, the question arises as to how one could teach an AI system such emotionally charged values as power, affiliation, joy of achievement or idealism. AI systems have neither mental access to these terms in the form of noemata, nor do they have a lived body which could convey to them the sensory value of these motives. And even if this were the case, the best theoretician would not be able to precisely model a motive such as power or the need for peace or calmness.

Psychology and AI do meet, however, in the context of simplifying models of human ratio. Take for example Expectancy Valence Theory. It postulates that humans enter into a kind of forethought in which they calculate whether a certain behavior will contribute to the fulfillment of desired outcomes (Vroom, 1964). Expectancy valence functions might be a fascinating basis for the optimization of an AI system. However, an AI system is only capable of representing what psychologists call “extrinsic” motives, for example an amount of money which is to be obtained through a given behavior. AI systems in financial markets are trained with such monetary goal functions. However, as soon as one moves out of application contexts in which extrinsic motives are put into a simple maximization logic and into the normal spheres of human life, where intrinsic motives are pursued for their own sake, AI systems become ineffective, because they cannot relate to the idea of something that is simply desirable for its own sake.
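To illustrate how naturally an expectancy-valence style calculation lends itself to machine optimization, consider the following hedged sketch. The candidate actions, success probabilities and monetary valences are invented; a trading system would plug in its own estimates and simply pick the maximum. What is being maximized here is an external payoff, not an intrinsic motive pursued for its own sake.

```python
# Invented example of an extrinsic, expectancy-valence style goal function.
# Motivational force is approximated as expectancy (estimated probability of
# success) multiplied by valence (the monetary value of the outcome).

candidate_actions = {
    # action: (expectancy, valence in EUR)
    "buy_bond":   (0.90,   50.0),
    "buy_equity": (0.55,  400.0),
    "do_nothing": (1.00,    0.0),
}

def motivational_force(expectancy, valence):
    return expectancy * valence            # the core of the calculation

best_action = max(candidate_actions,
                  key=lambda a: motivational_force(*candidate_actions[a]))
print(best_action)                         # -> buy_equity (highest expected payoff)
```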

It is therefore deeply misleading when some AI experts take to the slippery slope of attributing such intrinsic motivations to AI systems. Some use terms such as “intrinsic reward” in papers whose titles promise the “intrinsic motivation” of AI systems. But upon closer inspection of how these terms are defined and how they are implemented in the system, this “intrinsic motivation” suddenly deviates substantially from what Psychology understands by the term. Take the work of Jürgen Schmidhuber. In one of his papers he relates intrinsic motivation to a reinforcement learning component in an AI system, which embeds a mathematical goal function. This goal function is maximized by the discovery of new (“surprising”) data patterns. The system then uses this maximization of newness to initiate further actions. This is what Schmidhuber proudly calls “intrinsic motivation” (Schmidhuber, 2010), and he seems to suggest that the AI thereby becomes somewhat human-like.22 But intrinsic motivation in the human sense is not connected to anything new or surprising: Much the opposite! Intrinsic motivation is the (appreciative) experience of something which is non-identically repeated or internally desired as pleasurable.23 Moments such as “experiencing a feeling of belonging” or “being at peace” are states which are perceived as emotionally familiar.24 Furthermore, intrinsic motivation has nothing to do with maximization, as is the case with Schmidhuber’s “reinforcement component”. On the contrary, maximization as a principle is entirely the opposite of acting for the sake of the motive itself. Put concisely: The borrowing of psychological terms by computer scientists such as Schmidhuber (2010) is totally misleading. Intrinsic motivation in the human sense cannot be created in AI systems.
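For readers who want to see how modest such a mechanism really is, the following strongly simplified sketch (not Schmidhuber’s actual system) computes a “curiosity” style reward as the improvement of an internal predictor on incoming data. The data stream and the predictor are invented; the point is that the “motivation” is nothing but a number measuring surprise reduction, which the system is then programmed to maximize.

```python
# A strongly simplified, hypothetical illustration of a novelty/"curiosity"
# style intrinsic reward: the reward is the improvement of an internal
# predictor after seeing a new observation. Surprising data yields high reward.

class RunningMeanPredictor:
    """Predicts the next value as the mean of everything seen so far."""
    def __init__(self):
        self.count, self.mean = 0, 0.0
    def predict(self):
        return self.mean
    def update(self, x):
        self.count += 1
        self.mean += (x - self.mean) / self.count

predictor = RunningMeanPredictor()
data_stream = [1.0, 1.0, 1.0, 9.0, 9.0, 1.0]   # a "surprising" jump at step four

for x in data_stream:
    error_before = abs(x - predictor.predict())
    predictor.update(x)
    error_after = abs(x - predictor.predict())
    intrinsic_reward = error_before - error_after   # "learning progress" as a number
    print(f"observation={x}  intrinsic_reward={intrinsic_reward:.2f}")
```

Whatever one calls this quantity, it remains an externally specified maximization target; nothing in it corresponds to the felt familiarity that characterizes intrinsic motivation in humans.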

AI Systems Have No Socially Embedded Autonomy

The last major area which supposedly justifies the alleged similarity of AI systems to humans is their potential autonomy. The USA’s Defense Science Board defines the technical autonomy of an AI system as “the capability [of the AI system] to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself and the situation” (Summer Report). This technical autonomy begins after the system has been activated. For example, when a drone is sent on a mission, it can be configured in such a way that once it has set off it acts based on goals it sets itself. Or a power grid can use smart-meter data to independently manage network stability. With this definition the military body decided on the highest degree of what it refers to as “autonomy”: The entirety of control belongs to the machine (Parasuraman/Sheridan, 2000). Here it is necessary to recall Immanuel Kant’s view, according to which one can only send a slave, who is not autonomous at all (!), “on a mission”, because it is not possible for the slave to send himself on a mission. The slave cannot choose the type of mission and can also not refuse the mission. Kantians would therefore refer to the degree of freedom a drone has not as “autonomy”, but as “heteronomy”. Again, computer science uses a term with a highly precise definition in Philosophy and in doing so attributes capabilities to machines which these machines do not have. The exception here is of course those AI systems from science fiction, which indeed choose their own missions and define their own goals. These are referred to as “General AIs”, which may define their own goals according to an unsupervised machine learning process. While such AI systems do not exist at present, it cannot be excluded that they might exist in the future!

But even if such “General AIs” come into being one day, they cannot be human-like. Why not? In “Self-Determination Theory”, Ryan and Deci (building on the motivation research of the 1970s) have since 2000 repeatedly demonstrated how important the three factors of competence, autonomy and relatedness are to humans (Ryan/Deci, 2000). Here autonomy is understood as the possibility of causing one’s own actions, and doing so in such a way that the action is in harmony with oneself; that one does not feel forced by outside influences to initiate certain actions. This, however, does not mean that in living out this autonomy one is completely free from the desires, goals and habits of the group with which one feels associated (Deci/Vansteenkiste, 2004). Just the opposite: The human is a zoon politikon, a social being. This means that a reasonable decision by a human normally takes the social environment into account. Humans live a “socially embedded autonomy”. Their freedom ends where the freedoms of others begin.25 Looking at this tension between one’s own freedom and human “autonomous” decision-making within a social environment, it quickly becomes evident that it is the vulnerability of the human which contributes essentially to the fact that he or she often decides independently to think on behalf of others. S(he) does not decide autonomously as a detached individual – i.e. free of external influences.26 The entirety of the Aristotelian body of Virtue Ethics is concerned with this human topic of maintaining a healthy moderation and not having a negative impact within one’s group through either excess or deficiency in one’s decisions. However, the force which can motivate one to maintain this moderation (or “Golden Mean” behavior) is human vulnerability; the vulnerability of not being recognized by one’s own group, of being rejected, or of being alone.

This is precisely where lived-out human “autonomy” differs very fundamentally from the “autonomy” of an AI system. The latter is understood above all as the possibility of initiating an action based on the system’s own calculations, without obtaining confirmation from an operator. Social concerns are irrelevant to the machine, since the machine is not vulnerable. The machine does not worry about receiving no more electric power or being thrown on the scrap heap, because the idea of death in the human sense cannot be conveyed to the machine.27

Towards a Mindful Definition of AI Systems and Their Delineation from Humans

The discussion on the purported similarity of AI systems to humans has been oriented towards a variety of characteristics which can be used as a basis for defining these computer systems: their physical bodies, the data and its processing, the sources of defined goals, and autonomy. Figure 1 summarizes large portions of the discussion. The left-hand column repeats the manifestations AI systems have in our modern legends, i.e. in science fiction. The middle and right-hand columns, relevant to the present treatment, describe the AI systems which exist in practice or which are at least the subject of serious experimentation. The system properties shown in blue are properties which still require a high degree of research and do not as yet function reliably, for example the processing of unstructured data. The respective design of shells, data, learning methods, goals and degrees of autonomy determines which cognitive processes a given AI system can perform. These characteristics are therefore shown below the AI cognition. Furthermore, a distinction is made as to whether an AI system is software behind a technically autonomous (actually only heteronomous) physical system, i.e. whether one is referring to a purely virtually generated entity or to a hardware system which integrates the characteristics described. There are practical examples of both forms of systems. Purely virtual AI systems are for example digital voice assistants such as Amazon’s “Alexa” or the “Google Speech Assistant”. In contrast, physical systems, such as self-driving cars, have actuators which translate the calculated actions of an algorithm into mechanical movements of a system. In each case it is a good idea always to speak of an “AI system”, since as a rule a large number of algorithms are linked, supplemented by the corresponding databases and the executing (motoric or virtual) system elements. The entirety of an AI system often appears to humans to be intelligent. However, qualification as an AI system does not depend on this characteristic.

Against this background, I want to define an AI system here as a virtual and/or physical integrated computer system which can independently perform a wide range of cognitive functions. These functions are based on (at least partly) unstructured and content-rich datasets. On the basis of cognitive functions that can calculate acts of perception, planning, drawing conclusions, communicating and deciding, such systems are capable of performing effective actions even without human intervention.

If we now take this definition and a collective look at all the areas described in this article in which AI systems fundamentally differ from humans, the question must arise as to how experts come up with the idea of actually attributing to AI systems a similarity to humans. Humans share identical DNA base pair sequences with other mammals. These sequences allegedly match at up to 90 percent between humans and pigs. But nobody would ever think of confusing humans with pigs. And nobody would ever think of defining rights for pigs similar to those defined for humans.

From this critical ethical perspective, the question arises as to whether it is acceptable to liken humans to AI systems, or whether this (now common) practice is actually equivalent to a defamation of the human being. The world of technology, shaped by marketing and hype, pays too little attention to the established use of terms and thus engages in a tightrope walk for which Hastak and Mazis have coined the term “deception by implication” (Hastak/Mazis, 2011).

Nietzsche may playfully say: “Sharp and mild, rough and fine, strange and familiar, impure and clean, a place where fool and sage convene: All this I am and wish to mean, dove as well as snake and swine” (Nietzsche, 1882).

Figure 1: Characteristics of realistic and unrealistic AI systems

Literature

Ajzen, I. / Fishbein, M. (2005): The Influence of Attitudes on Behavior. Mahwah, New Jersey, USA: Erlbaum.

Baddeley, A. / Eysenck, M. W. / Anderson, M. C. (2015): Memory. New York: Psychology Press.

Bérard, B. (2018): “Unmasking ‘AI’”. Blog. https://philos-sophia.org/unmasking-ai/.

Bostrom, N. (2014): Superintelligenz: Szenarien einer kommenden Revolution. Berlin: Suhrkamp Verlag.

Csikszentmihalyi, M. (1991): Flow: The Psychology of Optimal Experience. New York: Harper Collins.

Damasio, A. R. / Everitt, B. J. / Bishop, D. (1996): “The somatic marker hypothesis and the possible functions of the prefrontal cortex”. In: Philosophical Transactions: Biological Sciences, 351(1346), S. 1413–1420.

Deci, E.L. / Ryan, R.M. (2000): “The ‘What’ and the ‘Why’ of Goal Pursuits: Human Needs and the Self-Determination of Behavior”. In: Psychological Inquiry, Vol. 11, No. 4/2000, S. 227–268.

Deci, E. L. / Vansteenkiste, M. (2004): Self-determination theory and basic need satisfaction: Understanding human development in positive psychology. In: Ricerche di Psichologia (27), S. 17–34.

Epley, N. / Waytz, A. / Cacioppo, J. T. (2007). “On Seeing Human: A Three-Factor Theory of Anthropomorphism”. In: Psychological Review, 114(4), S. 864–886.

Feser, E. (2013): “Kurzweil’s Phantasms – A Review of How to Create a Mind: The Secret of Human Thought Revealed”. In: First Things. https://www.firstthings.com/article/2013/04/kurzweils-phantasms.

Fuchs, T. (2016): Das Gehirn – ein Beziehungsorgan: Eine phänomenologischökologische Konzeption (5. Aufl.). Stuttgart: Kohlhammer.

Gehring, R. (2004): “Es blinkt, es denkt. Die bildgebenden und die weltbildgebenden Verfahren der Neurowissenschaften”. In: Philosophische Rundschau, 51, S. 272–295.

Gunkel, D. J. (2018): Robot Rights. Cambridge, US: MIT Press.

Hastak, M. / Mazis, M. B. (2011): “Deception by Implication: A Typology of Truthful but Misleading Advertising and Labeling Claims”. In: Journal of Public Policy & Marketing, 30(2), S. 157–167.

Hatmaker, T. (2017): “Saudi Arabia bestows citizenship on a robot named Sophia”. In: Tech Crunch. https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/.

Hawkins, J. (2017): “What Intelligent Machines Need to Learn From the Neocortex”. In: IEEE Spectrum, 54(6), S. 33–37.

Husserl, E. (1993): Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie (5. Aufl.). Berlin: De Gruyter.

Jenson, D. / Iacoboni, M. (2011): “Literary Biomimesis: Mirror Neurons and the Ontological Priority of Representation”. In: California Italian Studies, 2(1). http://www.neurohumanitiestudies.eu/archive/paper/?id=150.

John S. McCain: National Defense Authorization Act for Fiscal Year 2019 (2018).

Kierkegaard, S. (2005): Die Krankheit zum Tode – Furcht und Zittern – Die Wiederholung – Der Begriff der Angst. München: DTV.

Krempl, S. (2018): “Streit über »Persönlichkeitsstatus« von Robotern kocht hoch”. In: heise online. https://www.heise.de/newsticker/meldung/Streit-ueber-Persoenlichkeitsstatus-von-Robotern-kocht-hoch-4022256.html.

Kurzweil, R. (2006): The Singularity is Near – When Humans Transcend Biology. London: Penguin Group.

McGilchrist, I. (2009): The Master and his Emissary – The Divided Brain and the Making of the Western World. New Haven and London: Yale University Press.

McClelland, D. (2009): Human Motivation. Cambridge, UK: Cambridge University Press.

Meier, K. (2017): “The Brain as Computer – The Brain May be Bad At Crunching Numbers, but it’s a Marvel of Computational Efficiency”. In: IEEE Spectrum, 54(6), S. 27–31.

Nietzsche, F. (1882): Die Fröhliche Wissenschaft. Ditzingen: Reclam.

Parasuraman, R. / Sheridan, T. B. (2000): “A Model for Types and Levels of Human Interaction with Automation”. In: IEEE Transactions on Systems, Man, and Cybernetics, 30(3), S. 286–297.

Reiss, S. (2004): “Multifaceted nature of intrinsic motivation: The theory of 16 basic desires”. In: Review of General Psychology, 8(3), S. 179–193.

Rosa, H. (2016): Resonanz. Eine Soziologie der Weltbeziehung (2. Aufl.). Berlin: Suhrkamp Verlag.

Roth, G. (2001): Fühlen, Denken, Handeln. Wie das Gehirn unser Verhalten steuert. Frankfurt/Main: Suhrkamp Verlag.

Ryan, R. M. / Deci, E. L. (2000): “Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being”. In: American Psychologist, 55, S. 68–78. https://dx.doi.org/10.1037/0003-066X.55.1.68.

Scheler, M. (1921, 2007): Der Formalismus in der Ethik und die Materiale Wertethik – Neuer Versuch der Grundlegung eines ethischen Personalismus (2. unveränderte Aufl.). Halle an der Saale: Verlag Max Niemeyer.

Schmidhuber, J. (2010): “Formal Theory of Creativity, Fun, Intrinsic Motivation” (1990–2010). In: IEEE Transactions on Autonomous Mental Development, 2(3), S. 230–247.

Sennett, R. (2009): The Craftsman. New York: Penguin Books.

Spiekermann, S. (2019a, 23./24. März 2019): “Der Mensch als Fehler”. Süddeutsche Zeitung, 15.

Spiekermann, S. (2019b): Digitale Ethik – Ein Wertesystem für das 21. Jahrhundert. München: Droemer.

Spiekermann, S. (2020): “Digitale Ethik und Künstliche Intelligenz”. In: Philosophisches Handbuch Künstliche Intelligenz. Hrsg. v. Mainzer, K. München: Springer Verlag. (Im Erscheinen).

Vallerand, R. J. (1997): “Toward a hierarchical model of intrinsic and extrinsic motivation”. In: Advances in Experimental Social Psychology, 29, S. 271–360.

Vroom, V. H. (1964): Work and Motivation. New Jersey, USA: John Wiley & Sons Inc.

Wendt, A. (2015): Quantum Mind and Social Science – Unifying physical and social ontology. Cambridge UK: Cambridge University Press.

Footnotes

  1. I would like to thank Prof. Friedemann Mattern, Prof. Johannes Hoff and Jana Korunovska, who evaluated the present paper and discussed their criticisms with me.[]
  2. I am aware of the fact that this assumption is undermined by current experiments which attempt to implement software on organic materials (see e.g.: https://www.pnas.org/content/117/4/1853 or https://royalsocietypublishing.org/doi/full/10.1098/rsif.2017.0937). However, such experiments are in such an early phase of development that they can hardly be taken seriously in academic contexts as of the year 2020; in particular, the ability to verify reactions on the part of organic materials is not compatible with the current paradigms of our computer mechanics and statistics: for example, the predictability, traceability or repeatability of operations.[]
  3. Cf.: “If, in philosophy, there is a ‘before and after’ Immanuel Kant (1724–1804), this is because he has inverted the meaning of intelligence (Verstand) and reason (Vernunft) as understood by all preceding philosophers: from Plato, Aristotle, Plotinus and St. Augustine to St. Thomas Aquinas, Dante, Leibniz, Malebranche, and beyond, all said to labor under an illusion which he alone was able to recognize and dispel! Indeed, in keeping with his conviction that intuition can only be sensible or empirical, he elevated reason to the highest rank among cognitive faculties, capable supposedly of rendering synthetic, systematic, universal and unified intelligibility. Hence intelligence or intellect came to be seen as inferior to reason: a secondary faculty concerned with processing abstractions, endowing sense experience with a conceptual form, and connecting the resultant concepts so as to constitute a coherent structure – until, finally, it turned into discursive knowledge, that is to say, became ‘reason’.” (Bérard, 2018).[]
  4. “We absolutely cannot think what we can’t think” (G.E. Moore).[]
  5. If at all, good teachers manage to explain things clearly using analogies and narratives. Such an explanation usually begins with the words: “Just imagine…”[]
  6. https://en.wiktionary.org/wiki/nous#Etymology.[]
  7. This is why it is so pleasant to hear an intelligent person speak, because we immediately and intuitively recognize that the person is right. It is usually not possible to say why we believe the intelligent person is right, but we share an understanding of reality with the person.[]
  8. Cf.: “Our perception… is… the product of bodily forms of synesthetic perception. At their point of departure, we always find what Aristotelian aesthetics refers to as sensus communis (common feeling). We see ‘bubbling water’, hear ‘bright bell tones’, see a ‘hard impact’, smell ‘the sharp aroma of hay’ – and only later learn to allocate ‘the bubbling’, ‘bright’ and ‘the hard and sharp’ to different sense modalities which are analytically isolated from one another and can allegedly be allocated to ‘elementary’ (auditory, visual, tactile, olfactory or gustatory) ‘prescribed sensory impressions’.” (citation, unpublished manuscript, Hoff, 2020).[]
  9. Cf.: “As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning… The IBM TrueNorth group, for example, recently estimated that a synaptic transmission in its system costs 26 picojoules. Although this is about a thousand times the energy of the same action in a biological system, it is approaching 1/100,000 of the energy that would be consumed by a simulation carried out on a conventional general-purpose machine” (pp. 29 and 31 in Meier, 2017).[]
  10. I am aware of the fact that several scientists such as Daniel Dennett argue that a system’s capacity for resonance (“Resonanzfähigkeit”) does not depend on its material properties. There is however no proof of this argument. The fact is that computers will not have a body capable of resonance within the foreseeable future.[]
  11. See also my article on the poor image of humanity of our time (Spiekermann, 2019a) and the historical sources of this mode of thinking (Spiekermann, 2019b).[]
  12. Note at this point that the technical skills of AI, i.e. technical components which execute certain algorithms, are to be distinguished from what Richard Sennett refers to as “skill” (Sennett, 2009).[]
  13. It should be noted that there are moments in the interaction with robots in which they seem extremely vulnerable, especially because of their selfless reactions, and as a result seem human (cf. Spiekermann, 2019b). Furthermore, it is important to me to note that I naturally do not doubt any person who claims to often act selflessly or altruistically. It is simply normally the case that we inject ourselves into selfless actions. And even in altruistic forms of action our own psyche and motivation play a role.[]
  14. See e.g. “Theory of Reasoned Action” or “Theory of Planned Behavior” (cf. Ajzen/Fishbein, 2005).[]
  15. The autonoetic part of the memory is a part of the long-term memory and the part which reflects the grown personality of a person.[]
  16. In turn, the word “noema”, with the same etymological root as noûs, points us to the shared; that which is understood as shared by the community.[]
  17. According to Kant, cognition is a “Ganzes verglichener und verknüpfter Vorstellungen” (Kant, Kritik der reinen Vernunft, A 97). Its point of departure is a multiplicity given to the senses passively and diffusely. Its synthesis requires the “Spontaneität unseres Denkens” (A77/B102). An object is logically “das, in dessen Begriff das Mannigfaltige einer gegebenen Anschauung vereinigt ist” (B137). Only with that do material judgements become possible which allow us to recognize the world by securing the referential relation of subjective syntheses to things (“Gegenstandsbezug”).[]
  18. Roth (2001), 338.[]
  19. Here with reference to Aristoteles’ aesthetics: Fuchs (2016), 187 sq. as well as Aristoteles, De Anima / Über die Seele, III, 430 sq.[]
  20. Kierkegaard, S. (2005). Die Krankheit zum Tode – Furcht und Zittern – Die Wiederholung – Der Begriff der Angst. München: DTV.[]
  21. See for example Scheler (1921, 2007).[]
  22. A previously unseen data pattern is classified as ‘new’. This ‘new’ is automatically good. The “Reinforcement Learning Component” maximizes its goal function and “rewards” the underlying system.[]
  23. Nota bene: “intrinsic” stands for “coming from within”.[]
  24. Only primitive curiosity is (for many) a motive for which the new in its pure form can be good.[]
  25. Cf.: “To be autonomous does not mean to be detached from or independent of others, and in fact Ryan and Lynch (1989) showed how autonomy can be positively associated with relatedness and well-being. Autonomy involves being volitional, acting from one’s integrated sense of self, and endorsing one’s action. It does not entail being separate from, not relying upon, or being independent of others” (Deci/Ryan 2000, 242).[]
  26. Cf. Spiekermann (2020).[]
  27. An AI system could of course integrate a function which minimizes the risk of deactivation or of being shot. It will then develop behavioral strategies which avoid such outcomes. This limits the autonomy of the machine; however, this limitation is not social, as it is in the case of humans.[]