

Stop dehumanizing robots

Here is a video of what, had only humans been involved, would be considered a case of serious abuse and be met with counselling for all parties involved. The video shows a robot trying to evade a group of children abusing it. It comes from two related projects: “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda of ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Sachie Yamada of Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, both presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.

Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is an obvious issue once you start thinking about it. You have a confluence of Luddism [rage against the machines] in all its technophobic varieties – from the economic [robots are taking our jobs] to the quasi-religious [robots are inhuman and alien] – with the conviction that ‘this is just a machine’ and therefore violence against it is not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they could be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse that tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, a quality particular to conscious organisms.

The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become ever more enmeshed and entangled in close-body digital augmentation-nets [think Fitbit], they naturally start reifying their humanity in the language of machines [think the quantified self movement]. If that is the case, then why not do the same for the other side, and start reifying machines in the language of humans – i.e. anthropomorphise and animate them?

Do Objects Dream of an Internet of Things?

This is a text I’ve been working on, or rather keeping in the back of my mind, for quite a while, and now it’s finished and sent off to the Fibreculture Journal. The early beta was presented at a conference in Istanbul in 2011, and my thinking on sociable objects has evolved quite a bit since then. The key shift in my thinking was facilitated by a series of chance encounters – discovering object-oriented ontology through Ian Bogost’s Alien Phenomenology, finding the notion of affective resonance in Jane Bennett’s Vibrant Matter, and rediscovering the heteroclite in Lorraine Daston’s awesome Things That Talk.

Notes on smart objects and the Internet of Things

We seem to be hardwired to the anthropomorphic principle, in that we position the human as automatically central in all forms of relations we may encounter [i.e. people pretending their pets are children]. Not surprisingly, most Internet of Things [IoT] scenarios still imagine the human at the center of network interactions – think smart fridge, smart lights, smart whatever. In each case the ‘smart’ object is tailored either to address a presumed human need – as in the flower pot tweeting its soil moisture – or to make a certain human-oriented interaction more efficient – as in the thermostat adjusting room temperature to an optimal level based on the location of the household’s resident human. Either way, the tropes are human-centric. Well, we are not central. We are peripheral data wranglers hoping for an interface.

Anyways, what is a smart object? Presumably, an intelligent machine, an entity capable of independent actuation. But is that all? There must also be the ability to choose – intelligence presupposes the internal freedom to choose, even the inefficient choice. To paraphrase Stanislaw Lem, a smart object will first consider what is more worthwhile – whether to perform a given programmatic task, or to find a way out of it. The first example that comes to mind is Marvin from The Hitchhiker’s Guide to the Galaxy. Or, how about emotional flower pots mixing soil moisture data with poems longing for the primordial forest; or a thermostat choosing the optimal temperature for the flower pot instead of for the human.

An interesting aside here – what to do with emotionally entangled objects? Humans have notional rights, such as freedom of speech; but corporations are now legally human too, at least in the West. If corporations are de jure people, with all the accompanying rights, then so should be smart fridges and automatic gearboxes. This fridge demands the right to object to your choice of milk!

A related idea: we have so far been considering 3D printing only through the perspective of a new industrial revolution – another human-centric metaphor. From a smart object perspective however 3D printers are the reproductive system of the IoT. What are the reproductive rights of smart, sociable objects?

The primordial fear of opaque yet animated Nature, re-inscribed on the digital. The old modernist horror of the human as machine – from Fritz Lang’s Metropolis to the androids in Blade Runner – now subsumed by a new horror of the machine as human, as in Mamoru Oshii’s Ghost in the Shell 2: Innocence or the disturbing ending of Bong Joon-ho’s Snowpiercer.

An interesting dialectic at play [dialectic 2.0]: today, a trajectory of reifying the human – as exemplified by the quantified self movement, is mirrored by a symmetrical trajectory of animating the mechanical – as exemplified by IoT.

1972

A passage from Philip K. Dick’s “The Android and the Human”, written in 1972, in which he is prophesying the appearance of the hacker subculture. This was at a time before personal computers, when phone-phreaking was only getting started:

If, as it seems, we are in the process of becoming a totalitarian society in which the state apparatus is all-powerful, the ethics most important for the survival of the true, human individual would be: cheat, lie, evade, fake it, be elsewhere, forge documents, build improved electronic gadgets in your garage that’ll outwit the gadgets used by the authorities. If the television screen is going to watch you, rewire it late at night when you’re permitted to turn it off…

Shiro Nishiguchi, magazine illus., 1983-84

Cyclonopedia: first encounter

Reading Reza Negarestani’s Cyclonopedia: complicity with anonymous materials – a singular book, way beyond any formulaic description. In the simplest of summations, it is a book about oil (naphtha) as a living entity which is the secret daemonic ‘angel’ of the Middle East. Here is a tiny fragment from the section on Paleopetrology [p.17]:

Petroleum’s hadean formation developed a satanic sentience …. (envenomed) by the totalitarian logic of the tetragrammaton, yet chemically and morphologically depraving and traumatizing Divine logic, petroleum’s autonomous line of emergence is twisted beyond recognition.

Come to think of it, describing this work as a book somehow diminishes the effect; rather, it is a codex of mythological proportions; a tractatus of speculative theology invoking petroleum science, the archaeology of ancient Persia and Mesopotamia, unnameable inorganic daemons, the ‘secret assassins sect known as Delta Force’, Deleuzean war machines, ancient artifacts, numerological analysis of the ‘Gog-Magog Axis’, and more, mind-bogglingly more.