Aug 10 2015

Here is a video of what, if only humans were involved, would be considered a case of serious abuse and be met with counselling for all parties involved. The video shows a robot trying to evade a group of children abusing it. It is part of two projects: “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Sachie Yamada from Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, both presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.

Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is an obvious issue once you start thinking about it. You have a confluence of Luddism [rage against the machines] in all its technophobic varieties – from economic [robots are taking our jobs] to quasi-religious [robots are inhuman and alien] – with the conviction that ‘this is just a machine’ and therefore violence against it is not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they could be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse this tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, a quality particular to conscious organisms.

The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become more and more enmeshed and entangled in close-body digital augmentation-nets [think Fitbit], they naturally start reifying their humanity in the language of machines [think the quantified-self movement]. If that is the case, then why not do the same for the other side and start reifying machines in the language of humans – i.e. anthropomorphise and animate them?

Apr 02 2014

‘Lucky Catfuck’, Combo Street Art, Hong Kong

Do not assume that order and stability are always good, in a society or in a universe.

Philip K. Dick

Mar 27 2014

A passage from Philip K. Dick’s “The Android and the Human”, written in 1972, in which he prophesies the appearance of the hacker subculture. This was before personal computers, when phone-phreaking was only getting started:

If, as it seems, we are in the process of becoming a totalitarian society in which the state apparatus is all-powerful, the ethics most important for the survival of the true, human individual would be: cheat, lie, evade, fake it, be elsewhere, forge documents, build improved electronic gadgets in your garage that’ll outwit the gadgets used by the authorities. If the television screen is going to watch you, rewire it late at night when you’re permitted to turn it off…

Shiro Nishiguchi, magazine illus., 1983-84

Mar 27 2014

Pete-Something-or-Other

Someday a human being, named perhaps Fred White, may shoot a robot named Pete Something-or-Other, and to his surprise see it weep and bleed. And the dying robot may shoot back and, to its surprise, see a wisp of gray smoke arise from the electric pump that it supposed was Mr. White’s beating heart. It would be rather a great moment of truth for both of them.

Philip K. Dick, “The Android and the Human” (1972)