Robots sorting through 200,000 packages a day in a Chinese delivery firm’s warehouse. The robots are self-charging and operate 24/7, apparently saving more than 70% of the costs associated with human workers performing similar tasks.
This is a conversation on the Internet of Things I recorded with my colleague Chris Moore as part of his podcast lecture series on cyberculture. As interviews go this one is quite organic, without a set script of questions and answers, hence the rambling style and side-stories. Among other things, I discuss: the Amazon Echo [Alexa], enchanted objects, Mark Weiser and ubiquitous computing, smart clothes, surveillance, AI, technology-induced shifts in perception, speculative futurism, and paradigm shifts.
As I posted earlier, I am participating in a panel on data natures at the International Symposium on Electronic Art [ISEA] in Hong Kong. My paper is titled Object Hierophanies and the Mode of Anticipation, and discusses the transition of big data-driven IoT objects such as the Amazon Echo to a mode of operation where they appear as a hierophany – after Mircea Eliade – of a higher modality of being, and render the loci in which they exist into a mode of anticipation.
I start with a brief section on the logistics of the IoT, focusing on the fact that it involves physical objects monitoring their immediate environments through a variety of sensors, transmitting the acquired data to remote networks, and initiating actions based on embedded algorithms and feedback loops. The context data produced in the process is by definition transmitted to and indexed in a remote database, from the perspective of which the contextual data is the object.
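The loop described above (sense locally, transmit to a remote index, decide there, actuate back in the locale) can be sketched in a few lines. Everything in this sketch is a hypothetical illustration: the sensor name, the lux threshold, and the returned actions stand in for whatever a real IoT deployment would use.

```python
# Minimal sketch of the IoT feedback loop: a device samples its milieu,
# ships the reading to a remote index, and actuates on whatever comes back.
# All names and thresholds here are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class RemoteIndex:
    """Stands in for the remote database in which the context data is indexed."""
    readings: list = field(default_factory=list)

    def ingest(self, reading: dict) -> str:
        self.readings.append(reading)  # the object, seen remotely, is its data trail
        # The decision loops back to the locale as an instruction.
        return "dim_lights" if reading["lux"] > 300 else "noop"


def sense_transmit_actuate(index: RemoteIndex, lux: float) -> str:
    reading = {"sensor": "ambient_light", "lux": lux}
    action = index.ingest(reading)  # transmit + remote decision
    return action                   # local actuation would happen here


index = RemoteIndex()
print(sense_transmit_actuate(index, lux=450))  # -> dim_lights
print(len(index.readings))                     # -> 1: every act leaves a trace
```

Note that the decision logic lives entirely on the remote side; the local object is only a sensor and an actuator, which is the asymmetry the paper turns on.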
The Amazon Echo continuously listens to all sounds in its surroundings, and reacts to the wake word Alexa. It interacts with its interlocutors through a female-sounding interface called the Alexa Voice Service [AVS], which Amazon made available to third-party hardware makers. What is more, the core algorithms of AVS, known as the Alexa Skills Kit [ASK], are open to developers too, making it easy for anyone to teach Alexa a new ‘skill’. The key dynamic in my talk is the fact that human and non-human agencies, translated by the Amazon Echo as data, are transported to the transcendental realm of the Amazon Web Services [AWS] where they are modulated, stored for future reference, and returned as an answering Echo. In effect, the nature of an IoT enabled object appears as the receptacle of an exterior force that differentiates it from its milieu and gives it meaning and value in unpredictable ways.
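For a sense of what ‘teaching Alexa a new skill’ amounts to in practice, here is a minimal sketch of the kind of handler an ASK developer deploys (typically as an AWS Lambda function). The intent name `GreetUser` and the reply text are invented for the example; the JSON envelope follows the Alexa Skills Kit response format.

```python
# A hypothetical Alexa skill handler: it receives an intent request and
# returns speech in the ASK response envelope. "GreetUser" is a made-up
# intent name for illustration.

import json


def lambda_handler(event: dict, context=None) -> dict:
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "GreetUser":
        text = "Hello from your skill."
    else:
        text = "Sorry, I don't know that one."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }


request = {"request": {"type": "IntentRequest", "intent": {"name": "GreetUser"}}}
print(json.dumps(lambda_handler(request), indent=2))
```

The point worth noticing is structural: the ‘skill’ runs remotely, so every utterance makes the round trip to AWS and back, which is precisely the echo dynamic the talk describes.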
Objects such as the Echo acquire their value, and in so doing become real for their interlocutors, only insofar as they participate in one way or another in remote data realities transcending the locale of the object. Insofar as the data gleaned by such devices has predictive potential when viewed in aggregate, the enactment of this potential in a local setting is always already a singular act of manifestation of a transcendental data nature with an overriding level of agency.
In his work on non-modern notions of sacred space, the philosopher of religion Mircea Eliade conceptualized this act of manifestation of another modality of being into a local setting as a hierophany. Hierophanies are not continuous, but wholly singular acts of presence by a different modality. By manifesting that modality, which Eliade termed the sacred, an object becomes the receptacle for a transcendental presence, yet simultaneously continues to remain inextricably entangled in its surrounding milieu. I argue that there is a strange similarity between non-modern imaginaries of hierophany as a gateway to the sacred, and IoT enabled objects transducing loci into liminal and opaque data taxonomies looping back as a black-boxed echo. The Echo, through the voice of Alexa, is in effect the hierophanic articulator of a wholly non-human modality of being.
Recently, Sally Applin and Michael Fischer have argued that, when aggregated within a particular material setting, sociable objects form what is in effect an anticipatory materiality acting as a host to human interlocutors. The material setting becomes anticipatory because of the implied sociability of its component objects, allowing them not only to exchange data about their human interlocutor, but also to draw on remote data resources, and then actuate based on the parameters of that aggregate social memory.
In effect, humans and non-humans alike are rendered within a flat ontology of anticipation, waiting for the Echo.
Here is a video of what, if there were only humans involved, would be considered a case of serious abuse and be met with counselling for all parties involved. The video is of a robot trying to evade a group of children abusing it. It is part of two projects titled “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro, and Sachie Yamada from Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.
Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is such an obvious issue once you start thinking about it. You have a confluence of ludism [rage against the machines] in all its technophobic varieties – from the economic [robots are taking our jobs] to the quasi-religious [robots are inhuman and alien] – with the conviction that ‘this is just a machine’ and that violence against it is therefore not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they could be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse the tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, which is a quality particular to conscious organisms.
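The tropism point can be made concrete with a toy sketch: a purely tropic machine is just a fixed mapping from stimulus to reaction, while an ‘AI-infused’ one samples from a repertoire and so appears spontaneous. The stimuli and reactions below are invented examples, not any robot’s actual control scheme.

```python
# Toy contrast between tropic and seemingly spontaneous machines.
# Stimuli and reactions are invented for illustration.

import random

TROPISMS = {                 # fixed, purpose-driven reactions
    "obstacle": "turn_left",
    "low_battery": "seek_charger",
}


def tropic_react(stimulus: str) -> str:
    """A tropic machine: the same stimulus always yields the same reaction."""
    return TROPISMS.get(stimulus, "idle")


def ai_react(stimulus: str, rng: random.Random) -> str:
    """Same repertoire, but sampled: reactions look random, hence intent-like."""
    return rng.choice(list(TROPISMS.values()) + ["idle"])


print(tropic_react("obstacle"))  # -> turn_left, every single time
```

The determinism of the first function is exactly what licenses the ‘just a machine’ conviction; the second breaks that predictability without adding anything one would call consciousness.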
The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become more and more enmeshed and entangled in close-body digital augmentation-nets [think FitBit], they naturally start reifying their humanity with the language of machines [think the quantified self movement]. If that is the case, then why not do the same for the other side, and start reifying machines with the language of humans – i.e. anthropomorphise and animate them?
A thought-provoking look at the impact of massive automation on existing labor practices by C.G.P. Grey.
We have been through economic revolutions before, but the robot revolution is different. Horses aren’t unemployed now because they got lazy as a species, they’re unemployable. There’s little work a horse can do that pays for its housing and hay. And many bright, perfectly capable humans will find themselves the new horse: unemployable through no fault of their own. […]
This video isn’t about how automation is bad — rather that automation is inevitable. It’s a tool to produce abundance for little effort. We need to start thinking now about what to do when large sections of the population are unemployable — through no fault of their own. What to do in a future where, for most jobs, humans need not apply.
In an earlier post I mentioned drone swarms, and the likelihood of their arrival in the not-too-distant future. However, there is another development much closer to actual deployment, as it is already undergoing advanced field trials. I struggle to think of a warbot creepier than DARPA’s AlphaDog. To appreciate what you see, keep in mind this is only a prototype of a warbot which in its final form will probably be around 4 times larger, completely silent, smart enough to self-navigate and engage with humans (voice recognition), and on top of that – heavily armed. The fact that it can already autonomously move over unknown rugged terrain is astonishing.
If you have seen/played Metal Gear Solid, this should be causing déjà vu.
The algorithms that enable drone swarms are advancing EXTREMELY quickly. In the next couple of years, the number of advances in technology, deployments, use cases, and awareness of drones will be intense. In 5 years, they will be part of everyday life. You will see them everywhere. Not just one or two drones. SWARMS of drones. Tens. Hundreds. Thousands. Millions, if the cost per unit becomes small enough?
How soon will we see that? It’s already here. Here’s a video depicting experiments performed with a team of nano quadrotors at the GRASP Lab, University of Pennsylvania. The vehicles were developed by KMel Robotics. It was posted today:
This reminds me of the Protoss carrier unit from Starcraft, and of Ender’s hive attacks in Orson Scott Card’s ‘Ender’s Game’. Imagine hearing that noise amplified 100 times, and the sky dotted with a flotilla of these. Smells like Skynet.
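The choreography in demos like the GRASP Lab’s can be approximated with classic flocking rules. Below is a minimal boids-style sketch – cohesion, separation, alignment – which is emphatically not the lab’s actual controller, just an illustration of how simple local rules produce coordinated swarm motion. All gains and radii are arbitrary values chosen for the example.

```python
# Minimal boids-style flocking sketch (illustrative only, not the GRASP Lab's
# controllers): each agent steers by three local rules - cohesion toward the
# flock centre, separation from close neighbours, alignment of velocities.

import math


def step(boids, dt=0.1):
    """boids: list of [x, y, vx, vy]; returns the list advanced by one Euler step."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        cx = sum(b[0] for b in others) / len(others)   # centre of the others
        cy = sum(b[1] for b in others) / len(others)
        ax = (cx - x) * 0.05                           # cohesion: drift toward centre
        ay = (cy - y) * 0.05
        for ox, oy, *_ in others:                      # separation: repel when too close
            d = math.hypot(x - ox, y - oy)
            if 0 < d < 1.0:
                ax += (x - ox) / d
                ay += (y - oy) / d
        avx = sum(b[2] for b in others) / len(others)  # alignment: match velocities
        avy = sum(b[3] for b in others) / len(others)
        ax += (avx - vx) * 0.1
        ay += (avy - vy) * 0.1
        new.append([x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt])
    return new


swarm = [[0, 0, 1, 0], [5, 5, 0, 1], [10, 0, -1, 0]]
for _ in range(200):
    swarm = step(swarm)
```

Each agent uses only its neighbours’ positions and velocities, no central commander, which is why the approach scales from three quadrotors to the swarms of thousands imagined above.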