Aug 10 2015

Here is a video of what, if there were only humans involved, would be considered a case of serious abuse and be met with counselling for all parties involved. The video is of a robot trying to evade a group of children abusing it. It is part of two projects titled “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro, and Sachie Yamada from Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.

Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is such an obvious issue once you start thinking about it. You have a confluence of Luddism [rage against the machines] in all its technophobic varieties – from economic [robots are taking our jobs] to quasi-religious [robots are inhuman and alien] – with the conviction that ‘this is just a machine’ and therefore violence against it is not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they could be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse that tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, a quality particular to conscious organisms.
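The distinction between a tropic machine and an AI-infused one can be sketched in a few lines of toy code. This is purely illustrative – the stimuli, reactions, and function names are all hypothetical – but it captures the contrast: a fixed stimulus–reaction table versus a stochastic policy whose variability reads to an observer as something like intent.

```python
import random

# A purely "tropic" machine: a fixed mapping from stimulus to reaction.
TROPISMS = {
    "light": "move_toward",
    "obstacle": "turn_away",
    "touch": "stop",
}

def tropic_react(stimulus):
    """Same stimulus, same reaction, every time."""
    return TROPISMS.get(stimulus, "idle")

def ai_react(stimulus):
    """A stochastic policy: the same stimulus can yield different
    reactions drawn from a repertoire, so behaviour is no longer
    predictable stimulus-by-stimulus."""
    repertoire = {
        "light": ["move_toward", "circle", "ignore"],
        "obstacle": ["turn_away", "pause", "probe"],
        "touch": ["stop", "retreat", "nudge_back"],
    }
    return random.choice(repertoire.get(stimulus, ["idle"]))

# The tropic machine is perfectly predictable...
assert all(tropic_react("light") == "move_toward" for _ in range(5))
# ...while the stochastic one merely stays within its repertoire.
assert ai_react("light") in {"move_toward", "circle", "ignore"}
```

The point of the sketch is that nothing in the second function is conscious – it is still a reaction table, just a non-deterministic one – yet that alone is enough to make its behaviour look organism-like from the outside.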

The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become more and more enmeshed and entangled in close-body digital augmentation-nets [think FitBit], they naturally start reifying their humanity in the language of machines [think the quantified self movement]. If that is the case, then why not do the same for the other side, and start reifying machines in the language of humans – i.e. anthropomorphise and animate them?

Aug 08 2015

I think that the Internet of Things [IoT] for the masses will first manifest itself, paradoxically, through branded and high-end objects, because that is the usual vector for popularizing a new technical affordance. Across the entire nascent industry of trackers – from the Nest line of ambient ‘see everything’ devices and the FitBit ecology of wearable trackers to the Sen.se Mother household tracking hub – the trajectory is to appeal to the aspiring [and lately shrinking] middle class and above. Enter Rémy Martin IoT bottles for the Chinese market, positioned around the notion of authenticity. How do you get authenticity in this age of fakes? You connect to the internet, of course, and personally register your bottle for that extra stamp of approval: someone somewhere has recognized your conspicuous consumption.

I think the two big trajectories along which we will experience the IoT are nicely illustrated by this ad. The lumpen-proletariat gets the surveillance end, while the middle class and everyone above get authenticity and personalization.

Jul 20 2015

To understand the effects, affordances, and contextual implications of cars one has to imagine not a single car, but the mind-bogglingly dull commute in a suburban traffic jam. Similarly, to understand the affordances of drones and UAVs [unmanned aerial vehicles – a terrible term] one has to imagine the sky permeated by networked machines; from micro-drones suitable as toys and message relays, to massive permanent-hover drones suitable for advertising, surveillance, and – inevitably – policing. Enter The Drone Aviary – an R&D project from The Superflux Lab.

The Drone Aviary reveals fleeting glimpses of the city from the perspective of drones. It explores a world where the ‘network’ begins to gain physical autonomy. Drones become protagonists, moving through the city, making decisions about the world and influencing our lives in often opaque yet profound ways.

Nov 04 2014

This is a Prezi of the research talk I gave at an Institute for Social Transformation Research (ISTR) seminar last week. I played with ideas going into several papers I am working on at the moment, but my main focus was on anticipatory materiality and the notion of liquid objects. Here is the abstract for the talk:

As internet-connected objects become more and more sociable – smart fridge, smart car, etc. – they become less and less ‘stable’ (think of rocks, coffee mugs, etc. as examples of material stability), and more and more like a twitter feed. 3D printing only compounds this process as the material is literally liquefied and injected based on computer code – in effect the code is primary, and tangible materiality is secondary in this process. The resulting materiality is literally ‘on demand’ – in that it exists as relational data first and foremost and as material artefact only when demanded; and anticipatory – in that the main characteristic of connected objects is their capacity to initiate action based on predictive algorithms. My argument is structured as a provocation examining the notion of anticipatory materiality in the context of the internet of things and 3D printing.

Oct 01 2014

This is a text I’ve been working on, or rather keeping in the back of my mind, for quite a while, and now it’s finished and sent to the Fibreculture Journal. The early beta was presented at a conference in Istanbul in 2011, and my thinking on sociable objects has evolved quite a bit since then. The key shift in my thinking was facilitated by a series of chance encounters – discovering object-oriented ontology through Ian Bogost’s Alien Phenomenology, finding the notion of affective resonance in Jane Bennett’s Vibrant Matter, and rediscovering the heteroclite in Lorraine Daston’s awesome Things That Talk.

Sep 21 2014

This semester I’ve started uploading my lectures for DIGC202 Global Networks to YouTube, while abandoning the face-to-face lecture format in that subject. The obvious benefit of this shift is to allow students to engage with the lectures on their own terms – the lectures are broken into segments which can be accessed discretely or in sequence, on any device, at any time. The legacy alternative would have been either attending a physical lecture or listening to the university-provided recording – an hour-long file hidden within the caverns of the university intranet, accessible only from a computer [must keep that knowledge away from prying eyes!], and, as a rule, of terrible quality. Anecdotal evidence from students already validates the decision to shift, as it gives them the ability to structure their learning activities in a format that works for them.

The meta-benefit is that the lectures – and therefore my labour – now exist within a generative value ecology on the open net, accessible to [gasp] people outside the university. On a more strategic level, I can now annotate the lectures as I go along, adding links to additional content which will only enrich the experience. In that sense the lectures stop being an end-product, an artefact of dead labour [dead as in dead-end], and become an open process.

The only downside I have had to deal with so far is that lecture preparation, delivery, and post-production take me on average three times as long as the legacy model. I am still experimenting with the process and learning on the go – fail early, fail often.

I am uploading all lectures to a DIGC202 playlist, which can be accessed below: