This is a YouTube playlist of my lectures in BCM206 Future Networks, covering the story of information networks from the invention of the telegraph to the internet of things. The lecture series begins with the invention of the telegraph and the first great wiring of the planet. I tie this to the historical context of the US Civil War and the expansion of European colonial power, then to the work of Charles Babbage and Ada Lovelace, followed by that of Tesla, Bell, and Turing. I close with the Second World War, which acts as a terminus and marker for the paradigm shift from telegraph to computer. Each of the weekly topics is big enough to deserve its own lecture series, so by necessity I cover a lot of ground, focusing on key tropes emergent from the new networked society paradigm – the separation of information from matter, the global brain, the knowledge society, the electronic frontier – and examining their role in our complex cyberpunk present.
These are the slides for what was perhaps my favorite lecture so far in BCM112. The lecture has three distinct parts, presented by me and my PhD students Doug Simkin and Travis Wall. I opened by building on the previous lecture, which focused on the dynamics of networked participation, and expanded on the shift from passive consumption to produsage. The modalities of this shift are elegantly illustrated by the event-frame-story structure I developed to formalize the process of news production [it applies to any content production]. The event stage is where the original footage appears – it is often user-generated, raw, messy, and of indeterminate context. The frame stage provides the filter for interpreting the raw data. The story stage is what is produced after the frame has done its work. In the legacy media paradigm the event and frame stages are closed to everyone except the authority figures responsible for story production – governments, institutions, journalists, academics, intellectuals, corporate content producers. This generates an environment where authority is dominant and authenticity is whatever authority decides – the audience is passive and in a state of pure consumption. In the distributed media paradigm the entire process is open and can be entered by anyone at any point – event, frame, or story. This generates an environment where multiple event versions, frames, and stories compete for produser attention on an equal footing.
These dynamics have profound effects on information as a tool for persuasion and frame shifting – in other words, propaganda. In legacy media, propaganda is a function of the dynamics of the paradigm: high cost of entry, high cost of failure, minimal experimentation, an inherent quality filter, limited competition, cartelization with limited variation, and inevitable stagnation.
In distributed media, propaganda is memes. Here too propaganda is a function of the dynamics of the paradigm, but those are characterized by collective intelligence as the default form of participation in distributed networks. In this configuration users act as a self-coordinating swarm working towards an emergent aggregate goal. The swarm's production time is orders of magnitude faster than that of legacy media, resulting in correspondingly faster feedback loops and information dissemination.
The next part of the lecture, delivered by Doug Simkin, focused on a case study of the /SG/ threads on 4chan’s /pol/ board as an illustration of an emergent distributed swarm in action. This is an excellent case study as it focuses on real-world change produced with astonishing speed in a fully distributed manner.
The final part of the lecture, delivered by Travis Wall, focused on a case study of the #draftourdaughters memetic warfare campaign, which occurred on 4chan’s /pol/ board in the days preceding the 2016 US presidential election. This case study is a potent illustration of the ability of networked swarms to leverage fast feedback loops, rapid prototyping, error discovery, and distributed coordination in highly scalable content production.
This is a lecture I thoroughly enjoyed preparing, and had great fun delivering to my first-year digital media class of 200 students. The prezi slides are below. My intention was to provoke students into thinking in interesting and weird ways about remediation across media platforms, about object animation through digital means, and about the new aesthetics of the glitch and hyper kawaii. I ended up being more successful than I expected, in that the lecture provoked extreme reactions, oscillating between strong rejection of its very premises and enthusiastic exploration of the implications and pathways they opened. I start with a quick overview of the changing meaning of craft in a time of digital mediation, then move on to the aesthetics of remediation between analog and digital forms, and to object animation and its effect on experiences of the material.
I constructed the main argument around the transition from an industrial culture, in which production is determined by the logic of the assembly line, to a post-industrial culture, in which production is determined by the logic of mass customization. Arguably, the latter is characterized by rapid prototyping, experimentation, iterative error discovery, and modifications leading to unexpected outcomes. I illustrate this with a beautiful quote by David Pye, from his The Nature and Art of Workmanship, where he argues that while industrial manufacturing is characterized by the workmanship of certainty, craftsmanship is always the workmanship of risk, because the quality of the result is unknown during the process of making.
My favorite part of the lecture is where I managed to integrate phenomena such as glitch aesthetics and hyper kawaii – exemplified by Julie Watai and xMinks, with a cameo by Microsoft’s ill-fated Tay AI bot – into a single narrative.
The image I used as a canvas for the prezi is a remediation of the Amen Break 6-second loop into a 3D-printed sound wave, crafted by a student of mine last year.
Here is a video of what, had only humans been involved, would be considered a case of serious abuse and met with counselling for all parties involved. The video shows a robot trying to evade a group of children abusing it. It is part of two projects: “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro, and Sachie Yamada from Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, both presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.
Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is such an obvious issue once you start thinking about it. You have a confluence of ludism [rage against the machines] in all its technophobic varieties – from the economic [robots are taking our jobs] to the quasi-religious [robots are inhuman and alien] – with the conviction that ‘this is just a machine’ and that violence against it is therefore not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they can be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse this tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, a quality particular to conscious organisms.
The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become more and more enmeshed and entangled in close-body digital augmentation-nets [think FitBit], they naturally start reifying their humanity in the language of machines [think the quantified self movement]. If that is the case, then why not do the same from the other side, and start reifying machines in the language of humans – i.e. anthropomorphise and animate them?