This is a YouTube playlist of my lectures in BCM206 Future Networks, covering the story of information networks from the invention of the telegraph to the internet of things. The lecture series begins with the invention of the telegraph and the first great wiring of the planet. I tie this to the historical context of the US Civil War, the expansion of European colonial power, and the work of Charles Babbage and Ada Lovelace, followed by that of Tesla, Bell, and Turing. I close with the Second World War, which acts as a terminus and marker for the paradigm shift from telegraph to computer. Each of the weekly topics is big enough to deserve its own lecture series, so by necessity I cover a lot, focusing on the key tropes emerging from the new networked society paradigm – i.e. the separation of information from matter, the global brain, the knowledge society, the electronic frontier – and examining their role in our complex cyberpunk present.
These are the slides for what was perhaps my favorite lecture so far in BCM112. The lecture has three distinct parts, presented by me and my PhD students Doug Simkin and Travis Wall. I opened by building on the previous lecture, which focused on the dynamics of networked participation, and expanded on the shift from passive consumption to produsage. The modalities of this shift are elegantly illustrated by the event-frame-story structure I developed to formalize the process of news production [it applies to any content production]. The event stage is where the original footage appears – it is often user-generated, raw, messy, and of indeterminate context. The frame stage provides the filter for interpreting the raw data. The story stage is what is produced after the frame has done its work. In the legacy media paradigm the event and frame stages are closed to everyone except the authority figures responsible for story production – governments, institutions, journalists, academics, intellectuals, corporate content producers. This generates an environment where authority is dominant and authenticity is whatever authority decides – the audience is passive and in a state of pure consumption. In the distributed media paradigm the entire process is open and can be entered by anyone at any point – event, frame, or story. This generates an environment where multiple event versions, frames, and stories compete for produser attention on an equal footing.
These dynamics have profound effects on information as a tool for persuasion and frame shifting, or in other words – propaganda. In legacy media propaganda is a function of the dynamics of the paradigm: high cost of entry, high cost of failure, minimum experimentation, inherent quality filter, limited competition, cartelization with limited variation, and an inevitable stagnation.
In distributed media propaganda is memes. Here too propaganda is a function of the dynamics of the paradigm, but those dynamics are characterized by collective intelligence as the default form of participation in distributed networks. In this configuration users act as a self-coordinating swarm working towards an emergent aggregate goal. The swarm's production time is orders of magnitude faster than that of legacy media, which results in orders of magnitude faster feedback loops and information dissemination.
The next part of the lecture, delivered by Doug Simkin, focused on a case study of the /SG/ threads on 4chan’s /pol/ board as an illustration of an emergent distributed swarm in action. This is an excellent case study as it focuses on real-world change produced with astonishing speed in a fully distributed manner.
The final part of the lecture, delivered by Travis Wall, focused on a case study of the #draftourdaughters memetic warfare campaign, which occurred on 4chan’s /pol/ board in the days preceding the 2016 US presidential election. This case study is a potent illustration of the ability of networked swarms to leverage fast feedback loops, rapid prototyping, error discovery, and distributed coordination in highly scalable content production.
These are slides from a lecture I delivered in the fifth week of BCM112, building on open-process arguments conceptualized in a lecture on the logic and aesthetics of digital production. My particular focus in this lecture was on examining the main dynamics of the audience trajectory in the process of convergence. I develop the conceptual frame around Richard Sennett's notion of dialogic media as ontologically distinct from monologic media, where the latter render a passive audience as listeners and consumers, while the former render conversational participants. I then build on this with Axel Bruns' ideas on produsage [a better term than prosumer], and specifically his identification of the new modalities of media in this configuration: a distributed generation of content, fluid movement of produsers between roles, digital artefacts remaining open and in a state of indeterminacy, and permissive ownership regimes enabling continuous collaboration. The key conceptual element here is that the entire chain of the process of production, aggregation, and curation of content is open to modification, and can be entered at any point.
Following from the opening lecture for BCM112, in which I laid the foundation for approaching digital media convergence from a McLuhan perspective, these are the prezi slides for the follow-up lecture focusing on the logic of digital production. I open the lecture with a fairly dense conceptual frame establishing the logic of craft and production in digital media, and then follow this up with a range of examples focusing on the aesthetics of glitch, hyper kawaii, vaporwave, and Twitch mess. Again, I build up the concept frame as a shift from the industrial logic of the assembly line to the internet's logic of mass customization, where the new aesthetic form is characterized by rapid prototyping, experimentation, rapid error discovery, and open-process mods leading to unexpected outcomes. The key element of this logic-frame is that the openness of the process of digital making – all aspects of the object are open for modification even after release – leads to an emergent unpredictability of the end-result [there is no closure], and a resultant risk embedded in the process. This state of indeterminacy is how digital craft operates, and it is this risky openness that generates the new aesthetic of the medium.
This is a lecture I thoroughly enjoyed preparing, and had great fun delivering to my first year digital media class of 200 students. The prezi slides are below. My intention was to provoke students into thinking in interesting and weird ways about remediation across media platforms, about object animation through digital means, and about the new aesthetics of the glitch and hyper kawaii. I ended up being more successful than I expected, in that the lecture provoked extreme reactions oscillating between strong rejection of its very premises and enthusiastic exploration of the implications and pathways opened by them. I start with a quick overview of the changing meaning of craft in a time of digital mediation, then move on to the aesthetics of remediation between analog and digital forms, and object animation and its effect on experiences of the material.
I constructed the main argument around the transition from an industrial culture in which production is determined by the logic of the assembly line, to a post-industrial culture in which production is determined by the logic of mass customization. Arguably, the latter is characterized by rapid prototyping, experimentation, iterative error discovery, and modifications leading to unexpected outcomes. I illustrate this with a beautiful quote by David Pye, from his The Nature and Art of Workmanship, where he argues that while industrial manufacturing is characterized by the workmanship of certainty, craftsmanship is always the workmanship of risk, because the quality of the result is unknown during the process of making.
My favorite part of the lecture is where I managed to integrate into a single narrative phenomena such as glitch aesthetics and hyper kawaii, exemplified by Julie Watai and xMinks, with a cameo by Microsoft’s ill-fated Tay AI bot.
The image I used as canvas for the prezi is a remediation of the Amen Break 6-second loop into a 3D-printed sound wave, crafted by a student of mine last year.
Some time ago I was invited to give a lecture on mapping to a crowd of mostly first year digital media students working on locative media projects. Below are the prezi slides. Considering the audience, I gave a light theoretical introduction focusing on the notions of representation and the factual, and then moved to discussing various examples of maps as interfaces to movement and agency. My talk was mostly a simplified version of my paper on mapping theory, with a focus on the dynamics of translation and transportation of immutable mobiles – a fundamental concept in actor network theory. In essence, the lecture is built around a dichotomy between two concepts of mapping: 1] mapping as a representation of a static frame of reference – an actual fact, and 2] mapping as a translation of and an interface to agency and movement – a factual act. The tension between actual facts and factual acts is a nerdy reference to Latour's from matters of fact to matters of concern, and is intended to illustrate the affordances of digital media in opening and mapping black-boxed settings. Apparently, the lecture was a success, with the San Andreas Streaming Deer Cam being a crowd favorite.
This semester I’ve started uploading my lectures for DIGC202 Global Networks to YouTube, while abandoning the face-to-face lecture format in that subject. The obvious benefit of this shift is that it allows students to engage with the lectures on their own terms – the lectures are broken into segments which can be accessed discretely or in sequence, on any device, at any time. The legacy alternative would have been either attending a physical lecture or listening to the university-provided recording, which is an hour-long file hidden within the caverns of the university intranet, accessible only from a computer [must keep that knowledge away from prying eyes!], and, as a rule, of terrible quality. Anecdotal evidence from students already validates my decision to shift, as this gives them the ability to structure their learning activities in a format productive for them.
The meta-benefit is that the lectures – and therefore my labour – now exist within a generative value ecology on the open net, accessible to [gasp] people outside the university. On a more strategic level, I can now annotate the lectures as I go along, adding links to additional content which will only enrich the experience. In that sense the lectures stop being an end-product, an artefact of dead labour [dead as in dead-end], and become an open process.
The only downside I have had to deal with so far is that lecture preparation, delivery, and post-production now take me on average three times as long as they did under the legacy model. I am still experimenting with the process and learning on the go – fail early, fail often.
I am uploading all lectures to a DIGC202 playlist, which can be accessed below: