
Tag: philosophy

The power of networks: distributed journalism, meme warfare, and collective intelligence

These are the slides for what was perhaps my favorite lecture so far in BCM112. The lecture has three distinct parts, presented by me and my PhD students Doug Simkin and Travis Wall. I opened by building on the previous lecture, which focused on the dynamics of networked participation, and expanded on the shift from passive consumption to produsage. The modalities of this shift are elegantly illustrated by the event-frame-story structure I developed to formalize the process of news production [it applies to any content production]. The event stage is where the original footage appears – it is often user generated, raw, messy, and of indeterminate context. The frame stage provides the filter for interpreting the raw data. The story stage is what is produced after the frame has done its work. In the legacy media paradigm the event and frame stages are closed to everyone except the authority figures responsible for story production – governments, institutions, journalists, academics, intellectuals, corporate content producers. This generates an environment where authority is dominant, and authenticity is whatever authority decides – the audience is passive and in a state of pure consumption. In the distributed media paradigm the entire process is open and can be entered by anyone at any point – event, frame, or story. This generates an environment where multiple event versions, frames, and stories compete for produser attention on an equal footing.

These dynamics have profound effects on information as a tool for persuasion and frame shifting, or in other words – propaganda. In legacy media, propaganda is a function of the dynamics of the paradigm: high cost of entry, high cost of failure, minimal experimentation, an inherent quality filter, limited competition, cartelization with limited variation, and inevitable stagnation.

In distributed media, propaganda is memes. Here too propaganda is a function of the dynamics of the paradigm, but those dynamics are characterized by collective intelligence as the default form of participation in distributed networks. In this configuration users act as a self-coordinating swarm working towards an emergent aggregate goal. The swarm's production time is orders of magnitude faster than that of legacy media, which results in correspondingly faster feedback loops and information dissemination.

The next part of the lecture, delivered by Doug Simkin, focused on a case study of the /SG/ threads on 4chan’s /pol/ board as an illustration of an emergent distributed swarm in action. This is an excellent case study as it focuses on real-world change produced with astonishing speed in a fully distributed manner.

The final part of the lecture, delivered by Travis Wall, focused on a case study of the #draftourdaughters memetic warfare campaign, which occurred on 4chan’s /pol/ board in the days preceding the 2016 US presidential election. This case study is a potent illustration of the ability of networked swarms to leverage fast feedback loops, rapid prototyping, error discovery, and distributed coordination in highly scalable content production.

Trajectories of convergence I: user empowerment, information access, and networked participation

These are slides from a lecture I delivered in the fifth week of BCM112, building on open-process arguments conceptualized in a lecture on the logic and aesthetics of digital production. My particular focus in this lecture was on examining the main dynamics of the audience trajectory in the process of convergence. I develop the conceptual frame around Richard Sennett's notion of dialogic media as ontologically distinct from monologic media, where the latter render a passive audience as listeners and consumers, while the former render conversational participants. I then build on this with Axel Bruns' ideas on produsage [a better term than prosumer], and specifically his identification of the new modalities of media in this configuration: a distributed generation of content, fluid movement of produsers between roles, digital artefacts remaining open and in a state of indeterminacy, and permissive ownership regimes enabling continuous collaboration. The key conceptual element here is that the entire chain of the process of production, aggregation, and curation of content is open to modification, and can be entered at any point.

The Medium is the Message II: craft, and the logic of digital making

Following from the opening lecture for BCM112, in which I laid the foundation for approaching digital media convergence from a McLuhan perspective, these are the prezi slides for the follow-up lecture focusing on the logic of digital production. I open the lecture with a fairly dense conceptual frame establishing the logic of craft and production in digital media, and then follow this up with a range of examples focusing on the aesthetics of glitch, hyper kawaii, vaporwave, and Twitch mess. Again, I build up the concept frame as a shift from the industrial logic of the assembly line to the internet's logic of mass-customization, where the new aesthetic form is characterized by rapid prototyping, experimentation, rapid error discovery, and open-process mods leading to unexpected outcomes. The key element of this logic-frame is that the openness of the process of digital making – all aspects of the object are open for modification even after release – leads to an emergent unpredictability of the end-result [there is no closure], and a resultant risk embedded in the process. This state of indeterminacy is how digital craft operates, and it is the risky openness that generates the new aesthetic of the medium.

The Medium is the Message I: trajectories of convergence

These are the prezi slides for my opening lecture in BCM112 Convergent Media Practices [live twitter #bcm112 hashtag]. The lecture is an introduction to the state of play in digital media, and specifically the open-ended process of media convergence as mapped by Henry Jenkins. I use McLuhan's work as a basic frame of reference through which to analyze the process, while focusing on the three distinct trajectories of audiences, industries, and platforms. It is a dual-layered analysis, where the interplay between the three trajectories drives the dynamics of the process, and changes in media platforms act as phase transitions shifting the process onto another plane. I illustrate this dynamic with a number of examples, ranging from papyrus to codex and hypertext, to the shift from newspaper to radio, and of course – the internet.

Comparative hierophany at three object scales

This is an extended chapter abstract I wrote for an edited collection titled Atmospheres of Scale and Wonder: Creative Practice and Material Ecologies in the Anthropocene, due by the end of this year. I first lay the groundwork in actor network theory, then develop the concept of hierophany borrowed from Eliade, and finally [where the fun begins] discuss the Amazon Echo, the icon of the Black Madonna of Częstochowa, and the asteroid 2010 TK7 residing in Earth Lagrangian point 4. An object from the internet of things, a holy icon, and an asteroid. To the best of my knowledge none of these objects have been discussed in this way before, either individually or together, and I am very excited to write this chapter.

Comparative hierophany at three object scales

What if we imagined atmosphere as a framing device for stabilizing material settings and sensibilities? What you call a fetishized idol is in my atmosphere a holy icon. What your atmosphere sees as an untapped oil field, I see as the land where my ancestral spirits freely roam. Your timber resource is someone else's sacred forest. This grotesque, and tragic, misalignment of agencies is born out of an erasure, a silencing, which then proceeds to repeat this act of forced purification across all possible atmospheres. This chapter unfolds within the conceptual space defined by this erasure of humility towards the material world. Mirroring its objects of discussion, the chapter is constructed as a hybrid.

First, it is grounded in three fundamental concepts from actor network theory known as the irreduction, relationality, and resistance-relation axioms. They construct an atmosphere where things respectively: can never be completely translated and therefore substituted by a stand-in; don’t need human speakers to act in their stead, but settings in which their speech can be recognized; resist relations while also being available for them. When combined, these axioms allow humans to develop a sensibility for the resistant availability of objects. Here, objects speak incessantly, relentlessly if allowed to, if their past is flaunted rather than concealed.

Building on that frame, the chapter adopts, with modifications, the notion of a hierophany, as developed by Mircea Eliade, as a conceptual frame for encountering the resistant availability of material artefacts. In its original meaning a hierophany stands for the material manifestation of a wholly other, sacred, order of being. Hierophanies are discontinuities, self-enclosed spheres of meaning. Arguably though, hierophanies emerging from the appearance of a sacred order in an otherwise profane material setting can be viewed as stabilizing techniques. They stabilize an atmospheric time, where for example sacred time is cyclical, while profane time is linear; and they stabilize an atmospheric space, where sacred space is imbued with presence by ritual and a plenist sensibility, while profane space is Euclidean, oriented around Cartesian coordinates and purified from sacred ritual.

Finally, building on these arguments, the chapter explores the variations of intensity of encounters with hierophanic presences at three scales, anchored by three objects. Three objects, three scales, three intensities of encounter. The first encounter is with the Amazon Echo, a mundane technical object gendered by its makers as Alexa. An artefact of the internet of things, Alexa is a speaker for a transcendental plane of big data and artificial intelligence algorithms, and therefore her knowledge and skills are ever expanding. The second encounter is with the icon of the Black Madonna of Częstochowa in Poland, a holy relic and a religious object. The icon is a speaker for a transcendental plane of a whole different order than Alexa, but crucially, I argue the difference to be not ontological but one of hierophanic intensities. The third encounter is with TK7, an asteroid residing in Earth Lagrangian Point 4, and discovered only in 2010. TK7 speaks for a transcendental plane of a wholly non-human order, because it is quite literally not of this world. All three objects have resistant availability at various intensities; all three exert a hierophanic pull on their surroundings, also at various intensities. Alexa listens, and relentlessly answers with a lag of less than a second. The Black Madonna icon listens, and may answer the prayers of pilgrims. TK7 is literally not of this world, a migratory alien object residing, for now, as a stable neighbor of ours.

Teaching digital media in a systemic way, while accounting for non-linearity

Recently I have been trying to formulate my digital media teaching and learning philosophy as a systemic framework. This is a posteriori work, because philosophies can be non-systemic, but systems are always based on a philosophy. I also don't think a teaching/learning system can ever be complete, because entropy and change are the only givens [even in academia]. It has to be understood as dynamic, and therefore more along the lines of rules of thumb as opposed to prescriptive dogma.

None of the specific elements of the framework I use are critical to its success, and the only axiom is that the elements have to form a coherent system. By coherence, I understand a dynamic setting where 1] the elements of the system are integrated both horizontally and vertically [more on that below], and 2] the system is bigger than the sum of its parts. The second point needs further elaboration, as I have often found that even highly educated people struggle with non-linear systems. Briefly, linear progression is utterly predictable and comfortable to build models in – increase x by 1 and the new state of the system will be x + 1; do this n times and it will sit at x + n. Nonlinear progression, by contrast, is utterly unpredictable and exhibits rapid deviations from whatever the fashionable mean is at the moment – increase x by 1 and the system may jump to some unrelated state y. Needless to say, one cannot model nonlinear systems over long periods of time, as the systems will inevitably deviate from the limited variables given in the model.
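To make the contrast concrete, here is a minimal sketch [my illustration, not part of the framework itself] comparing a linear map with the logistic map, a textbook nonlinear system. A model that starts a tiny fraction away from reality stays accurate forever in the linear case, and becomes useless within a few dozen steps in the nonlinear one.

```python
# Minimal sketch: a linear system tracks its model exactly, while the
# logistic map - a classic nonlinear system - amplifies a tiny
# measurement error until the model no longer describes the system.

def linear_step(x, c=1.0):
    return x + c                # after n steps: x + n, fully predictable

def logistic_step(x, r=3.9):
    return r * x * (1 - x)      # chaotic regime for r around 3.9

lin_model, lin_reality = 0.5, 0.5000001   # model is off by one part in ten million
log_model, log_reality = 0.5, 0.5000001

for step in range(1, 41):
    lin_model, lin_reality = linear_step(lin_model), linear_step(lin_reality)
    log_model, log_reality = logistic_step(log_model), logistic_step(log_reality)
    if step % 10 == 0:
        print(f"step {step:2d} | linear error: {abs(lin_model - lin_reality):.7f} "
              f"| nonlinear error: {abs(log_model - log_reality):.7f}")

# The linear error stays at 0.0000001 indefinitely; the nonlinear error
# grows to order 1 within ~40 steps - the model has parted ways with reality.
```

The pedagogical point is the one made above: any model of a nonlinear system, a classroom included, has a short shelf life.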

Axiom: all complex systems are nonlinear when exposed to time [even in academia].

The age of the moderns has configured us to think overwhelmingly in linear terms, while reality is and has always been regrettably non-linear [Nassim Taleb built a career pointing this out for fun and profit]. Unfortunately this mass delusion extends to education, where linear thinking rules across all disciplines. Every time you hear the “take these five exams and you will receive a certificate that you know stuff” mantra, you are encountering a manifestation of magical linear thinking. Fortunately, learning does not follow a linear progression, and is in fact one of the most non-linear processes we are ever likely to encounter as a species.

Most importantly, learning has to be understood as paradigmatically opposed to knowing facts, because the former is non-linear and relies on dynamic encounters with reality, while the latter is linear and relies on static encounters with models of reality.

With that out of the way, let’s get to the framework I have developed so far. There are two fundamental philosophical pillars framing the assessment structure in the digital media and communication [DIGC] subjects I have been teaching at the University of Wollongong [UOW], both informed by constructivist pedagogic approaches to knowledge creation [the subjects I coordinate are BCM112, DIGC202, and DIGC302].

1] The first of those pillars is the notion of content creation for a publicly available portfolio, expressed through the content formats students are asked to produce in the DIGC major.

Rule of thumb: all content creation without exception has to be non-prescriptive, where students are given starting points and asked to develop learning trajectories on their own – i.e. ‘write a 500 word blog post on surveillance using the following problems as starting points, and make a meme illustrating your argument’.

Rule of thumb: all content has to be publicly available, in order to expose students to nonlinear feedback loops – i.e. ‘my video has 20 000 views in three days – why is this happening?’ [first year student, true story].

Rule of thumb: all content has to be produced in aggregate in order to leverage nonlinear time effects on learning – i.e. ‘I suddenly discovered I taught myself Adobe Premiere while editing my videos for this subject’ [second year student, true story].

The formats students produce include, but are not limited to, short WordPress essays and comments, annotated Twitter links, YouTube videos, SoundCloud podcasts, single image semantically-rich memetic messages on Imgur, dynamic semantically-rich memetic messages on Giphy, and large-scale free-form media-rich digital artefacts [more on those below].

Rule of thumb: design for simultaneous, dynamic content production of varying intensity, in order to multiply interface points with the topic problematic – i.e. ‘this week you should write a blog post on distributed network topologies, make a video illustrating the argument, tweet three examples of distributed networks in the real world, and comment on three other student posts’.

2] The second pillar is expressed through the notion of horizontal and vertical integration of knowledge creation practices. This stands for a model of media production where the same assessments and platforms are used extensively across different subject areas at the same level and program of study [horizontal integration], as well as across levels and programs [vertical integration].

Rule of thumb: the higher the horizontal/vertical integration, the more content serendipity students are likely to encounter, and the more pronounced the effects of non-linearity on learning.

Crucially, and this point has to be strongly emphasized, the integration of assessments and content platforms both horizontally and vertically allows students to leverage content aggregates and scale up in terms of their output [non-linearity, hello again]. In practice, this means that a student taking BCM112 [a core subject in the DIGC major] will use the same media platforms also in BCM110 [a core subject for all communication and media studies students], but also in JOUR102 [a core subject in the journalism degree] and MEDA101 [a core subject in media arts]. This horizontal integration across 100 level subjects allows students to rapidly build up sophisticated content portfolios and leverage content serendipity.

Rule of thumb: always try to design for content serendipity, where content of topical variety coexists on the same platform – i.e. a multitude of subjects with blogging assessments allowing the student to use the same WordPress blog. When serendipity is actively encouraged it transforms content platforms into so many idea colliders with potentially nonlinear learning results.

Adding the vertical integration allows students to reuse the same platforms in their 200 and 300 level subjects across the same major, and/or other majors and programs. Naturally, this results in highly scalable content outputs, the aggregation of extensively documented portfolios of media production, and most importantly, the rapid nonlinear accumulation of knowledge production techniques and practices.

On digital artefacts

A significant challenge across the academy as a whole, and media studies as a discipline, is giving students the opportunity to work on projects with real-world implications and relevance – that is, projects with nonlinear outcomes aimed at real stakeholders, users, and audiences. The digital artefact [DA] assessment framework I developed along the lines of the model discussed above is a direct response to this challenge. The only limiting requirements for a DA are that 1] artefacts should be developed in public on the open internet, therefore leveraging non-linearity, collective intelligence and fast feedback loops, and 2] artefacts should have a clearly defined social utility for stakeholders and audiences outside the subject and program.

Rule of thumb: media project assessments should always be non-prescriptive in order to leverage non-linearity – i.e. ‘I thought I was fooling around with a drone, and now I have a start-up and have to learn how to talk to investors’ [second year student, true story].

Implementing the above rule of thumb means that you absolutely cannot structure and/or limit: 1] group numbers – in my subjects students can work with whoever they want, in whatever numbers and configurations, with people in and/or out of the subject, degree, or university; 2] the project topic – my students are expected to define the DA topic on their own, the only limitations being the criteria for public availability, social utility, and the broad confines of the subject area – i.e. digital media; 3] the project duration – I expect my students to approach the DA as a project that can be completed within the subject, but that can also be extended throughout the duration of the degree and beyond.

Digital artefact development rule of thumb 1: Fail Early, Fail Often [FEFO]

#fefo is a developmental strategy originating in the open source community, and first formalized by Eric Raymond in The Cathedral and the Bazaar. FEFO looks simple, but is the embodiment of a fundamental insight about complex systems. If a complex system has to last in time while interfacing with nonlinear environments, its best bet is to distribute and normalize risk taking [a better word for decision making] across its network, while also accounting for the systemic effects of failure within the system [see Nassim Taleb’s Antifragile for an elaboration]. In the context of teaching and learning, FEFO asks creators to push towards the limits of their idea, experiment at those limits and inevitably fail, and then immediately iterate through this very process, again and again. At the individual level the result of FEFO in practice is rapid error discovery and elimination, while at the systemic level it leads to a culture of rapid prototyping, experimentation, and ideation.

Digital artefact development rule of thumb 2: Fast, Inexpensive, Simple, Tiny [FIST]

#fist is a developmental strategy developed by Lt. Col. Dan Ward, Chief of Acquisition Innovation at USAF. It provides a rule-of-thumb framework for evaluating the potential and scope of projects, allowing creators to chart ideation trajectories within parameters geared for simplicity. In my subjects FIST projects have to be: 1] time-bound [fast], even if part of an ongoing process; 2] reusing existing easily accessible techniques [inexpensive], as opposed to relying on complex new developments; 3] constantly aiming away from fragility [simple], and towards structural simplicity; 4] small-scale with the potential to grow [tiny], as opposed to large-scale with the potential to crumble.

In the context of my teaching, starting with their first foray into the DIGC major in BCM112 students are asked to ideate, rapidly prototype, develop, produce, and iterate a DA along the criteria outlined above. Crucially, students are allowed and encouraged to have complete conceptual freedom in developing their DAs. Students can work alone or in a group, which can include students from different classes or outside stakeholders. Students can also leverage multiple subjects across levels of study to work on the same digital artefact [therefore scaling up horizontally and/or vertically]. For example, they can work on the same project while enrolled in DIGC202 and DIGC302, or while enrolled in DIGC202 and DIGC335. Most importantly, students are encouraged to continue working on their projects even after a subject has been completed, which potentially leads to projects lasting for the entirety of their degree, spanning 3 years and a multitude of subjects.

In an effort to further ground the digital artefact framework in real-world practices in digital media and communication, DA creators from BCM112, DIGC202, and DIGC302 have been encouraged to collaborate with and initiate various UOW media campaigns aimed at students and outside stakeholders. Such successful campaigns as Faces of UOW, UOW Student Life, and UOW Goes Global all started as digital artefacts in DIGC202 and DIGC302. In this way, student-created digital media content is leveraged by the University and by the students for their digital artefacts and media portfolios. To date, DIGC students have developed digital artefacts for UOW Marketing, URAC, UOW College, Wollongong City Council, and a range of businesses. A number of DAs have also evolved into viable businesses.

In line with the opening paragraph I will stop here, even though [precisely because] this is an incomplete snapshot of the framework I am working on.

Celestial Emporium

In his essay on the Analytical Language of John Wilkins, Borges ‘quotes’ the following passage from a certain Chinese encyclopedia entitled Celestial Emporium of Benevolent Knowledge. In its remote pages it is written that all animals are divided into:

(a) belonging to the emperor, (b) embalmed ones, (c) tame ones, (d) suckling pigs, (e) mermaids or sirens, (f) fabulous ones, (g) stray dogs, (h) those included in the present classification, (i) those that tremble as if mad, (j) innumerable ones, (k) drawn with a very fine camel hair brush, (l) et cetera, (m) those that have just broken the flower vase, (n) those that, at a distance, resemble flies.


Object Hierophanies and the Mode of Anticipation

As I posted earlier, I am participating in a panel on data natures at the International Symposium on Electronic Art [ISEA] in Hong Kong. My paper is titled Object Hierophanies and the Mode of Anticipation, and discusses the transition of big data-driven IoT objects such as the Amazon Echo to a mode of operation where they appear as a hierophany – after Mircea Eliade – of a higher modality of being, and render the loci in which they exist into a mode of anticipation.

I start with a brief section on the logistics of the IoT, focusing on the fact that it involves physical objects monitoring their immediate environments through a variety of sensors, transmitting the acquired data to remote networks, and initiating actions based on embedded algorithms and feedback loops. The context data produced in the process is by definition transmitted to and indexed in a remote database, from the perspective of which the contextual data is the object.
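That logistical loop is easy to caricature in code. Below is a minimal sketch of the sense-transmit-actuate cycle; every name in it [the sensor reading, the cloud endpoint, the action] is a hypothetical stand-in of mine, not any real device’s API:

```python
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://iot.example.com/ingest"  # hypothetical remote network

def read_sensor():
    # Stand-in for a real sensor driver monitoring the local environment.
    return {"device": "echo-like-01", "ts": time.time(), "sound_level": 42.0}

def transmit(reading):
    # The local context is serialized and indexed remotely; from the
    # database's perspective, this record now *is* the object.
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # feedback computed on the remote side

def actuate(decision):
    # Embedded feedback loop: act locally on remotely computed instructions.
    if decision.get("action") == "speak":
        print(decision.get("text", ""))

while True:
    actuate(transmit(read_sensor()))  # sense -> transmit -> actuate
    time.sleep(60)
```

The point of the sketch is the direction of travel: the decision is never made locally.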

The Amazon Echo continuously listens to all sounds in its surroundings, and reacts to the wake word Alexa. It interacts with its interlocutors through a female-sounding interface called the Alexa Voice Service [AVS], which Amazon has made available to third-party hardware makers. What is more, the core algorithms of AVS, known as the Alexa Skills Kit [ASK], are open to developers too, making it easy for anyone to teach Alexa a new ‘skill’. The key dynamic in my talk is the fact that human and non-human agencies, translated by the Amazon Echo as data, are transported to the transcendental realm of the Amazon Web Services [AWS], where they are modulated, stored for future reference, and returned as an answering Echo. In effect, the nature of an IoT enabled object appears as the receptacle of an exterior force that differentiates it from its milieu and gives it meaning and value in unpredictable ways.
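To give a sense of how low that barrier is: a custom skill is, at bottom, a web handler that receives a JSON description of a recognized intent and returns JSON for Alexa to speak. Here is a minimal sketch of such a handler as an AWS Lambda function; the intent name and reply are hypothetical examples of mine, not anything from the paper:

```python
def lambda_handler(event, context):
    """Minimal AWS Lambda handler for a hypothetical custom Alexa skill.
    Alexa POSTs a JSON request describing the recognized intent; the
    handler returns JSON that the Alexa Voice Service renders as speech."""
    intent = (event.get("request", {})
                   .get("intent", {})
                   .get("name", "unknown"))

    if intent == "HierophanyFactIntent":  # hypothetical intent name
        speech = "A hierophany is a manifestation of another order of being."
    else:
        speech = "I do not know that skill yet."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

Every exchange this handler serves follows the loop described above: the utterance leaves the room as data, is resolved against remote models, and returns as Alexa’s voice.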

Objects such as the Echo acquire their value, and in so doing become real for their interlocutors, only insofar as they participate in one way or another in remote data realities transcending the locale of the object. Insofar as the data gleaned by such devices has predictive potential when viewed in aggregate, the enactment of this potential in a local setting is always already a singular act of manifestation of a transcendental data nature with an overriding level of agency.

In his work on non-modern notions of sacred space, philosopher of religion Mircea Eliade conceptualized this act of manifestation of another modality of being into a local setting as a hierophany. Hierophanies are not continuous, but wholly singular acts of presence by a different modality. By manifesting that modality, which Eliade termed the sacred, an object becomes the receptacle for a transcendental presence, yet simultaneously continues to remain inextricably entangled in its surrounding milieu. I argue that there is a strange similarity between non-modern imaginaries of hierophany as a gateway to the sacred, and IoT enabled objects transducing loci into liminal and opaque data taxonomies looping back as a black-boxed echo. The Echo, through the voice of Alexa, is in effect the hierophanic articulator of a wholly non-human modality of being.

Recently, Sally Applin and Michael Fischer have argued that, when aggregated within a particular material setting, sociable objects form what is in effect an anticipatory materiality acting as a host to human interlocutors. The material setting becomes anticipatory because of the implied sociability of its component objects, allowing them to not only exchange data about their human interlocutor, but also draw on remote data resources, and then actuate based on the parameters of that aggregate social memory.

In effect, humans and non-humans alike are rendered within a flat ontology of anticipation, waiting for the Echo.

Here is the video of my presentation:

And here are the prezi slides:

On mapping: from actual facts to factual acts

Some time ago I was invited to give a lecture on mapping to a crowd of mostly first year digital media students working on locative media projects. Below are the prezi slides. Considering the audience, I kept the theory introduction light, focusing on the notions of representation and the factual, and then moved to discussing various examples of maps as interfaces to movement and agency. My talk was mostly a simplified version of my paper on mapping theory, with a focus on the dynamics of translation and transportation of immutable mobiles – a fundamental concept in actor network theory. In essence, the lecture is built around a dichotomy between two concepts of mapping: 1] mapping as a representation of a static frame of reference – an actual fact, and 2] mapping as a translation of and an interface to agency and movement – a factual act. The tension between actual facts and factual acts is a nerdy reference to Latour’s shift from matters of fact to matters of concern, and is intended to illustrate the affordances of digital media in opening and mapping black-boxed settings. Apparently, the lecture was a success, with the San Andreas Streaming Deer Cam being a crowd favorite.

Stop dehumanizing robots

Here is a video of what, if there were only humans involved, would be considered a case of serious abuse and be met with counselling for all parties involved. The video is of a robot trying to evade a group of children abusing it. It is part of two projects titled “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Sachie Yamada from Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.

Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is such an obvious issue once you start thinking about it. You have a confluence of Luddism [rage against the machines] in all its technophobic varieties – from the economic [robots are taking our jobs] to the quasi-religious [robots are inhuman and alien] – with the conviction that ‘this is just a machine’ and therefore violence against it is not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they could be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse the tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, which is a quality particular to conscious organisms.

The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become more and more enmeshed and entangled in close-body digital augmentation-nets [think Fitbit], they naturally start reifying their humanity with the language of machines [think the quantified self movement]. If that is the case, then why not do the same for the other side, and start reifying machines with the language of humans – i.e. anthropomorphize and animate them?