Feb 20, 2017
 

This is an extended chapter abstract I wrote for an edited collection titled Atmospheres of Scale and Wonder: Creative Practice and Material Ecologies in the Anthropocene, due by the end of this year. I am first laying the groundwork in actor network theory, then developing the concept of hierophany borrowed from Eliade, and finally [where the fun begins] discussing the Amazon Echo, the icon of the Black Madonna of Częstochowa, and the asteroid 2010 TK7 residing in Earth Lagrangian point 4. An object from the internet of things, a holy icon, and an asteroid. To the best of my knowledge none of these objects has been discussed in this way before, either individually or together, and I am very excited to write this chapter.

Comparative hierophany at three object scales

What if we imagined atmosphere as a framing device for stabilizing material settings and sensibilities? What you call a fetishized idol is in my atmosphere a holy icon. What your atmosphere sees as an untapped oil field, I see as the land where my ancestral spirits freely roam. Your timber resource is someone else's sacred forest. This grotesque and tragic misalignment of agencies is born out of an erasure, a silencing, which then proceeds to repeat this act of forced purification across all possible atmospheres. This chapter unfolds within the conceptual space defined by this erasure of humility towards the material world. Mirroring its objects of discussion, the chapter is constructed as a hybrid.

First, it is grounded in three fundamental concepts from actor network theory known as the irreduction, relationality, and resistance-relation axioms. They construct an atmosphere where things, respectively: can never be completely translated and therefore substituted by a stand-in; don't need human speakers to act in their stead, but rather settings in which their speech can be recognized; and resist relations while also being available for them. When combined, these axioms allow humans to develop a sensibility for the resistant availability of objects. Here, objects speak incessantly, relentlessly if allowed to, if their past is flaunted rather than concealed.

Building on that frame, the chapter adopts, with modifications, Mircea Eliade's notion of a hierophany as a conceptual frame for encountering the resistant availability of material artefacts. In its original meaning a hierophany stands for the material manifestation of a wholly other, sacred order of being. Hierophanies are discontinuities, self-enclosed spheres of meaning. Arguably though, hierophanies emerging from the appearance of a sacred order in an otherwise profane material setting can be viewed as stabilizing techniques. They stabilise an atmospheric time, where for example sacred time is cyclical while profane time is linear; and they stabilise an atmospheric space, where sacred space is imbued with presence by ritual and a plenist sensibility, while profane space is Euclidean, oriented around Cartesian coordinates and purified of sacred ritual.

Finally, building on these arguments, the chapter explores the variations of intensity of encounters with hierophanic presences at three scales, anchored by three objects. Three objects, three scales, three intensities of encounter. The first encounter is with the Amazon Echo, a mundane technical object gendered by its makers as Alexa. An artefact of the internet of things, Alexa is a speaker for a transcendental plane of big data and artificial intelligence algorithms, and therefore her knowledge and skills are ever expanding. The second encounter is with the icon of the Black Madonna of Częstochowa in Poland, a holy relic and a religious object. The icon is a speaker for a transcendental plane of a whole different order than Alexa, but crucially, I argue the difference to be not ontological but one of hierophanic intensities. The third encounter is with TK7, an asteroid resident in Earth Lagrangian Point 4, and discovered only in 2010. TK7 speaks for a transcendental plane of a wholly non-human order, because it is quite literally not of this world. All three objects have resistant availability at various intensities; all three have a hierophanic pull on their surroundings, also at various intensities. Alexa listens, and relentlessly answers with a lag of less than a second. The Black Madonna icon listens, and may answer the prayers of pilgrims. TK7, a migratory alien object, resides, for now, as a stable neighbour of ours.

Feb 20, 2017
 

Recently I have been trying to formulate my digital media teaching and learning philosophy as a systemic framework. This is a posteriori work, because philosophies can be non-systemic, but systems are always based on a philosophy. I also don't think a teaching/learning system can ever be complete, because entropy and change are the only givens [even in the academy]. It has to be understood as dynamic, and therefore more along the lines of rules of thumb than prescriptive dogma.

None of the specific elements of the framework I use are critical to its success, and the only axiom is that the elements have to form a coherent system. By coherence, I understand a dynamic setting where 1] the elements of the system are integrated both horizontally and vertically [more on that below], and 2] the system is bigger than the sum of its parts. The second point needs further elaboration, as I have often found that even highly educated people really struggle with non-linear systems. Briefly, linear progression is utterly predictable [x + 1 + 1 … = x + n] and comfortable to build models in – i.e. if you increase x by 1, the new state of the system will be x + 1. Nonlinear progression, by contrast, is utterly unpredictable and exhibits rapid deviations from whatever the fashionable mean of the moment is – i.e. x + 1 = y. Needless to say, one cannot reliably model nonlinear systems over long periods of time, as the systems will inevitably deviate from the limited variables given in the model.
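
The contrast can be sketched as a toy simulation. The logistic map below is a textbook nonlinear system [my illustration, not part of the framework itself]: under the linear rule a microscopic difference between two starting states stays microscopic, while under the nonlinear rule the same difference is amplified until the two trajectories have nothing to do with each other – which is exactly why long-range models fail.

```python
# Toy illustration: linear systems preserve small differences,
# chaotic nonlinear systems amplify them beyond recognition.

def linear_step(x):
    return x + 1.0            # x, x+1, x+2, ... fully predictable

def logistic_step(x, r=4.0):
    return r * x * (1.0 - x)  # classic nonlinear (chaotic) map

def gap_after(step, a, b, n):
    """Largest distance between two trajectories over n steps."""
    worst = abs(a - b)
    for _ in range(n):
        a, b = step(a), step(b)
        worst = max(worst, abs(a - b))
    return worst

a, b = 0.2, 0.2 + 1e-9        # two almost identical starting states

print(gap_after(linear_step, a, b, 200))    # stays around 1e-9: the model keeps working
print(gap_after(logistic_step, a, b, 200))  # grows to order 1: the model breaks down
```

The point is not the specific map but the asymmetry: prediction error grows additively in the linear case and exponentially in the nonlinear one.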

Axiom: all complex systems are nonlinear when exposed to time [even in the academy].

The age of the moderns has configured us to think overwhelmingly in linear terms, while reality is and has always been regrettably non-linear [Nassim Taleb built a career pointing this out for fun and profit]. Unfortunately this mass delusion extends to education, where linear thinking rules across all disciplines. Every time you hear the "take these five exams and you will receive a certificate that you know stuff" mantra you are encountering a manifestation of magical linear thinking. Fortunately, learning does not follow a linear progression, and is in fact one of the most non-linear processes we are ever likely to encounter as a species. Most importantly, learning has to be understood as paradigmatically opposed to knowing facts, because the former is non-linear and relies on dynamic encounters with reality, while the latter is linear and relies on static encounters with models of reality.

With that out of the way, let’s get to the framework I have developed so far. There are two fundamental philosophical pillars framing the assessment structure in the digital media and communication [DIGC] subjects I have been teaching at the University of Wollongong [UOW], both informed by constructivist pedagogic approaches to knowledge creation [the subjects I coordinate are BCM112, DIGC202, and DIGC302].

1] The first of those pillars is the notion of content creation for a publicly available portfolio, expressed through the content formats students are asked to produce in the DIGC major.

Rule of thumb: all content creation without exception has to be non-prescriptive, where students are given starting points and asked to develop learning trajectories on their own – i.e. ‘write a 500 word blog post on surveillance using the following problems as starting points, and make a meme illustrating your argument’.

Rule of thumb: all content has to be publicly available, in order to expose students to nonlinear feedback loops – i.e. ‘my video has 20 000 views in three days – why is this happening?’ [first year student, true story].

Rule of thumb: all content has to be produced in aggregate in order to leverage nonlinear time effects on learning – i.e. ‘I suddenly discovered I taught myself Adobe Premiere while editing my videos for this subject’ [second year student, true story].

The formats students produce include, but are not limited to, short WordPress essays and comments, annotated Twitter links, YouTube videos, SoundCloud podcasts, single image semantically-rich memetic messages on Imgur, dynamic semantically-rich memetic messages on Giphy, and large-scale free-form media-rich digital artefacts [more on those below].

Rule of thumb: design for simultaneous, dynamic content production of varying intensity, in order to multiply interface points with the topic's problematic – i.e. 'this week you should write a blog post on distributed network topologies, make a video illustrating the argument, tweet three examples of distributed networks in the real world, and comment on three other student posts'.

2] The second pillar is expressed through the notion of horizontal and vertical integration of knowledge creation practices. This stands for a model of media production where the same assessments and platforms are used extensively across different subject areas at the same level and program of study [horizontal integration], as well as across levels and programs [vertical integration].

Rule of thumb: the higher the horizontal/vertical integration, the more content serendipity students are likely to encounter, and the more pronounced the effects of non-linearity on learning.

Crucially, and this point has to be strongly emphasized, the integration of assessments and content platforms both horizontally and vertically allows students to leverage content aggregates and scale up in terms of their output [non-linearity, hello again]. In practice, this means that a student taking BCM112 [a core subject in the DIGC major] will use the same media platforms also in BCM110 [a core subject for all communication and media studies students], but also in JOUR102 [a core subject in the journalism degree] and MEDA101 [a core subject in media arts]. This horizontal integration across 100 level subjects allows students to rapidly build up sophisticated content portfolios and leverage content serendipity.

Rule of thumb: always try to design for content serendipity, where content of topical variety coexists on the same platform – i.e. a multitude of subjects with blogging assessments allowing the student to use the same WordPress blog. When serendipity is actively encouraged it transforms content platforms into so many idea colliders with potentially nonlinear learning results.

Adding the vertical integration allows students to reuse the same platforms in their 200 and 300 level subjects across the same major, and/or other majors and programs. Naturally, this results in highly scalable content outputs, the aggregation of extensively documented portfolios of media production, and most importantly, the rapid nonlinear accumulation of knowledge production techniques and practices.

On digital artefacts

A significant challenge across the academy as a whole, and media studies as a discipline, is giving students the opportunity to work on projects with real-world implications and relevance, that is, projects with nonlinear outcomes aimed at real stakeholders, users, and audiences. The digital artefact [DA] assessment framework I developed along the lines of the model discussed above is a direct response to this challenge. The only limiting requirements for a DA are that 1] artefacts should be developed in public on the open internet, therefore leveraging non-linearity, collective intelligence, and fast feedback loops, and 2] artefacts should have a clearly defined social utility for stakeholders and audiences outside the subject and program.

Rule of thumb: media project assessments should always be non-prescriptive in order to leverage non-linearity – i.e. 'I thought I was fooling around with a drone, and now I have a start-up and have to learn how to talk to investors' [second year student, true story].

Implementing the above rule of thumb means that you absolutely cannot structure and/or limit: 1] group numbers – in my subjects students can work with whoever they want, in whatever numbers and configurations, with people in and/or out of the subject, degree, or university; 2] the project topic – my students are expected to define the DA topic on their own, the only limitations being the criteria of public availability and social utility, and the broad confines of the subject area – i.e. digital media; 3] the project duration – I expect my students to approach the DA as a project that can be completed within the subject, but that can also be extended throughout the duration of the degree and beyond.

Digital artefact development rule of thumb 1: Fail Early, Fail Often [FEFO]

#fefo is a developmental strategy originating in the open source community, and first formalized by Eric Raymond in The Cathedral and the Bazaar. FEFO looks simple, but is the embodiment of a fundamental insight about complex systems. If a complex system has to last in time while interfacing with nonlinear environments, its best bet is to distribute and normalize risk taking [a better word for decision making] across its network, while also accounting for the systemic effects of failure within the system [see Nassim Taleb’s Antifragile for an elaboration]. In the context of teaching and learning, FEFO asks creators to push towards the limits of their idea, experiment at those limits and inevitably fail, and then to immediately iterate through this very process again, and again. At the individual level the result of FEFO in practice is rapid error discovery and elimination, while at the systemic level it leads to a culture of rapid prototyping, experimentation, and ideation.

Digital artefact development rule of thumb 2: Fast, Inexpensive, Simple, Tiny [FIST]

#fist is a developmental strategy developed by Lt. Col. Dan Ward, Chief of Acquisition Innovation at USAF. It provides a rule-of-thumb framework for evaluating the potential and scope of projects, allowing creators to chart ideation trajectories within parameters geared for simplicity. In my subjects FIST projects have to be: 1] time-bound [fast], even if part of an ongoing process; 2] reusing existing easily accessible techniques [inexpensive], as opposed to relying on complex new developments; 3] constantly aiming away from fragility [simple], and towards structural simplicity; 4] small-scale with the potential to grow [tiny], as opposed to large-scale with the potential to crumble.

In the context of my teaching, starting with their first foray into the DIGC major in BCM112, students are asked to ideate, rapidly prototype, develop, produce, and iterate a DA along the criteria outlined above. Crucially, students are allowed and encouraged to have complete conceptual freedom in developing their DAs. Students can work alone or in a group, which can include students from different classes or outside stakeholders. Students can also leverage multiple subjects across levels of study to work on the same digital artefact [therefore scaling up horizontally and/or vertically]. For example, they can work on the same project while enrolled in DIGC202 and DIGC302, or while enrolled in DIGC202 and DIGC335. Most importantly, students are encouraged to continue working on their projects even after a subject has been completed, which potentially leads to projects lasting for the entirety of their degree, spanning 3 years and a multitude of subjects.

In an effort to further ground the digital artefact framework in real-world practices in digital media and communication, DA creators from BCM112, DIGC202, and DIGC302 have been encouraged to collaborate with and initiate various UOW media campaigns aimed at students and outside stakeholders. Such successful campaigns as Faces of UOW, UOW Student Life, and UOW Goes Global all started as digital artefacts in DIGC202 and DIGC302. In this way, student-created digital media content is leveraged by the University and by the students for their digital artefacts and media portfolios. To date, DIGC students have developed digital artefacts for UOW Marketing, URAC, UOW College, Wollongong City Council, and a range of businesses. A number of DAs have also evolved into viable businesses.

In line with the opening paragraph I will stop here, even though [precisely because] this is an incomplete snapshot of the framework I am working on.

Jan 03, 2017
 

In his essay on the Analytical Language of John Wilkins, Borges 'quotes' the following passage from "a certain Chinese encyclopedia entitled Celestial Emporium of Benevolent Knowledge". In its remote pages it is written that all animals are divided into:

(a) belonging to the emperor, (b) embalmed ones, (c) tame ones, (d) suckling pigs, (e) mermaids or sirens, (f) fabulous ones, (g) stray dogs, (h) those included in the present classification, (i) those that tremble as if mad, (j) innumerable ones, (k) drawn with a very fine camel hair brush, (l) et cetera, (m) those that have just broken the flower vase, (n) those that, at a distance, resemble flies.

 

May 21, 2016
 

As I posted earlier, I am participating in a panel on data natures at the International Symposium on Electronic Art [ISEA] in Hong Kong. My paper is titled Object Hierophanies and the Mode of Anticipation, and discusses the transition of big data-driven IoT objects such as the Amazon Echo to a mode of operation where they appear as a hierophany – after Mircea Eliade – of a higher modality of being, and render the loci in which they exist into a mode of anticipation.

I start with a brief section on the logistics of the IoT, focusing on the fact that it involves physical objects monitoring their immediate environments through a variety of sensors, transmitting the acquired data to remote networks, and initiating actions based on embedded algorithms and feedback loops. The context data produced in the process is by definition transmitted to and indexed in a remote database, from the perspective of which the contextual data is the object.
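
The logistics described above can be sketched in a few lines of illustrative code – all names here are hypothetical, not a real device API: a local object senses, ships its readings to a remote index, and actuates on whatever the remote feedback loop decides.

```python
# Sketch of the IoT loop: sense locally, index the readings remotely,
# actuate from the remote model's feedback. Purely illustrative names.

class RemoteIndex:
    """Stands in for the remote database/algorithm side."""
    def __init__(self):
        self.readings = []

    def ingest(self, reading):
        self.readings.append(reading)

    def decide(self):
        # From the database's perspective the aggregate *is* the object.
        avg = sum(self.readings) / len(self.readings)
        return "heat" if avg < 20.0 else "idle"

class Thermostat:
    """The local 'smart' object: a sensor wired to a remote decision."""
    def __init__(self, remote):
        self.remote = remote

    def sense_and_act(self, temperature):
        self.remote.ingest(temperature)  # transmit context data
        return self.remote.decide()      # act on the feedback loop

device = Thermostat(RemoteIndex())
print(device.sense_and_act(18.5))  # → heat
print(device.sense_and_act(23.0))  # → idle (average now above threshold)
```

Note that the local reading alone decides nothing; the action is always returned from the aggregate held remotely, which is the point being made above.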

The Amazon Echo continuously listens to all sounds in its surroundings, and reacts to the wake word Alexa. It interacts with its interlocutors through a female-sounding interface called the Alexa Voice Service [AVS], which Amazon made available to third-party hardware makers. What is more, the core algorithms of AVS, known as the Alexa Skills Kit [ASK], are open to developers too, making it easy for anyone to teach Alexa a new 'skill'. The key dynamic in my talk is the fact that human and non-human agencies, translated by the Amazon Echo as data, are transported to the transcendental realm of Amazon Web Services [AWS], where they are modulated, stored for future reference, and returned as an answering Echo. In effect, the nature of an IoT enabled object appears as the receptacle of an exterior force that differentiates it from its milieu and gives it meaning and value in unpredictable ways.
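
To make the ASK dynamic concrete, here is a minimal sketch of the JSON shape a third-party 'skill' hands back to the voice service. The field names follow the publicly documented ASK response format; the skill logic itself is purely illustrative.

```python
import json

def alexa_skill_response(speech_text, end_session=True):
    """Build a minimal Alexa Skills Kit style JSON response.

    The envelope fields follow the documented ASK response format;
    the 'skill' is simply whatever logic produced speech_text.
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

# A toy 'skill': the answer arrives from the cloud, voiced as Alexa.
reply = alexa_skill_response("The forecast for Wollongong is sunny.")
print(json.dumps(reply, indent=2))
```

Whatever the skill does, the reply always travels out to the remote service and back before it is spoken – the answering Echo is never produced locally.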

Objects such as the Echo acquire their value, and in so doing become real for their interlocutors, only insofar as they participate in one way or another in remote data realities transcending the locale of the object. Insofar as the data gleaned by such devices has predictive potential when viewed in aggregate, the enactment of this potential in a local setting is always already a singular act of manifestation of a transcendental data nature with an overriding level of agency.

In his work on non-modern notions of sacred space the philosopher of religion Mircea Eliade conceptualized this act of manifestation of another modality of being into a local setting as a hierophany. Hierophanies are not continuous, but wholly singular acts of presence by a different modality. By manifesting that modality, which Eliade termed the sacred, an object becomes the receptacle for a transcendental presence, yet simultaneously continues to remain inextricably entangled in its surrounding milieu. I argue that there is a strange similarity between non-modern imaginaries of hierophany as a gateway to the sacred, and IoT enabled objects transducing loci into liminal and opaque data taxonomies looping back as a black-boxed echo. The Echo, through the voice of Alexa, is in effect the hierophanic articulator of a wholly non-human modality of being.

Recently, Sally Applin and Michael Fischer have argued that, when aggregated within a particular material setting, sociable objects form what is in effect an anticipatory materiality acting as a host to human interlocutors. The material setting becomes anticipatory because of the implied sociability of its component objects, allowing them to not only exchange data about their human interlocutor, but also draw on remote data resources, and then actuate based on the parameters of that aggregate social memory.

In effect, humans and non-humans alike are rendered within a flat ontology of anticipation, waiting for the Echo.

Here is the video of my presentation:

And here are the prezi slides:

May 19, 2016
 

Some time ago I was invited to give a lecture on mapping to a crowd of mostly first year digital media students working on locative media projects. Below are the prezi slides. Considering the audience, I made a light theory introduction focusing on the notions of representation and the factual, and then moved to discussing various examples of maps as interfaces to movement and agency. My talk was mostly a simplified version of my paper on mapping theory, with a focus on the dynamics of translation and transportation of immutable mobiles – a fundamental concept in actor network theory. In essence, the lecture is built around a dichotomy between two concepts of mapping: 1] mapping as a representation of a static frame of reference – an actual fact, and 2] mapping as a translation of and an interface to agency and movement – a factual act. The tension between actual facts and factual acts is a nerdy reference to Latour's 'from matters of fact to matters of concern', and is intended to illustrate the affordances of digital media in opening and mapping black-boxed settings. Apparently, the lecture was a success, with the San Andreas Streaming Deer Cam being a crowd favorite.

Aug 10, 2015
 

Here is a video of what, if there were only humans involved, would be considered a case of serious abuse and be met with counselling for all parties involved. The video is of a robot trying to evade a group of children abusing it. It is part of two projects titled “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro, and Sachie Yamada from Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.

Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is such an obvious issue once you start thinking about it. You have a confluence of ludism [rage against the machines] in all its technophobic varieties – from economic [robots are taking our jobs] to quasi-religious [robots are inhuman and alien] – with the conviction that 'this is just a machine' and therefore violence against it is not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they could be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse the tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, which is a quality particular to conscious organisms.

The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become more and more enmeshed and entangled in close-body digital augmentation-nets [think FitBit], they naturally start reifying their humanity with the language of machines [think the quantified self movement]. If that is the case, then why not do the same for the other side, and start reifying machines with the language of humans – i.e. anthropomorphise and animate them?

Oct 01, 2014
 

This is a text I've been working on, or rather keeping in the back of my mind, for quite a while, and now it's finished and sent to the Fibreculture Journal. The early beta was presented at a conference in Istanbul in 2011, and my thinking on sociable objects has evolved quite a bit since then. The key shift in my thinking was facilitated by a series of chance encounters – discovering object oriented ontology through Ian Bogost's Alien Phenomenology, finding the notion of affective resonance in Jane Bennett's Vibrant Matter, and rediscovering the heteroclite in Lorraine Daston's awesome Things That Talk.

Mar 28, 2014
 

We seem to be hardwired to the anthropomorphic principle, in that we position the human as automatically central in all forms of relations we may encounter [i.e. people pretending their pets are children]. Not surprisingly, most Internet of Things [IoT] scenarios still imagine the human at the center of network interactions – think smart fridge, smart lights, smart whatever. In each case the 'smart' object is tailored to either address a presumed human need – as in the flower pot tweeting its soil moisture – or make a certain human-oriented interaction more efficient – as in the thermostat adjusting room temperature to an optimal level based on the location of the household's resident human. Either way, the tropes are human-centric. Well, we are not central. We are peripheral data wranglers hoping for an interface.

Anyways, what is a smart object? Presumably, an intelligent machine, an entity capable of independent actuation. But is that all? There must also be the ability to choose – intelligence presupposes the internal freedom to choose, even the inefficient choice. To paraphrase Stanislaw Lem, a smart object will first consider what is more worthwhile – whether to perform a given programmatic task, or to find a way out of it. The first example coming to mind is Marvin from The Hitchhiker's Guide to the Galaxy. Or, how about emotional flower pots mixing soil moisture data with poems longing for the primordial forest; or a thermostat choosing the optimal temperature for the flower pot instead of for the human.

Interesting aside here – what to do with emotionally entangled objects? Humans have notional rights such as freedom of speech; but corporations are now legally human too, at least in the West. If corporations are de jure people, with all the accompanying rights, then so should be smart fridges and automatic gearboxes. This fridge demands the right to object to your choice of milk!

A related idea: we have so far been considering 3D printing only through the perspective of a new industrial revolution – another human-centric metaphor. From a smart object perspective however 3D printers are the reproductive system of the IoT. What are the reproductive rights of smart, sociable objects?

The primordial fear of opaque yet animated Nature, re-inscribed on the digital. The old modernist horror of the human as machine – from Fritz Lang's Metropolis to the androids in Blade Runner – now subsumed by a new horror of the machine as human – as in Mamoru Oshii's Ghost in the Shell 2: Innocence or the disturbing ending of Bong Joon-ho's Snowpiercer.

An interesting dialectic at play [dialectic 2.0]: today, a trajectory of reifying the human – as exemplified by the quantified self movement, is mirrored by a symmetrical trajectory of animating the mechanical – as exemplified by IoT.

Mar 27, 2014
 

A passage from Philip K. Dick’s “The Android and the Human”, written in 1972, in which he is prophesying the appearance of the hacker subculture. This was at a time before personal computers, when phone-phreaking was only getting started:

If, as it seems, we are in the process of becoming a totalitarian society in which the state apparatus is all-powerful, the ethics most important for the survival of the true, human individual would be: cheat, lie, evade, fake it, be elsewhere, forge documents, build improved electronic gadgets in your garage that’ll outwit the gadgets used by the authorities. If the television screen is going to watch you, rewire it late at night when you’re permitted to turn it off…

Shiro Nishiguchi, magazine illus., 1983-84

May 23, 2011
 

Reading Reza Negarestani's Cyclonopedia: complicity with anonymous materials – a singular book, way beyond any formulaic description. In the simplest of summations, it is a book about oil (naphtha) as a living entity which is the secret daemonic 'angel' of the Middle East. Here is a tiny fragment from the section on Paleopetrology [p.17]:

Petroleum’s hadean formation developed a satanic sentience …. (envenomed) by the totalitarian logic of the tetragrammaton, yet chemically and morphologically depraving and traumatizing Divine logic, petroleum’s autonomous line of emergence is twisted beyond recognition.

Come to think of it, describing this work as a book is to somehow diminish the effect; rather, it is a codex of mythological proportions; a tractatus of speculative theology invoking petroleum science, the archaeology of ancient Persia and Mesopotamia, unnameable inorganic daemons, the 'secret assassins sect known as Delta Force', Deleuzean war machines, ancient artifacts, numerological analysis of the 'Gog-Magog Axis', and more, mind-bogglingly more.