Mar 28, 2014
 

We seem to be hardwired to the anthropomorphic principle, in that we position the human as automatically central in all forms of relations we may encounter [e.g. people treating their pets as children]. Not surprisingly, most Internet of Things [IoT] scenarios still imagine the human at the center of network interactions – think smart fridge, smart lights, smart whatever. In each case the ‘smart’ object is tailored either to address a presumed human need – as in the flower pot tweeting its soil moisture – or to make a certain human-oriented interaction more efficient – as in the thermostat adjusting room temperature to an optimal level based on the location of the household’s resident human. Either way, the tropes are human-centric. Well, we are not central. We are peripheral data wranglers hoping for an interface.

Anyways, what is a smart object? Presumably, an intelligent machine, an entity capable of independent actuation. But is that all? There must also be the ability to choose – intelligence presupposes the internal freedom to choose, even the inefficient choice. To paraphrase Stanislaw Lem, a smart object will first consider what is more worthwhile – whether to perform a given programmatic task, or to find a way out of it. The first example that comes to mind is Marvin from the Hitchhiker’s Guide to the Galaxy. Or, how about emotional flower pots mixing soil moisture data with poems longing for the primordial forest; or a thermostat choosing the optimal temperature for the flower pot instead of for the human.

Interesting aside here – what to do with emotionally entangled objects? Humans have notional rights, such as freedom of speech; but corporations are now legally human too, at least in the West. If corporations are de jure people, with all the accompanying rights, then so should be smart fridges and automatic gearboxes. This fridge demands the right to object to your choice of milk!

A related idea: we have so far been considering 3D printing only through the perspective of a new industrial revolution – another human-centric metaphor. From a smart object perspective however 3D printers are the reproductive system of the IoT. What are the reproductive rights of smart, sociable objects?

The primordial fear of opaque yet animated Nature, re-inscribed on the digital. The old modernist horror of the human as machine – from Fritz Lang’s Metropolis to the androids in Blade Runner – is now subsumed by a new horror of the machine as human, as in Mamoru Oshii’s Ghost in the Shell 2: Innocence or the disturbing ending of Bong Joon-ho’s Snowpiercer.

An interesting dialectic at play [dialectic 2.0]: today, a trajectory of reifying the human – as exemplified by the quantified-self movement – is mirrored by a symmetrical trajectory of animating the mechanical – as exemplified by the IoT.

May 17, 2013
 

Here is a prezi from a lecture I gave at a postgraduate seminar on Actor Network Theory (ANT) and the Internet of Things (IoT). The central concept of the talk, however, was the notion of the heteroclite, and why ANT methodologies for world-encountering are useful when tangling with heteroclite objects. I use the heteroclite in the Baconian sense of a monstrous deviation which, by its very entry on stage, creates collective entanglements demanding the mobilization of all sorts of dormant or obfuscated networks. For an example of a heteroclite currently being performed, think of the Google car and how it deviates from the driver as an actor. I find the heteroclite a fascinating metaphor for dealing with hybrid objects.

Sep 18, 2010
 

All things considered, Julian Bleecker’s Why Things Matter is probably the closest we have to a Manifesto for the internet of things. Bruce Sterling’s Shaping Things is also in that category, but it is a longer text and lacks the same theoretical punch. Besides, any theoretical piece on networked objects has to first deal with the modern separation of the world into the nature/culture dichotomy, and Bleecker does just that with his use of Bruno Latour’s We Have Never Been Modern.

Why Things Matter

Sep 07, 2010
 

As I discussed here, and here, Google seems to be seriously planning for, and working towards, a prime position in cloud computing (web 3.0?). A couple of interesting links relate to that. First comes the now infamous interview Eric Schmidt, Google’s CEO, gave to the WSJ. In that interview he made a number of comments indicating where Google are looking at the moment, but for some reason all it was remembered for is his quip that, because of privacy issues with social networks, in the future kids may end up having to change their names when they reach adulthood. OK, this is odd, and it came out of nowhere, but surely there are more interesting bits in what he had to say. Much more interesting, for example, is his hint that Google are seriously working on developing semantic algorithms:

“As you go from the search box [to the next phase of Google], you really want to go from syntax to semantics, from what you typed to what you meant. And that’s basically the role of [Artificial Intelligence].  I think we will be the world leader in that for a long time.”

This statement has to be read in the context of Google’s move to the cloud. In that paradigm the semantic depth of your search query will be provided by your entire cloud footprint. This is quite literally an Artificial Intelligence in full operational mode. As William Gibson writes in a recent article in the New York Times,

“We never imagined that artificial intelligence would be like this. We imagined discrete entities. Genies.”

We imagined HAL, and Wintermute, but instead of managing an ultimately controllable anthropomorphic machine we have to deal with a distributed mind that is built of…us. An ambient socio-digital system.

Aug 16, 2010
 

In part 1 I mentioned Google’s focus on low-latency sensors and massively redundant cloud data centers. Google is not the only company in the race though, and probably not the most advanced down that road. Ericsson – the world’s largest mobile equipment vendor – is seriously planning to operate 50 billion net-connected devices by 2020. Only a small fraction of these will be what we consider ‘devices’ – mobile phones, laptops, Kindles. The enormous majority will be everyday objects such as fridges (a strategic object due to its central role in food consumption), cars (see the new Audi), clothes – basically everything potentially worth connecting. This implies an explosion in data traffic.
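To give that “explosion” a rough sense of scale, here is a back-of-envelope sketch. The device counts echo the 50-billion figure above, but the split between device classes and the per-device traffic rates are purely illustrative assumptions of mine, not Ericsson’s numbers:

```python
# Back-of-envelope estimate of aggregate upstream IoT traffic.
# All per-device rates below are illustrative assumptions.

DEVICES = 50_000_000_000  # the 50 billion figure cited above

# Hypothetical split: (device count, assumed upstream bytes/day per device)
rates = {
    "phones_laptops": (2_000_000_000, 500_000_000),   # rich media devices
    "vehicles":       (1_000_000_000, 50_000_000),    # telemetry, navigation
    "appliances":     (47_000_000_000, 10_000),       # fridges, meters, tags
}

total_bytes_per_day = sum(count * rate for count, rate in rates.values())
petabytes_per_day = total_bytes_per_day / 1e15

# Roughly a thousand petabytes per day under these assumptions –
# and note that the 47 billion trickle-traffic objects barely register
# in volume; their value to carriers is the billing relationship.
print(f"~{petabytes_per_day:,.0f} PB/day")
```

The interesting wrinkle the arithmetic exposes: the long tail of everyday objects adds almost nothing to traffic volume, which is exactly why carriers can connect them cheaply while still charging for each connection.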

As Stacey Higginbotham writes over at Gigaom:

So even as data revenue and traffic rises, carriers face two key challenges: One, the handset market is saturated; and two, users on smartphones are boosting their consumption of data at a far faster rate than carriers are boosting their data revenue. The answer to these challenges is selling data plans for your car. Your kitchen. And even your electric meter.

In other words, it is in the interest of mobile providers to extend the network to as many devices as possible so that they can start profiting from the long tail. As the competition in mobile connectivity is fierce and at cut-throat margins, the first company to start mass-connecting (and charging) daily objects is going to make a killing. Hence Google’s focus on sensors and data centers.

This presentation by wireless analyst Chetan Sharma outlines the motivation for mobile providers to bring about the internet of things as quickly as possible.

Aug 14, 2010
 

I just watched this interesting interview with Hugo Barra, director of product management at Google (G), talking about the convergence between mobile net devices and cloud computing. He mainly answers questions about G’s plans for the next 2-5 years, but a couple of long-term ideas seep through.

First, they are thinking sensors and massively redundant cloud data centers, and they are thinking of them as part of a constant feedback process for which low latency is the key. In other words, your phone’s camera and microphone talk directly to the G data cloud at a latency of under 1 second – whatever you film on your camera you can voice-recall on any device within 1 second flat. The implications are huge, because G is effectively eliminating the need for local data storage.

Second, to get there, they are rolling out real-time voice search by the end of next year. Real-time voice search allows you to query the cloud in, well, under 1 second.

Third, they are thinking of this whole process as ‘computer vision’ – a naming tactic which might seem like plain semantics, but nevertheless reveals a lot. It reveals that G sees stationary computers as blind, that for them mobile computers are first and foremost sensors, and that sensors start truly seeing only when there is low-latency feedback between them and the cloud. How so? The key, of course, is in the content – once storage, processing power and speed are taken care of by the cloud, the clients – that is, us – start operating at a meta level of content which is quite hard to even fully conceptualize at the moment (Barra admits he has no idea where this will go in 5 years). The possibilities are orders of magnitude beyond what we are currently doing with computers and the net.
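The capture-to-voice-recall loop Barra gestures at can be caricatured in a few lines. Everything here – the `CloudIndex` class, tag-based recall, the in-memory dict standing in for the data cloud – is a hypothetical sketch of mine, not Google’s actual pipeline:

```python
import time

class CloudIndex:
    """Toy stand-in for a low-latency cloud store keyed by spoken tags."""
    def __init__(self):
        self._store = {}

    def ingest(self, tags, blob):
        # A phone would upload capture metadata the moment it records.
        for tag in tags:
            self._store.setdefault(tag, []).append(blob)

    def voice_query(self, phrase):
        # Real voice search would transcribe audio first; we take text.
        return self._store.get(phrase.lower(), [])

cloud = CloudIndex()
start = time.monotonic()
cloud.ingest(["sunset", "beach"], "video_0142.mp4")
results = cloud.voice_query("Sunset")
latency = time.monotonic() - start

# The round trip here is trivially fast; the real engineering problem
# is keeping it under the 1-second budget across a wide-area network.
print(results, f"round trip in {latency:.4f}s")
```

The point of the sketch is the shape of the loop, not the implementation: capture, immediate cloud ingestion, and sub-second recall from any client – which is precisely what makes local storage redundant.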

A related video, though with a more visionary perspective, is this talk by Kevin Kelly on the next 5000 days of the net. I show this to all my media students, though I don’t think any of them truly grasp what all-in-the-cloud implies. The internet of things. More on this tomorrow.

Aug 12, 2010
 

Bruce Schneier has posted over at his blog the following draft of a social networking data taxonomy:

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.

Why is this important? Because in order to develop ways to control the data we distribute in the cloud, we first need to classify precisely the different types of data and their relational positions within our digital footprint and the surrounding ecology. Disclosed data has a different value from behavioral or derived data, and most people will likely value their individual content, such as pictures and posts, much more than the aggregated patterns sucked out of their footprint by a social network site’s algorithms. Much to think about here.