
Tag: internet of things

Do objects dream of an internet of things? [abstract]

I will be presenting a paper at the upcoming International Association for Media and Communication Research (IAMCR) conference in Istanbul which is under the general theme of Cities, Creativity, Connectivity.

Below is the abstract for my paper titled:

Do objects dream of an internet of things: re-locating the social in ambient socio-digital systems

This paper engages the notion of an internet of things and its implications for conceptualisations of the social, as exemplified by issues such as network identity, privacy, and surveillance. The internet of things can be roughly defined as object networks linking physical and virtual objects into an assemblage with ambient data-capture capabilities. The metamorphosis of the human-centred internet into an internet of things entails the emergence of socio-digital assemblages, with ambient connectivity ‘gelling’ the practices of humans and nonhumans into an augmented, hybrid space. This hybrid space offers two sets of problems – from the perspective of its human users it questions fundamental notions of privacy and identity, while from the perspective of objects it demands a yet-to-be-developed taxonomy of hitherto black-boxed data.

The paper argues that this problematic is fundamentally a function of a social projection ill-equipped to manoeuvre in hybrid space, and suggests an examination of mobile socio-digital assemblages with a conceptual apparatus borrowed from actor-network theory (ANT) and the work of Gabriel Tarde. Key to this reasoning is the specific delineation of the social emerging from these approaches. For ANT, distinctions between entities appear as an effect of the relations between them, while for Tarde the elementary social fact consists of the forms of relations through which difference is produced. The main strength of this conceptual apparatus lies in its capacity to encounter the hybrid complexity of socio-digital assemblages without assigning a priori subject-object relationalities – spatial relations are performed simultaneously with the construction of (hybrid) objects. The paper’s argument is illustrated with case-studies of the internet of things.

The paper suggests that while the internet of things profoundly undermines human-centric projections of network sociality, it also makes the semantics of circulating objects readable for, and visible to, humans. As projects such as talesofthings, itizen, and pachube already demonstrate, making object-semantics explicit and mobile places their human interlocutors in hitherto unknown terrain. The enfolding of objects into socio-digital assemblages portends a rearrangement of the rules of occupancy and patterns of mobility within the physical world, because when objects are enrolled as explicit actors their circulations become explicit too.

Examining this research problematic can provide a theoretical understanding of the arguably fundamental shifts in sociality and subjectivity entailed by the proliferation of ambient socio-digital assemblages. Such an understanding is crucial if we are to formulate a stable and coherent approach to the challenges posed by an internet of things.

Web 2.0 five years on

I recently discovered this white paper by Tim O’Reilly and John Battelle – two of the major gurus of web 2.0 and search respectively. Plenty of interesting information and ideas to chew on, but what really caught my attention is their discussion of the notions of information shadows and deep context learning by database algorithms. It is fairly banal to observe that each of us consists of multiple identity-layers, but how do you teach a database to make the meta-connections between these disparate pieces of data? And what happens when you teach it that? I need to think more about this but it seems an argument could be made that at a certain meta-level the subject-object divide becomes inconsequential; in other words, the difference between a human and an object is a function of their ‘entanglement networks’.

O’Reilly & Battelle – Web 2.0 Five Years on [White Paper]

Internet of Things links

The Internet of Things is slowly but surely becoming an unevenly distributed reality. Early precursors – a number of sites dedicated to creating accessible crowdsourced data-clouds for everyday objects. After being created, each data-cloud is accessible through scanning a printable tag which can be downloaded from the site. The information I upload into the cloud together with the image/video can be as trivial as ‘this is my writing desk’, or as arcane as the travails of a family heirloom. It doesn’t matter – most of the data will be useless, but the potential of object socialization is immense because of (a) the ability to create semantic depth where until now there was none, (b) the enfolding of the rich dynamics of space and time in objects, and (c) the merging of physical reality with the net.
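The mechanism these sites share – mint a record for an object, encode its ID in a printable tag, let anyone who scans the tag read and extend the story – can be sketched in a few lines. This is a hypothetical schema; Tales of Things, Itizen and StickyBits each use their own formats and tag encodings, and the `example.org` URL is a placeholder.

```python
import uuid

def create_object_record(name, story):
    """Create a minimal 'data-cloud' record for an everyday object.

    Hypothetical schema -- the actual sites each define their own fields.
    """
    return {
        "id": uuid.uuid4().hex,  # unique ID, typically encoded in a QR-style tag
        "name": name,
        "story": story,
        "media": [],             # photos/videos attached later by anyone who scans it
    }

def tag_payload(record, base_url="https://example.org/o/"):
    # The printable tag simply encodes a URL that resolves to the record,
    # so scanning the physical tag opens the object's data-cloud.
    return base_url + record["id"]

desk = create_object_record("writing desk", "this is my writing desk")
print(tag_payload(desk))
```

The point of the sketch is how little is needed: an ID, a resolvable URL, and an open-ended story field are enough to give a mute object semantic depth.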

Object stories: Tales of Things, Itizen, StickyBits

Architecture intermediaries: Pachube, Mbed

While object stories are the easiest path of entry and therefore probably where the majority of participation will occur for now, a project like Mbed has massive potential, as it may ultimately do for the internet of things what Blogspot and its like did for self-expression.

The Internet of Things Manifesto

All things considered, Julian Bleecker’s Why Things Matter is probably the closest we have to a manifesto for the internet of things. Bruce Sterling’s Shaping Things is also in that category, but it’s a longer text and it lacks the theoretical punch. Besides, any theoretical piece on networked objects has to first deal with the modern separation of the world into the nature/culture dichotomy, and Bleecker does just that with his use of Bruno Latour’s We Have Never Been Modern.

Why Things Matter

The Google cloud [part 2]

In part 1 I mentioned Google’s focus on low-latency sensors and massively redundant cloud data centers. Google is not the only company in the race though, and probably not the most advanced down that road. Ericsson – the world’s largest mobile equipment vendor – is seriously planning to operate 50 billion net-connected devices by 2020. Only a small fraction of these will be what we consider ‘devices’ – mobile phones, laptops, Kindles. The enormous majority will be everyday objects such as fridges (strategic objects due to their central role in food consumption), cars (see the new Audi), clothes – basically everything potentially worth connecting. This implies an explosion in data traffic.

As Stacey Higginbotham writes over at Gigaom:

So even as data revenue and traffic rises, carriers face two key challenges: One, the handset market is saturated; and two, users on smartphones are boosting their consumption of data at a far faster rate than carriers are boosting their data revenue. The answer to these challenges is selling data plans for your car. Your kitchen. And even your electric meter.

In other words, it is in the interest of mobile providers to extend the network to as many devices as possible so that they can start profiting from the long tail. As the competition in mobile connectivity is fierce and at cut-throat margins, the first company to start mass-connecting (and charging) daily objects is going to make a killing. Hence Google’s focus on sensors and data centers.
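The long-tail logic is easy to see with a back-of-the-envelope calculation. All numbers below are illustrative assumptions, not figures from the post, Gigaom, or Ericsson: even a tiny per-object fee, multiplied across tens of billions of connected objects, rivals the revenue of the saturated handset market.

```python
# Illustrative back-of-the-envelope numbers -- all assumed, not sourced.
handsets = 5e9          # a saturated handset market
objects = 45e9          # everyday objects: fridges, cars, meters, clothes...
handset_arpu = 20.0     # assumed monthly data revenue per handset, USD
object_arpu = 0.50      # assumed tiny monthly fee per connected object, USD

handset_revenue = handsets * handset_arpu   # 100 billion USD / month
object_revenue = objects * object_arpu      # 22.5 billion USD / month

print(f"handsets: ${handset_revenue / 1e9:.0f}B/month")
print(f"objects:  ${object_revenue / 1e9:.1f}B/month")
```

At fifty cents per object, the long tail adds a revenue stream on the order of a fifth of the entire handset business – which is why the first carrier to mass-connect daily objects stands to make a killing.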

This presentation by wireless analyst Chetan Sharma outlines the motivation for mobile providers to bring about the internet of things as quickly as possible.

The Google cloud

I just watched this interesting interview with Hugo Barra, director of product management at Google (G), talking about the convergence between mobile net devices and cloud computing. He mainly answers questions on G’s plans for the next 2–5 years, but a couple of long-term ideas seep through.

First, they are thinking sensors and massively redundant cloud data centers, and they are thinking of them as part of a constant feedback process for which low latency is the key. In other words, your phone’s camera and microphone talk directly to the G data-cloud at a latency of under one second – whatever you film on your camera you can voice-recall on any device within one second flat. The implications are huge, because G is effectively eliminating the need for local data storage.

Second, to get there, they are rolling out real-time voice search by the end of next year. Real-time voice search allows you to query the cloud in, well, under a second.

Third, they are thinking of this whole process as ‘computer vision’ – a naming tactic which might seem like plain semantics, but which nevertheless reveals a lot. It reveals that G sees stationary computers as blind, that for them mobile computers are first and foremost sensors, and that sensors start truly seeing only when there is low-latency feedback between them and the cloud. How so? The key, of course, is in the content – once storage, processing power and speed are taken care of by the cloud, the clients – that is, us – start operating at a meta-level of content which is quite hard to even fully conceptualize at the moment (Barra admits he has no idea where this will go in five years). The possibilities are orders of magnitude beyond what we are currently doing with computers and the net.
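The capture-index-recall loop described in the interview can be mocked up as a toy. Everything below is a stand-in: `CLOUD_INDEX` is a plain dict playing the role of the data-cloud, and the one-second figure from the interview becomes an explicit latency budget on recall.

```python
import time

CLOUD_INDEX = {}  # stands in for the cloud-side index; a mock, not Google's API

def capture_and_upload(device_id, frame, labels):
    # Sensor data is pushed to the cloud and indexed as soon as it is captured,
    # so no local storage is needed.
    for label in labels:
        CLOUD_INDEX.setdefault(label, []).append((device_id, frame))

def voice_query(label, budget_s=1.0):
    # Recall from any device; the lookup itself must fit the latency budget
    # (network time is ignored in this toy -- only the index lookup is timed).
    start = time.monotonic()
    hits = CLOUD_INDEX.get(label, [])
    elapsed = time.monotonic() - start
    assert elapsed < budget_s, "over latency budget"
    return hits

capture_and_upload("phone-1", "frame-0042", ["desk", "lamp"])
print(voice_query("desk"))
```

The structural point survives the toy scale: once capture feeds straight into a shared index, any client becomes a window onto the same content, which is exactly what makes local storage dispensable.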

A related video, though with a more visionary perspective, is this talk by Kevin Kelly on the next 5000 days of the net. I show this to all my media students, though I don’t think any of them truly grasp what all-in-the-cloud implies. The internet of things. More on this tomorrow.

Towards a Taxonomy of Social Networking Data

Bruce Schneier has posted over at his blog the following draft of a social networking data taxonomy:

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.

Why is this important? Because in order to develop ways to control the data we distribute in the cloud we need to first classify precisely the different types of data and their relational position within our digital footprint and the surrounding ecology. Disclosed data has a different value from Behavioral or Derived data, and most people will likely value their individual content such as pictures and posts much more than the aggregated patterns sucked out of their footprint by a social network site’s algorithms. Much to think about here.
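Schneier's six categories are well suited to being made machine-readable, which is the precondition for any control mechanism built on top of them. Here is a minimal sketch: the enum follows his draft, while the creator/controller mapping is my own illustrative reading of the distinctions his definitions turn on.

```python
from enum import Enum

class DataType(Enum):
    # Schneier's six categories of social networking data
    SERVICE = "service"        # given to the site in order to use it
    DISCLOSED = "disclosed"    # posted on your own pages
    ENTRUSTED = "entrusted"    # posted on other people's pages
    INCIDENTAL = "incidental"  # posted about you by someone else
    BEHAVIORAL = "behavioral"  # collected by the site from your activity
    DERIVED = "derived"        # inferred from all the other data

# (creator, controller) per category -- an illustrative mapping, not Schneier's.
CONTROL = {
    DataType.SERVICE:    ("you", "site"),
    DataType.DISCLOSED:  ("you", "you"),
    DataType.ENTRUSTED:  ("you", "another user"),
    DataType.INCIDENTAL: ("another user", "another user"),
    DataType.BEHAVIORAL: ("site", "site"),
    DataType.DERIVED:    ("site", "site"),
}

def you_control(dt):
    return CONTROL[dt][1] == "you"

print([dt.name for dt in DataType if you_control(dt)])  # -> ['DISCLOSED']
```

Run through the mapping, only Disclosed data stays under your control – a compact way of showing why the aggregated Behavioral and Derived layers are where the real policy problem sits.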

Cloud computing and secret gardens

Charlie Stross has a great piece on his site commenting on Apple’s strategy with the iPad and Steve Jobs’s vicious antipathy towards any cross-platform apps not originating from Apple. Plenty of material to discuss there, but for me the interesting part is the notion that cloud computing is going to displace the PC in a controlled, walled-garden way. By walled garden I mean a total-control platform like iTunes – or anything else from that nightmarish company, for that matter. I suspect that Stross is right, at least when it comes to Apple – their strategy, after all, is easy to deduce – but I just don’t see how a walled-garden platform is going to dominate the cloud-space when you consider the relentless pressure for interoperability applied by a constantly emerging market. One could argue that Microsoft’s success with the PC platform has been due largely to their complete openness to hardware and third-party software. Google seems to be going down a similar path, and if anything it is their already-developing cloud platform that will probably dominate the early decade of cloud computing. Stross sums it up nicely:

‘Because you won’t have a “computer” in the current sense of the word. You’ll just be surrounded by a swarm of devices that give you access to your data whenever and however you need it.’

For Apple and their ilk, ‘success’ would mean maintaining the cult by porting it to a cloud platform, but the sheer necessity of total interoperability that comes with broad market penetration will prevent them from dominating the cloud. Finally, the comparison between Apple and BMW/Mercedes ‘high-end’ cars doesn’t work for me – I see Jobs’s cult as a Saab.

John Battelle: My Location Is A Box of Cereal

John Battelle’s Signal Weds post, My Location Is A Box of Cereal, on location-based services:

““Where I am” is a powerful signal, in particular if where you are is a local business that might answer that signal with an offer that engenders loyalty, purchase, or both.

But I’m starting to think that we need to expand the concept of location to more than physical spaces. Why can’t I check-in to a website? An article? A state of mind? An emotion? Or…an object?”