I recently discovered this white paper by Tim O’Reilly and John Battelle – two of the major gurus of web 2.0 and search respectively. Plenty of interesting information and ideas to chew on, but what really caught my attention is their discussion of the notions of information shadows and deep context learning by database algorithms. It is fairly banal to observe that each of us consists of multiple identity-layers, but how do you teach a database to make the meta-connections between these disparate pieces of data? And what happens when you teach it that? I need to think more about this but it seems an argument could be made that at a certain meta-level the subject-object divide becomes inconsequential; in other words, the difference between a human and an object is a function of their ‘entanglement networks’.
The Internet of Things is slowly but surely becoming an unevenly distributed reality. Early precursors are already here: a number of sites dedicated to creating accessible, crowdsourced data-clouds for everyday objects. Once created, each data-cloud is accessed by scanning a printable tag which can be downloaded from the site. The information I upload into the cloud together with the image/video can be as trivial as ‘this is my writing desk’, or as arcane as the travails of a family heirloom. It doesn’t matter – most of the data will be useless, but the potential of object socialization is immense because of a/ the ability to create semantic depth where until now there was none, b/ enfolding the rich dynamics of space and time in objects, c/ merging physical reality with the net.
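To make the mechanism concrete, here is a minimal sketch of how such a site might model an object’s data-cloud and resolve a scanned tag back to it. All names here (`ObjectCloud`, `scan`, the tag ID) are my own illustrative assumptions, not the API of any actual service:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectCloud:
    """A crowdsourced data-cloud attached to one physical object."""
    tag_id: str                                   # the printable tag that points here
    stories: list = field(default_factory=list)   # uploaded notes, images, videos

# In-memory registry standing in for the hosting site's database.
registry: dict = {}

def create_cloud(tag_id: str) -> ObjectCloud:
    """Create a fresh data-cloud and register it under its tag."""
    cloud = ObjectCloud(tag_id)
    registry[tag_id] = cloud
    return cloud

def scan(tag_id: str) -> ObjectCloud:
    """Scanning a printed tag resolves to the object's data-cloud."""
    return registry[tag_id]

# A trivial story and an arcane one, attached to the same object:
desk = create_cloud("tag-0042")
desk.stories.append("this is my writing desk")
desk.stories.append("inherited from my grandfather, restored in 1998")
```

The point of the sketch is the indirection: the physical tag carries nothing but an identifier, and all the semantic depth lives in the cloud record it resolves to.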
While object stories are the easiest path of entry and therefore probably where the majority of participation will occur for now, a project like Mbed has massive potential, as it ultimately may do for the internet of things what Blogspot and the like did for self-expression.
All things considered, Julian Bleecker’s Why Things Matter is probably the closest we have to a Manifesto for the internet of things. Bruce Sterling’s Shaping Things is also in that category, but it’s a longer text and it lacks the theoretical punch. Besides, any theoretical piece on networked objects has to first deal with the modern separation of the world into the nature/culture dichotomy, and Bleecker does just that with his use of Bruno Latour’s We Have Never Been Modern.
In part 1 I mentioned Google’s focus on low-latency sensors and massively redundant cloud data centers. Google is not the only company in the race though, and probably not the most advanced down that road. Ericsson – the world’s largest mobile equipment vendor – is seriously planning to operate 50 billion net-connected devices by 2020. Only a small fraction of these will be what we normally consider ‘devices’ – mobile phones, laptops, Kindles. The enormous majority will be everyday objects such as fridges (strategic objects due to their central role in food consumption), cars (see the new Audi), clothes – basically everything potentially worth connecting. This implies an explosion in data traffic.
As Stacey Higginbotham writes over at Gigaom:
So even as data revenue and traffic rises, carriers face two key challenges: One, the handset market is saturated; and two, users on smartphones are boosting their consumption of data at a far faster rate than carriers are boosting their data revenue. The answer to these challenges is selling data plans for your car. Your kitchen. And even your electric meter.
In other words, it is in the interest of mobile providers to extend the network to as many devices as possible so that they can start profiting from the long tail. As the competition in mobile connectivity is fierce and at cut-throat margins, the first company to start mass-connecting (and charging) daily objects is going to make a killing. Hence Google’s focus on sensors and data centers.
This presentation by wireless analyst Chetan Sharma outlines the motivation for mobile providers to bring about the internet of things as quickly as possible.
Bruce Schneier has posted over at his blog the following draft of a social networking data taxonomy:
- Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
- Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
- Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
- Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
- Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
- Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.
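Schneier’s six categories could be sketched as a simple classification, with a toy version of the ‘derived data’ inference bolted on. This is my own illustration of the taxonomy, not anything Schneier proposes; the threshold and function names are assumptions:

```python
from enum import Enum, auto
from typing import List, Optional

class SocialData(Enum):
    SERVICE = auto()      # data given to the site in order to use it
    DISCLOSED = auto()    # what you post on your own pages
    ENTRUSTED = auto()    # what you post on other people's pages
    INCIDENTAL = auto()   # what other people post about you
    BEHAVIORAL = auto()   # the site's record of your habits
    DERIVED = auto()      # inferences drawn from all the other data

def you_control(kind: SocialData) -> bool:
    """In the draft taxonomy, only service and disclosed data remain
    under the user's control; the rest is held or produced by others."""
    return kind in (SocialData.SERVICE, SocialData.DISCLOSED)

def derive_label(friend_labels: List[str], threshold: float = 0.8) -> Optional[str]:
    """Toy 'derived data': infer a label when enough friends share it."""
    if not friend_labels:
        return None
    top = max(set(friend_labels), key=friend_labels.count)
    share = friend_labels.count(top) / len(friend_labels)
    return top if share >= threshold else None
```

The useful part of writing it out is that the control boundary becomes explicit: four of the six categories fail the `you_control` test, which is exactly why the taxonomy matters for privacy.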
Why is this important? Because in order to develop ways to control the data we distribute in the cloud, we first need to classify precisely the different types of data and their relational position within our digital footprint and the surrounding ecology. Disclosed data is of different value from Behavioral or Derived data, and most people will likely value their individual content, such as pictures and posts, much more than the aggregated patterns sucked out of their footprint by a social network site’s algorithms. Much to think about here.
Charlie Stross has a great piece on his site commenting on Apple’s strategy with the iPad and Steve Jobs’s vicious antipathy towards any cross-platform apps not originating from Apple. Plenty of material to discuss there, but for me the interesting part is the notion that cloud computing is going to displace the PC in a controlled, walled-garden way. By walled-garden I mean a total-control platform like iTunes – or anything else from that nightmarish company, for that matter. I suspect that Stross is right, at least when it comes to Apple – their strategy, after all, is easy to deduce – but I just don’t see how a walled-garden platform is going to dominate the cloud-space when you consider the relentless pressure for interoperability applied by a constantly emerging market. One could argue that Microsoft’s success with the PC platform was solely due to their complete openness to hardware and third-party software. Google seems to be going down a similar path, and if anything it is their already-developing cloud platform that will probably dominate the early decade of cloud computing. Stross sums it up nicely:
‘Because you won’t have a “computer” in the current sense of the word. You’ll just be surrounded by a swarm of devices that give you access to your data whenever and however you need it.’
The ‘success’ of Apple and their ilk would be to maintain the cult by porting it to a cloud platform, but the sheer necessity of total interoperability that comes with broad market penetration will prevent them from dominating the cloud. Finally, the comparison between Apple and BMW/Mercedes ‘high-end’ cars doesn’t work for me – I see Jobs’s cult as a Saab.
John Battelle’s Signal Weds: My Location Is A Box of Cereal, on location-based services.
“‘Where I am’ is a powerful signal, in particular if where you are is a local business that might answer that signal with an offer that engenders loyalty, purchase, or both.
But I’m starting to think that we need to expand the concept of location to more than physical spaces. Why can’t I check-in to a website? An article? A state of mind? An emotion? Or… an object?”