
Category: academic

Engaging authorless content

These are the Prezi slides from a lecture I gave at Curtin University on the dynamics of form, content, and authorless collaboration online. Working in Prezi is great fun and forces you to develop your thoughts in three dimensions as opposed to PowerPoint’s single-plane linearity.

The Google cloud [part 2]

In part 1 I mentioned Google’s focus on low-latency sensors and massively redundant cloud data centers. Google is not the only company in the race, though, and probably not the most advanced down that road. Ericsson – the world’s largest mobile equipment vendor – is seriously planning to operate 50 billion net-connected devices by 2020. Only a small fraction of these will be what we currently consider ‘devices’ – mobile phones, laptops, Kindles. The enormous majority will be everyday objects: fridges (a strategic object, given its central role in food consumption), cars (see the new Audi), clothes – basically everything potentially worth connecting. This implies an explosion in data traffic.
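A back-of-envelope sketch gives a feel for that scale. The device count is Ericsson’s projection; the per-device message size and frequency below are purely my own illustrative assumptions:

```python
# Rough, illustrative estimate only. DEVICES is Ericsson's 2020 projection;
# the per-device traffic figures are assumptions, not vendor numbers.
DEVICES = 50_000_000_000        # 50 billion connected objects
BYTES_PER_MSG = 200             # assumed: one small sensor reading
MSGS_PER_DAY = 24 * 60          # assumed: one message per minute

daily_bytes = DEVICES * BYTES_PER_MSG * MSGS_PER_DAY
daily_petabytes = daily_bytes / 10**15

print(f"{daily_petabytes:.1f} PB per day")  # 14.4 PB per day
```

Even at a trickle of one tiny reading per minute, that is over fourteen petabytes of machine-generated traffic a day, before a single photo or video is sent.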

As Stacey Higginbotham writes over at Gigaom:

So even as data revenue and traffic rises, carriers face two key challenges: One, the handset market is saturated; and two, users on smartphones are boosting their consumption of data at a far faster rate than carriers are boosting their data revenue. The answer to these challenges is selling data plans for your car. Your kitchen. And even your electric meter.

In other words, it is in the interest of mobile providers to extend the network to as many devices as possible so that they can start profiting from the long tail. As the competition in mobile connectivity is fierce and at cut-throat margins, the first company to start mass-connecting (and charging) daily objects is going to make a killing. Hence Google’s focus on sensors and data centers.

This presentation by wireless analyst Chetan Sharma outlines the motivation for mobile providers to bring about the internet of things as quickly as possible.

The Google cloud

I just watched this interesting interview with Hugo Barra, director of product management at Google (G), talking about the convergence between mobile net devices and cloud computing. He is mainly answering questions about G’s plans for the next 2-5 years, but a couple of long-term ideas seep through.

First, they are thinking sensors and massively redundant cloud data centers, and they are thinking of them as part of a constant feedback process for which low latency is the key. In other words, your phone’s camera and microphone talk directly to the G data-cloud with a latency of under 1 second – whatever you film on your camera, you can voice-recall on any device within 1 second flat. The implications are huge, because G is effectively eliminating the need for local data storage.

Second, to get there, they are rolling out real-time voice search by the end of next year. Real-time voice search allows you to query the cloud in, well, under 1 second.

Third, they are thinking of this whole process as ‘computer vision’ – a naming tactic which might seem like plain semantics, but which nevertheless reveals a lot. It reveals that G sees stationary computers as blind, that for them mobile computers are first and foremost sensors, and that sensors start truly seeing only when there is low-latency feedback between them and the cloud. How so? The key, of course, is in the content – once storage, processing power, and speed are taken care of by the cloud, the clients – that is, us – start operating at a meta level of content which is quite hard to even fully conceptualize at the moment (Barra admits he has no idea where this will go in 5 years). The possibilities are orders of magnitude beyond what we are currently doing with computers and the net.

A related video, though with a more visionary perspective, is this talk by Kevin Kelly on the next 5000 days of the net. I show this to all my media students, though I don’t think any of them truly grasp what all-in-the-cloud implies. The internet of things. More on this tomorrow.

Random Links

What collapsing empire looks like by Glenn Greenwald – The title speaks for itself. A list of bad news from all across the US – power blackouts, roads in disrepair, no streetlights, no schools, no libraries – reads like Eastern Europe after the fall of communism, only here the fall is yet to come.

Special Operations’ Robocopter Spotted in Belize by Olivia Koski – Super-quiet rotors, synthetic-aperture radar capable of following slow-moving people through dense foliage, and the ability to fly autonomously along a programmed route. This article nicely complements the one above.

Open Source Tools Turn WikiLeaks Into Illustrated Afghan Meltdown by Noah Shachtman – A meticulous graphical representation of the WikiLeaks Afghan log. The Hazara provinces in the center of the country, and the Shia provinces next to the Iranian border, seem strangely quiet.

Google Agonizes on Privacy as Ad World Vaults Ahead by Jessica E. Vascellaro – A fascinating look inside the Google machine. They seem to have reached a crossroads of their own making – either they start using the Aladdin’s cave of data they have already gathered, or they keep it at arm’s length and lay the foundations of their own demise. Key statement: ‘In short, Google is trying to establish itself as the clearinghouse for as many ad transactions as possible, even when those deals don’t actually involve consumer data that Google provides or sees.’

Towards a Taxonomy of Social Networking Data

Bruce Schneier has posted over at his blog the following draft of a social networking data taxonomy:

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.

Why is this important? Because in order to develop ways to control the data we distribute in the cloud, we first need to classify precisely the different types of data and their relational position within our digital footprint and the surrounding ecology. Disclosed data is of different value from behavioral or derived data, and most people will likely value their individual content, such as pictures and posts, much more than the aggregated patterns sucked out of their footprint by a social networking site’s algorithms. Much to think about here.
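The relational distinctions Schneier draws – who created a datum, and who controls it once posted – can be sketched as a toy data model. The class names follow his categories, but the fields and the little classifier rule are my own illustrative assumptions, covering only the three user-created categories:

```python
# A minimal sketch of Schneier's taxonomy as a data model.
# Fields and the classify() rule are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class DataType(Enum):
    SERVICE = auto()      # given to the site in order to use it
    DISCLOSED = auto()    # posted on your own pages
    ENTRUSTED = auto()    # posted on other people's pages
    INCIDENTAL = auto()   # posted about you by someone else
    BEHAVIORAL = auto()   # collected by the site from your activity
    DERIVED = auto()      # inferred from all the other data

@dataclass
class Datum:
    kind: DataType
    subject: str          # who the data is about
    creator: str          # who produced it
    controller: str       # who can modify or delete it

def classify(subject: str, creator: str, controller: str) -> DataType:
    """Toy rule of thumb based on authorship and control."""
    if creator != subject:
        return DataType.INCIDENTAL   # someone else wrote it about you
    if controller == subject:
        return DataType.DISCLOSED    # your content, your pages
    return DataType.ENTRUSTED        # your content, someone else's pages
```

The point the sketch makes concrete is that entrusted and incidental data are “basically the same stuff” as disclosed data; only the creator/controller relations differ.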

Who owns you (and your genes)?

This is a trailer for an upcoming documentary on gene patenting that everyone should watch.

‘Over the last 20 years, the United States Patent and Trademark Office has been issuing patents to universities and private companies on raw human genes. One company or university is given a legal monopoly over a molecule that is inside every human being and many other animals. This documentary explores the legal, ethical, and clinical ramifications of human gene patenting.’

The principal author is Dr David Koepsell, who takes a libertarian stance against the notion of intellectual property. Below is a video of his argument against IP from an ethical perspective.

De Revolutionibus

I am currently reading Paolo Rossi’s The Birth of Modern Science [available here]. The chapter on Copernicus discusses his De Revolutionibus Orbium Coelestium from 1543, which is today seen as the revolutionary work that established the heliocentric system and forever removed the earth from the center of the universe. Rossi, however, demonstrates how un-revolutionary Copernicus’ work in fact was, not only in terms of style and format – which were based entirely on Ptolemy’s Almagest from 1400 years earlier – but also in terms of argumentation. Virtually all of Copernicus’s arguments existed in one form or another before him, and some of them were in fact Ptolemy’s – most important of all being the argument on the uniform and regular circular motion of heavenly bodies. Fascinatingly, Copernicus argued that his work was important because it explained Ptolemaic astronomy better than its author did – and the concept of circular motion across heavenly spheres was crucial for that. Apparently Johannes Kepler commented that Copernicus had interpreted Ptolemy rather than nature when he wrote his treatise (deducing from authority was a very Medieval approach to science, and the fact that the greatest scientific achievement of the Renaissance was arrived at in that way is a damning comment on the notion of the Renaissance as a negation of scholasticism).

Copernicus never went as far as Giordano Bruno in suggesting an infinite universe populated by bodies in irregular motion. Rather, the revolutionary aspect of Copernicus’s work was in using the very same facts as everyone else to propose a previously unsought direction disguised as an improvement on the dogma. It’s hard to get more unintentionally subversive than that. From the perspective of scientific advancement, the fascinating observation here is that a revolutionary jump was achieved thanks to a proposal asking many more questions than it could answer, rather than delivering a coherent theory to replace the previous one. For example, Copernicus’s position on the earth’s rotation led directly to the need to explain gravity (now that the earth was not the center of a spherical universe), which in turn led to Newton.

This realization is interesting because it questions, as do so many other examples, the image of science as a monolithic, coherent discipline engaged in ever-forward progress. The move, if there is any move at all, is never forward, but more sideways-backwards-sideways, until a new way to question the obvious emerges somewhere on the periphery.

Of course, it never hurts to ruffle a few feathers in the process – apparently Martin Luther fumed against ‘that fool astronomer who claims that the earth moves’.

Cloud computing and secret gardens

Charlie Stross has a great piece on his site commenting on Apple’s strategy with the iPad and Steve Jobs’s vicious antipathy towards any cross-platform apps not originating from Apple. There is plenty of material to discuss there, but for me the interesting part is the notion that cloud computing is going to displace the PC in a controlled, walled-garden way. By walled garden I mean a total-control platform like iTunes – or anything else from that nightmarish company, for that matter. I suspect that Stross is right, at least when it comes to Apple – their strategy, after all, is easy to deduce – but I just don’t see how a walled-garden platform is going to dominate the cloud-space when you consider the relentless pressure for interoperability applied by a constantly emerging market. One could argue that Microsoft’s success with the PC platform has been due solely to their complete openness to hardware and third-party software. Google seem to be going down a similar path, and if anything it is their already-developing cloud platform that will probably dominate the early decade of cloud computing. Stross sums it up nicely:

‘Because you won’t have a “computer” in the current sense of the word. You’ll just be surrounded by a swarm of devices that give you access to your data whenever and however you need it.’

For Apple and their ilk, ‘success’ would mean maintaining the cult by porting it to a cloud platform, but the sheer necessity of total interoperability that comes with broad market penetration will prevent them from dominating the cloud. Finally, the comparison between Apple and ‘high-end’ cars like BMW/Mercedes doesn’t work for me – I see Jobs’s cult as a Saab.

Tim O’Reilly: The State of the Internet Operating System

Just went through Tim O’Reilly’s The State of the Internet Operating System. Fascinating, thought-provoking, and directly related to what I am working on regarding ambient socio-digital systems (ASDS). Key bits:

“What mobile app (other than casual games) exists solely on the phone? Virtually every application is a network application, relying on remote services to perform its function. Where is the “operating system” in all this? Clearly, it is still evolving. Applications use a hodgepodge of services from multiple different providers to get the information they need.”

“We are once again approaching the point at which the Faustian bargain will be made: simply use our facilities, and the complexity will go away. And much as happened during the 1980s, there is more than one company making that promise. We’re entering a modern version of “the Great Game”, the rivalry to control the narrow passes to the promised future of computing.”

“The underlying services accessed by applications today are not just device components and operating system features, but data subsystems: locations, social networks, indexes of web sites, speech recognition, image recognition, automated translation. It’s easy to think that it’s the sensors in your device – the touch screen, the microphone, the GPS, the magnetometer, the accelerometer – that are enabling their cool new functionality. But really, these sensors are just inputs to massive data subsystems living in the cloud.”

“Location is the sine-qua-non of mobile apps. When your phone knows where you are, it can find your friends, find services nearby, and even better authenticate a transaction.”

“Where is the memory management?”

Location, time, and emotive attachment (intensity) are the key vectors he identifies, and I agree. A fascinating problem is the management of a locally cached memory-shadow. All in all, plenty to think about.
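One way to make the memory-shadow problem concrete is as a small local cache in front of the cloud, answering O’Reilly’s “where is the memory management?” with an eviction policy. This is only a sketch under my own assumptions – the name `MemoryShadow`, the least-recently-used policy, and the `fetch_from_cloud` hook are all illustrative inventions, not anything from his essay:

```python
# Illustrative sketch: a device-local "memory-shadow" of cloud data.
# When the cache is full, the least-recently-used entry is evicted.
from collections import OrderedDict

class MemoryShadow:
    def __init__(self, capacity, fetch_from_cloud):
        self.capacity = capacity
        self.fetch_from_cloud = fetch_from_cloud  # callable: key -> value
        self._cache = OrderedDict()

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)          # mark as recently used
            return self._cache[key]
        value = self.fetch_from_cloud(key)        # miss: go to the cloud
        self._cache[key] = value
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)       # evict the stalest entry
        return value

# Usage: track which keys actually hit the (pretend) cloud.
cloud_calls = []
shadow = MemoryShadow(2, lambda k: cloud_calls.append(k) or f"data:{k}")
shadow.get("a"); shadow.get("b")
shadow.get("a")   # served locally, no cloud round-trip
shadow.get("c")   # evicts "b", the least recently used key
```

The interesting design question the sketch raises is exactly the one above: which vectors – location, time, intensity of attachment – should drive eviction, rather than plain recency.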