In a Data-Driven World, a Picture Is Still Worth 1,000 Words

If you read The Economist, you’ll remember their February cover story on “The Data Deluge”, which throws out some mind-boggling numbers about the exponential growth of information in our society. In 2005, mankind created 250 exabytes (billion gigabytes) of data. In 2010, that number grew to 1,200 exabytes. Google CEO Eric Schmidt put it another way at the Techonomy conference in August: every two days we now create as much new information as we did from the dawn of civilization up until 2003. That’s what I call explosive growth.
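To put those figures in perspective, here is a quick back-of-the-envelope sketch in Python of the annual growth rate they imply. The only inputs are the two exabyte figures cited above; everything else is simple arithmetic:

```python
# Back-of-the-envelope check on the figures cited above:
# the implied compound annual growth rate of data creation, 2005-2010.
data_2005_eb = 250    # exabytes created in 2005
data_2010_eb = 1200   # exabytes created in 2010
years = 5

cagr = (data_2010_eb / data_2005_eb) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.0%}")  # roughly 37% per year
```

Sustained growth of roughly 37% per year compounds fast, which is exactly why “exponential” is the right word here.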

But how much information can we even consume? Just look at all the new things we are measuring today about our world, our systems (markets, economies, servers, transactions) and our bodies that we never measured before. I think as a society we have recognized that there is value in measurement (you manage what you measure), but now we’re confronted with the difficulty of actually deriving meaning from all the data we are collecting. There is simply no way to understand and absorb all of that information using traditional methods.

The phrase “traditional methods” is important, because it gets to the unique way that human brains process information. To a computer, a byte is a byte, whether it’s text, image, or video. But humans process data differently. Consider: how long does it take you to absorb a picture of your parents? How long to read and understand “War and Peace”? Both are roughly the same number of bytes, but the visual information is far more rapidly consumed and categorized by our brains. The number of bytes we are able to consume varies greatly with modality. A megabyte of text (about 500 pages) takes far longer to consume than a megabyte of HD video (less than a second).
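A rough sketch makes the asymmetry concrete. The constants below (characters per page, reading speed, video bitrate) are my own illustrative assumptions, not precise measurements, but they show the orders of magnitude involved:

```python
# How long does a human take to consume one megabyte, by modality?
# All constants are rough, illustrative assumptions.
MEGABYTE = 1_000_000  # bytes

# Text: ~2,000 characters per page, ~2 pages per minute reading speed
chars_per_page = 2_000
pages = MEGABYTE / chars_per_page    # ~500 pages per megabyte
reading_minutes = pages / 2          # ~250 minutes of reading

# HD video: ~8 megabits per second, i.e. ~1 megabyte per second of footage
video_bytes_per_second = 8_000_000 / 8
video_seconds = MEGABYTE / video_bytes_per_second  # ~1 second of video

print(f"1 MB of text:  ~{pages:.0f} pages, ~{reading_minutes / 60:.0f} hours to read")
print(f"1 MB of video: ~{video_seconds:.0f} second(s) to watch")
```

Same byte count, a difference of four orders of magnitude in consumption time: that gap is the whole argument.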

I believe that the key to understanding and deriving meaning from the exponentially growing amount of information in our world is to exploit efficiencies in the way our brains process data. With the help of computers, we can translate vast amounts of raw data into visualizations that allow us to consume it far more efficiently. I think that’s why we’ve recently seen an explosion of investment in so-called “big data” companies like Palantir Technologies, Recorded Future, and more (see the IA Ventures portfolio for further examples). We’re going to need new ways to summarize and interpret data if we want to gain value from all the things we are now recording and measuring.

So in essence, we are experiencing a race between the volume of data being generated and the systems that allow us to consume it all in manageable modalities (summary charts, images, video, etc.). The systems we devise must evolve rapidly to aggregate and present ever-increasing amounts of data, allowing human users to arrive at the holy grail: answers.

Special thanks to Dave Peck for inspiring this post after our conversation on Namesake.



Bill D'Alessandro