A recently published essay by University of Virginia professor Chad Wellmon in The Hedgehog Review stands as one of the most elegant, incisive, and persuasive entries I’ve yet read in the great debate over the effects of the Internet/digital media revolution on human consciousness and culture. And I’ve read a fair number of them.
Wellmon says:
On the one hand, there are those who claim that the digitization efforts of Google, the social-networking power of Facebook, and the era of big data in general are finally realizing that ancient dream of unifying all knowledge…[They say] Our information age is unique not only in its scale, but in its inherently open and democratic arrangement of information. Information has finally been set free. Digital technologies, claim the most optimistic among us, will deliver a universal knowledge that will make us smarter and ultimately liberate us. These utopic claims are related to similar visions about a trans-humanist future in which technology will overcome what were once the historical limits of humanity: physical, intellectual, and psychological.
…On the other hand, less sanguine observers interpret the advent of digitization and big data as portending an age of information overload. We are suffering under a deluge of data. Many worry that the Web’s hyperlinks that propel us from page to page, the blogs that reduce long articles to a more consumable line or two, and the tweets that condense thoughts to 140 characters have all created a culture of distraction. The very technologies that help us manage all of this information are undermining our ability to read with any depth or care.
…Both narratives, however, make two basic mistakes. First, they imagine our information age to be unprecedented, but information explosions and the utopian and apocalyptic pronouncements that accompany them are an old concern…Second, both narratives make a key conceptual error by isolating the causal effects of technology. Technologies, be it the printed book or Google, do not make us unboundedly free or unflaggingly stupid. Such a sharp dichotomy between humans and technology simplifies the complex, unpredictable, and thoroughly historical ways in which humans and technologies interact and form each other.
…[A]sking whether Google makes us stupid, as some cultural critics recently have, is the wrong question. It assumes sharp distinctions between humans and technology that are no longer, if they ever were, tenable…New technologies, be it the printed encyclopedia or Wikipedia, are not abstract machines that independently render us stupid or smart. As we saw with Enlightenment reading technologies, knowledge emerges out of complex processes of selection, distinction, and judgment — out of the irreducible interactions of humans and technology. We should resist the false promise that the empty box below the Google logo has come to represent — either unmediated access to pure knowledge or a life of distraction and shallow information. It is a ruse. Knowledge is hard won; it is crafted, created, and organized by humans and their technologies. Google’s search algorithms are only the most recent in a long history of technologies that humans have developed to organize, evaluate, and engage their world.
— Chad Wellmon, “Why Google Isn’t Making Us Stupid…or Smart,”
The Hedgehog Review, Vol. 14, No. 1 (Spring 2012)
Although I have long stood staunchly in the camp of those who, like Nicholas Carr (“Is Google Making Us Stupid?”), see the overall impact of the Internet on life, consciousness, and culture as tending almost inherently toward a dystopian outcome, I’m gradually coming around to a point of view more like Wellmon’s. I’m beginning to think that our relationship with the new technology is much more complex than the purely dystopian view would have it, although a dystopian or semi-dystopian outcome is by no means impossible, and at this point looks as likely as (or more likely than) not. One of the hinges here is the character of the surrounding matrix of sociopolitical, economic, and cultural circumstances, embodied in both prevailing popular attitudes and customs and actual government policies. And in this area we appear fairly screwed, so deeply dysfunctional have these realities become. But maybe, just maybe, there’s a chance for a different result. And in any case the technology itself is, I’m now thinking, conducive to both great harm and great good/enhancement/enrichment, as a matter of its inherent, McLuhan-esque “message.”
Thoughts, anyone?