Long (but fascinating) article for a guy who says he ain’t got no more attention span.
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”
As we use what the sociologist Daniel Bell has called our “intellectual technologies”—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.”
The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.
The open question is where our internet-addled brains are leading us. We’ve already seen amazing advances in politics, the arts, and research.
Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).
The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.
I can still dive into a book when I want to (though I almost never want to anymore). And while I share this guy’s guarded optimism that new media will lead to new innovations and a net plus for human advancement, there’s no denying that losing the ability to read full-length books is a mighty sacrifice. Ebook readers are becoming very cool, but once again you’re not holding just one book in your hand; you have hundreds or thousands at your fingertips, and once again attention spans are challenged.
An article by Jamais Cascio points to where we may be going, as technology is integrated into our very perception:
The emerging technology, called “Augmented Reality,” enables users to see location-specific data superimposed over their surroundings. Long a staple of science fiction, it’s trickling into the real world through the iPhone and similar ultrasmart mobile phones. With AR applications such as Layar, the smart phone displays what its camera sees, with information about nearby buildings and shops, travel directions, even notes and “tags” left by other users in that location. Although AR now relies on handheld devices, electronics makers like Sony are working on systems that you wear like sunglasses, making augmented vision more immersive.
Eventually, contact lenses, corneal implants, or artificially engineered eyes, I wager.
Conceivably, users could set AR spam filters to block any kind of unpalatable visual information, from political campaign signs to book covers. Parents might want to block sexual or violent images from their kids’ AR systems, and political activists and religious leaders might provide ideologically correct filters for their communities. The bad images get replaced by a red STOP, or perhaps by signs and pictures that reinforce the desired worldview.
Did I mention that the “wrong” people can get replaced too?
After California’s Prop 8 ban on gay marriage passed, opponents of the measure dug up public records of donors supporting the ban, and linked that data to an online map. Suddenly, you could find out which of your neighbors (or the businesses you frequent) were so opposed to gay marriage that they donated to the cause. Now imagine that instead of a map, those records were combined with an AR system able to identify faces.
You don’t want to see anybody who has donated to the Palin 2012 campaign? Gone, their faces covered up by black circles. You want to know who exactly gave money to the 2014 ban on SUVs? Easy—they now have green arrows pointing at their heads.
You want to block out any indication of viewpoints other than your own? Done.
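The filtering scenario Cascio describes could be sketched as a trivially small program. Everything below is hypothetical (the function, the data shapes, the overlay names are mine, not from any real AR system); it just illustrates the mechanics of matching recognized faces against a user-chosen blocklist of causes and telling the renderer what to draw.

```python
# Toy sketch of an AR "viewpoint filter" (all names and data are
# hypothetical, for illustration only). Each recognized person carries
# a list of public donation records; any match against the user's
# blocked causes gets an overlay instead of a normal render.

def apply_viewpoint_filter(recognized_people, blocked_causes):
    """Decide which AR overlay, if any, to render over each face.

    recognized_people: list of dicts like {"name": ..., "donations": [...]}
    blocked_causes: set of cause names the user wants hidden
    Returns a dict mapping each name to an overlay ("black_circle")
    or None for a normal render.
    """
    overlays = {}
    for person in recognized_people:
        if blocked_causes.intersection(person["donations"]):
            overlays[person["name"]] = "black_circle"  # cover the face
        else:
            overlays[person["name"]] = None            # render normally
    return overlays

scene = [
    {"name": "Alice", "donations": ["Palin 2012"]},
    {"name": "Bob", "donations": ["SUV ban 2014"]},
    {"name": "Carol", "donations": []},
]
print(apply_viewpoint_filter(scene, {"Palin 2012"}))
# → {'Alice': 'black_circle', 'Bob': None, 'Carol': None}
```

The unsettling part, of course, isn’t the ten lines of logic; it’s the face recognition and the public donor database feeding into them.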
Obviously people have already figured out how to screen out viewpoints they dislike, and they will keep doing so; the question is whether technology like this takes that filtering to new extremes. I’ll be sure to give my feedback after I get my iShades…
UPDATE: More on augmented reality here.