The Teeming Brain’s “Recommended Reading” series has been on hiatus since last November. And now it’s back, with a slightly altered/streamlined format (read: no graphics, just links and text) that’s more sustainable in the context of your trusty editor’s various other claims on time, energy, and attention.
* * *
Madrid: Dignity and Indignation
Aaron Shulman, The American Scholar, Winter 2013
[EDITOR’S NOTE: An American expat living in Spain — and deeply loving it — explains how the country’s apocalyptically awful socioeconomic situation is forcing him and his wife to leave. Reading his description, one can’t help but wonder whether it previews what will happen, and is already happening, elsewhere, especially since the stated causes and progression of the crisis sound so familiar here in, e.g., the United States.]
Spanish paro [unemployment] has already surpassed the worst levels of the American Great Depression. The Red Cross recently launched a campaign to combat hunger in Spain, redirecting resources previously dedicated to Haiti. More than one in every four children live in households below the poverty line. Things are bad in a way no one could have imagined even five years ago.
. . . . Spain’s unemployment figures depress me because they seem to presage collapse, but the reality of life in a country with so many unemployed is even sadder. Elisa and I relocated from Córdoba to Madrid this past April, and since then almost every day I see a corriente, or average, person rooting around in the trash in search of food — never mind homeless people, who now also have competition at soup kitchens and food banks. The border between the perennially homeless and the newly homeless is increasingly porous and irrelevant.
. . . . What brought Spain to this point? The Spanish economic boom in the years preceding the crisis was a fairy tale turned grim parable, one we’re all familiar with: subprime mortgages, unchecked speculation, laughable regulation, political complicity—a world built on fictions. The Spanish version had an even more disastrous result than elsewhere because way too many of the country’s economic eggs were in the construction-sector basket.
. . . . On top of the increasingly untenable work situation, the comportment of police in the face of demonstrations is becoming more brutal and frightening. In September we happened to leave Neptune Plaza just minutes before police began beating demonstrators who had nonviolently surrounded the congress. In a restaurant we watched live TV coverage of defenseless people holding up their hands and yet still receiving blows. The next morning a shocking video appeared of police launching projectiles in a train station. A few days later the head of the riot police was awarded a medal by the government.
* * *
Killer Robots Must Be Stopped, Say Campaigners
Tracy McVeigh, The Observer, February 23, 2013
[EDITOR’S NOTE: The title and subject of this article would invite scorn and skepticism for their seemingly over-the-top invocation of outlandish, science-fictional fears if it weren’t for the fact that, as the following excerpts describe, the imminent rise of autonomous killer robots, along with the present rise of serious opposition to them from people in positions of authority and respect, really and truly is happening.]
A new global campaign to persuade nations to ban “killer robots” before they reach the production stage is to be launched in the UK by a group of academics, pressure groups and Nobel peace prize laureates. Robot warfare and autonomous weapons, the next step from unmanned drones, are already being worked on by scientists and will be available within the decade, said Dr Noel Sharkey, a leading robotics and artificial intelligence expert and professor at Sheffield University. He believes that development of the weapons is taking place in an effectively unregulated environment, with little attention being paid to moral implications and international law.
The Stop the Killer Robots campaign will be launched in April at the House of Commons and includes many of the groups that successfully campaigned to have international action taken against cluster bombs and landmines. They hope to get a similar global treaty against autonomous weapons.
“These things are not science fiction; they are well into development,” said Sharkey.
. . . . Last November the international campaign group Human Rights Watch produced a 50-page report, Losing Humanity: the Case Against Killer Robots, outlining concerns about fully autonomous weapons.
. . . . US political activist Jody Williams, who won a Nobel peace prize for her work at the International Campaign to Ban Landmines, is expected to join Sharkey at the launch at the House of Commons. . . “Killer robots loom over our future if we do not take action to ban them now,” she said. “The six Nobel peace laureates involved in the Nobel Women’s Initiative fully support the call for an international treaty to ban fully autonomous weaponised robots.”
* * *
The Extraordinary Science of Addictive Junk Food
Michael Moss, The New York Times, February 20, 2013
[EDITOR’S NOTE: This is simply riveting, and in a way that explodes an all-too-easy and glib dismissal along the lines of “Yeah, we already know junk food is addictive. So what else is new?” Moss names names and gives specifics in an article, adapted from his new book Salt Sugar Fat: How the Food Giants Hooked Us, that sounds like it could blow open the junk food industry in much the same way the famous 1996 Vanity Fair article that served as the basis for Michael Mann’s The Insider blew open the shady world of Big Tobacco.]
The public and the food companies have known for decades now. . . . that sugary, salty, fatty foods are not good for us in the quantities that we consume them. So why are the diabetes and obesity and hypertension numbers still spiraling out of control? It’s not just a matter of poor willpower on the part of the consumer and a give-the-people-what-they-want attitude on the part of the food manufacturers. What I found, over four years of research and reporting, was a conscious effort — taking place in labs and marketing meetings and grocery-store aisles — to get people hooked on foods that are convenient and inexpensive. I talked to more than 300 people in or formerly employed by the processed-food industry, from scientists to marketers to C.E.O.’s. Some were willing whistle-blowers, while others spoke reluctantly when presented with some of the thousands of pages of secret memos that I obtained from inside the food industry’s operations. What follows is a series of small case studies of a handful of characters whose work then, and perspective now, sheds light on how the foods are created and sold to people who, while not powerless, are extremely vulnerable to the intensity of these companies’ industrial formulations and selling campaigns.
. . . . If Americans snacked only occasionally, and in small amounts, this would not present the enormous problem that it does. But because so much money and effort has been invested over decades in engineering and then relentlessly selling these products, the effects are seemingly impossible to unwind. More than 30 years have passed since Robert Lin first tangled with Frito-Lay on the imperative of the company to deal with the formulation of its snacks, but as we sat at his dining-room table, sifting through his records, the feelings of regret still played on his face. In his view, three decades had been lost, time that he and a lot of other smart scientists could have spent searching for ways to ease the addiction to salt, sugar and fat. “I couldn’t do much about it,” he told me. “I feel so sorry for the public.”
* * *
Is Smart Making Us Dumb?
Evgeny Morozov, The Wall Street Journal, February 23, 2013
Teaser: A revolution in technology is allowing previously inanimate objects—from cars to trash cans to teapots—to talk back to us and even guide our behavior. But how much control are we willing to give up?
In 2010, Google Chief Financial Officer Patrick Pichette told an Australian news program that his company “is really an engineering company, with all these computer scientists that see the world as a completely broken place.” Just last week in Singapore, he restated Google’s notion that the world is a “broken” place whose problems, from traffic jams to inconvenient shopping experiences to excessive energy use, can be solved by technology. The futurist and game designer Jane McGonigal, a favorite of the TED crowd, also likes to talk about how “reality is broken” but can be fixed by making the real world more like a videogame, with points for doing good. From smart cars to smart glasses, “smart” is Silicon Valley’s shorthand for transforming present-day social reality and the hapless souls who inhabit it.
But there is reason to worry about this approaching revolution. As smart technologies become more intrusive, they risk undermining our autonomy by suppressing behaviors that someone somewhere has deemed undesirable. Smart forks inform us that we are eating too fast. Smart toothbrushes urge us to spend more time brushing our teeth. Smart sensors in our cars can tell if we drive too fast or brake too suddenly. These devices can give us useful feedback, but they can also share everything they know about our habits with institutions whose interests are not identical with our own. Insurance companies already offer significant discounts to drivers who agree to install smart sensors in order to monitor their driving habits. How long will it be before customers can’t get auto insurance without surrendering to such surveillance? And how long will it be before the self-tracking of our health (weight, diet, steps taken in a day) graduates from being a recreational novelty to a virtual requirement?
* * *
Unlike: Why I’m Leaving Facebook
Douglas Rushkoff, February 25, 2013
[EDITOR’S NOTE: When somebody of Rushkoff’s status and stature as a commentator on the world of information technology goes and does (and says) something like this, you can know that a sea change is brewing.]
I used to be able to justify using Facebook as a cost of doing business. As a writer and sometime activist who needs to promote my books and articles and occasionally rally people to one cause or another, I found Facebook fast and convenient. Though I never really used it to socialize, I figured it was okay to let other people do that, and I benefited from their behavior.
I can no longer justify this arrangement. Today I am surrendering my Facebook account, because my participation on the site is simply too inconsistent with the values I espouse in my work. In my upcoming book Present Shock, I chronicle some of what happens when we can no longer manage our many online presences. I argue — as I always have — for engaging with technology as conscious human beings, and dispensing with technologies that take that agency away.
Facebook is just such a technology. It does things on our behalf when we’re not even there. It actively misrepresents us to our friends, and — worse — misrepresents those who have befriended us to still others. To enable this dysfunctional situation — I call it “digiphrenia” — would be at the very least hypocritical. But to participate on Facebook as an author, in a way specifically intended to draw out the “likes” and resulting vulnerability of others, is untenable.
Facebook has never been merely a social platform. Rather, it exploits our social interactions the way a Tupperware party does. Facebook does not exist to help us make friends, but to turn our network of connections, brand preferences, and activities over time — our “social graphs” — into a commodity for others to exploit. We Facebook users have been building a treasure lode of big data that government and corporate researchers have been mining to predict and influence what we buy and whom we vote for. We have been handing over to them vast quantities of information about ourselves and our friends, loved ones and acquaintances. With this information, Facebook and the “big data” research firms purchasing their data predict still more things about us.
* * *
Is it OK to be a Luddite?
Thomas Pynchon, The New York Times Book Review, October 28, 1984 (reprinted at The Modern Word)
The word “Luddite” continues to be applied with contempt to anyone with doubts about technology, especially the nuclear kind. Luddites today are no longer faced with human factory owners and vulnerable machines. As well-known President and unintentional Luddite D.D. Eisenhower prophesied when he left office, there is now a permanent power establishment of admirals, generals and corporate CEO’s, up against whom us average poor bastards are completely outclassed, although Ike didn’t put it quite that way. We are all supposed to keep tranquil and allow it to go on, even though, because of the data revolution, it becomes every day less possible to fool any of the people any of the time.
If our world survives, the next great challenge to watch out for will come — you heard it here first — when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long. Meantime, as Americans, we can take comfort, however minimal and cold, from Lord Byron’s mischievously improvised song, in which he, like other observers of the time, saw clear identification between the first Luddites and our own revolutionary origins. It begins:
As the Liberty lads o’er the sea
Bought their freedom, and cheaply, with blood,
So we, boys, we
Will die fighting, or live free,
And down with all kings but King Ludd!
* * *
Miracles and the Historians
Peter Berger, The American Interest, December 21, 2011
Modern science has achieved high credibility and prestige, not only for its intellectual plausibility, but because of its immense practical successes. Modern science, and the technology it has made possible, has fundamentally changed the circumstances of human life on this planet. One result of this has been the ideology of scientism, which asserts that science is the only valid avenue to truth. On the part of believers there has been the understandable impetus to present belief itself as being based on science. The prototypical figure in this has been Mary Baker Eddy, founder of a denomination aptly called Christian Science, with Jesus transformed into someone called Christ, Scientist. Not only does this do violence to the Jesus found in the New Testament, but equally so to science as an intellectual discipline. In the same line there have been attempts to establish a Christian economics, a Christian sociology, and so forth. Such constructions are as implausible as a Christian geology, or a Christian dermatology.
But there is something more fundamental involved in all of this: The refusal to accept the fact that there is more than one way to perceive reality.
. . . . If [the historian] wants to claim the status of “science” for his discipline, he has no alternative to following in the “naturalistic tradition”. The acts of God (miraculous or otherwise) cannot be empirically investigated or falsified. How the historian then looks at the same phenomenon, such as a Biblical account of ancient events, will obviously depend on his theology. If he believes in Biblical inerrancy — every sentence is literally true — he will definitely have some serious problems. But there are other, more flexible ways of looking for revelation “in, with and under” the Biblical text. In that case, even the most rigorous historical scholarship cannot undermine the approach of faith.
Excellent selections. re: facebook and killer robots, we have been living in a sci fi novel for more than a decade now. And it seems also that reality wants to do a little bit of genre fusion–sci fi plus horror… will the killer robots eventually use the facebook database to predict our behavior, and thereby be able to wait around the corner for us one sunny day? Not exactly, but close enough.
Glad you appreciated those pieces, zeno. I agree with your characterization of the present situation. Well said.
Great to see this feature back – one of my favorite things to appear in Google Reader.
Thanks for the good words, Russ. As somebody who likewise has his own favorite sources of Internet content curation, I’m gratified to hear that the Recommended Reading feature is valuable to you.
I like your taste in articles and I used to look forward to seeing which ones you plucked from the ether. Glad to see you’ve found some time to assemble more reading lists.
I wonder whether one day we’ll have programmable searchbots that scan the internet’s content based on our taste. Spam filters can be taught which emails count as spam, but I don’t suppose anyone can yet program philosophical or aesthetic taste into them. Still, they could come close.
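For the sake of argument, here’s a minimal sketch of what that spam-filter analogy might look like in code: a toy classifier trained on a handful of articles a reader has liked or skipped, which then scores new pieces by taste. Everything in it (the article snippets, the liked/skipped labels, the choice of scikit-learn) is an invented illustration, not a description of any real searchbot.

```python
# A toy "taste filter": score unseen articles by how closely their text
# resembles pieces the reader has previously liked, the way a spam filter
# learns from messages flagged as spam or not-spam.
# All article snippets and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

liked = [
    "long-form essay on smart devices, autonomy, and surveillance",
    "investigative reporting on food industry science and marketing",
]
skipped = [
    "celebrity gossip roundup with red carpet photo gallery",
    "top ten listicle of this week's viral videos",
]

texts = liked + skipped
labels = [1] * len(liked) + [0] * len(skipped)  # 1 = matches my taste

# Turn each article into word weights, then learn which words tend to
# show up in the pieces the reader liked.
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(features, labels)

# Score a batch of new articles and surface the most promising ones first.
new_articles = [
    "profile of a campaign to regulate autonomous weapons",
    "slideshow of cute animals doing tricks",
]
scores = model.predict_proba(vectorizer.transform(new_articles))[:, 1]
for score, title in sorted(zip(scores, new_articles), reverse=True):
    print(f"{score:.2f}  {title}")
```

A few keyword weights are obviously a crude stand-in for philosophical or aesthetic taste, which is exactly the gap I have in mind, but the basic training loop is the same one a spam filter uses.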
Much appreciated, Ben. As for programmable searchbots that would bring each of us Internet content based on personal taste and preference, I think we may already have such a thing, though not one controlled by each of us individually: the “filter bubble” that Eli Pariser has famously talked (and warned) about. It is actively cultivated by the likes of Google, which in essence markets content to users along the lines of what their browsing history and the rest of their Internet usage profile shows, all for the purpose of monetizing our lives through more tightly focused and personalized marketing that arrives along seemingly serendipitous and naturalistic lines. Personally, I prefer to escape the bubble and bump up against things that confront, challenge, flout, and contradict my taste!
Yes, I see your point, Matt. But the filter keeps out content that the software thinks we don’t want to see. I’m talking about a searchbot that actively scans the internet for material that we do want to see, including challenging material.
Of course, if we search Google, the search engine follows our command. But I wonder about a more autonomous searchbot that could take over the aggregation websites that cull essays and book reviews according to certain tastes. There are already such sites for all manner of people: liberals, conservatives, bird watchers, and so on. But people still have to laboriously sift the content to find those articles, as you well know, since your Readings List series is one such website. Wouldn’t it be nice to think that software could take over that labour, so that you could sit back, wake up in the morning, and have many interesting articles to read?
Of course, there’s the issue of trust. How could we know the searchbot wouldn’t have ulterior motives? Maybe it would be leaving out articles we’d have preferred to read, because the searchbot wouldn’t know our tastes perfectly well. But exactly the same concerns apply to human aggregators, and the solution is that you build up trust by verifying. You recommend certain articles and I consistently find that our tastes have much in common. So even if you don’t find everything worth reading on the internet, some aggregation is better than none.
Maybe the best argument against such searchbots is that they spoil the fun of surfing the web yourself and discovering buried treasure. And yet I find that I don’t surf the web nearly as much as I used to when I was first introduced to the internet. Now that I have scores of bookmarks already, I find that I keep going round and round my favourite websites without venturing much into new territory. That’s another sort of filter: the filter of sticking with the familiar.