Random finds (2018, week 34) — On the hidden injuries of the age of exposure, the quest for immortality, and why the nature of work is a social choice
“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne
Random finds is a weekly curation of my tweets, and, as such, a reflection of my curiosity.
This week: The hidden injuries of the age of exposure; Silicon Valley’s longevity entrepreneurs; why technology isn’t disrupting our jobs; aesthetics in design; Barack Obama on listening; an endearing portrait of Jimmy Carter; why, for many Republicans, Trump remains uncorrupt; Giuliani and facts; poetry set in stone; and, if you make it to the end, some beautiful cello music from William Walton.
The hidden injuries of the age of exposure
In The Baffler, an essay by Rochelle Gurstein on the hidden injuries of the age of exposure, entitled Self-Invasions and the Invaded Self.
“What do we lose when we lose our privacy?” has become an increasingly difficult question to answer, Gurstein writes. We live in a society that offers boundless opportunities for men and women to expose themselves (in all dimensions of that word) and commit what are essentially self-invasions of privacy — from selfies and Instagrammed trivia to the almost automatic, everyday activity on Facebook. But our online pastime is nowhere near as private as we had been led to believe. The mania for attention of any kind is, however, so pervasive — and the invasion of privacy so nonchalant — that many of us no longer notice, let alone mind, what in the past would have been experienced as insolent violations of privacy.
“Given our widespread obliviousness to the current situation, we might be better served by asking: What did people used to believe they lost when they lost their privacy? Surprisingly, it turns out that a large number of people began to speak of privacy in a self-conscious way only toward the end of the nineteenth century. As is often the case, the first defenders of privacy became aware of its value at the moment they were on the verge of losing it,” Gurstein writes.
“Moral coarsening — the wearing away of the capacity to recognize what one has become — was both the deepest anxiety and the deepest insight of [the first defenders of privacy]. If it is our very capacity for sensitivity, our feeling for ‘certain differences and decencies’ — what used to be regarded as a sense of shame — that we lose as a consequence of inhabiting a world where no one is guaranteed the refuge of privacy and no subject is afforded the protection of silence, then this goes a long way toward explaining why more than a century later — after the invention and proliferation of the radio, television, cell phones, twenty-four-hour news cycles, and the internet — so many of us today have such a hard time recognizing what we lose when we lose our privacy. It turns out that the very atmosphere in which we move and breathe deprives us of the perception we need to recognize our predicament.”
Like the self-conscious understanding of privacy, the cult of exposure is also of recent vintage, emerging during the last part of the nineteenth century, when a deep-seated suspicion of privacy as a hiding place for wrongdoing took on a particular cast in Western democracies.
“‘In all democratic societies today,’ wrote Godkin, ‘the public is disposed either to resent attempts at privacy, either of mind or body, or to turn them into ridicule.’ In addition, ‘democratic’ apostles of exposure were apt to suspect ‘all regard for or precautions about privacy’ as signs of ‘exclusiveness’ — what today is called ‘elitism.’”
In his novel The Reverberator (1888), the American author Henry James “brings this attitude to exquisite life when he has the prying newspaperman George Flack explain his ambitions to his friend Francie Dosson:
‘I’m going for the inside view, the choice bits . . . what the people want is just what ain’t told, and I’m going to tell it. . . . That’s about played out, anyway, the idea of sticking up a sign of private and hands off and no thoroughfare and thinking you can keep the place to yourself. . . . Now what I’m going to do is set up the biggest lamp yet made and make it shine all over the place. We’ll see who’s private then, and whose hands are off, and who’ll frustrate the People — the People that wants to know. That’s a sign of the American people that they do want to know.’
This allegedly democratic appeal to the ‘people’ was constantly put forward by editors of the new-style journalism: ‘We are giving the people what they want and we have the receipts to prove it.’”
Since then, nothing much has changed.
But, according to Gurstein, “We are confronted with another loss of sensibility that has blinded us to what we might call the collateral damage of today’s widespread disregard for privacy. We are no longer aware, as [the first defenders of privacy were], that when private matters are indiscriminately flooded with light their very nature changes. [Take, for example, the affair of Donald Trump and Stormy Daniels.] For the people involved […] the affairs were important and consequential, but once they were exposed in public they became banal and laughable, furnishing steady material for the jokes of late-night talk shows. And the transformation can go in another direction: now that newspapers have abandoned euphemism to describe what these people did in what they believed was private, their sexual proclivities, flooded by light, have become obscene. The latter has especially been the case with the #MeToo movement: any reader of the latest, minutely detailed article about sexual harassment that the New York Times specializes in quickly finds that he or she has been turned into a voyeur. It is no wonder, then, that the world we inhabit together feels ever more ugly, coarse, and trivial. When the boundary between public and private becomes as extremely porous as it is today, we lose far more than ‘that kingdom of the mind, that inner world of personal thought and feeling in which every man passes some time,’ which would have been disastrous enough.
[…]
What is needed to protect both our privacy and our common world belongs to an entirely different realm — one that is deeper, and far more elusive than the law: the realm of sensibility. Here we need to acknowledge again that the sensibility that once protected our privacy and our common world — the reticent sensibility with its keywords of shame, propriety, decorum, and decency — has been discredited and now feels anachronistic. Yet, without it, in a cruel turn of historical irony, we are largely resourceless and defenseless.”
Silicon Valley’s quest for immortality
“I welcomed him generously and fed him, and promised to make him immortal and un-aging. But since no god can escape or deny the will of Zeus the aegis bearer, let him go, if Zeus so orders and commands it, let him sail the restless sea. But I will not convey him, having no oared ship, and no crew, to send him off over the wide sea’s back. Yet I’ll cheerfully advise him, and openly, so he may get back safe to his native land.” — Calypso to Hermes when he delivers Zeus’ request to send Odysseus swiftly on his way (Homer, The Odyssey, V.92-147)
“Consider this fact of modern life: Nearly all of the technological products that we buy and use are designed with planned obsolescence in mind. […]
The irony, however, is that the same Silicon Valley culture that produces these gadgets seems to be obsessed with living forever,” Allison Arieff writes in Life Is Short. That’s the Point.
“There are now people who refer to themselves as ‘longevity entrepreneurs,’ who see death not as a problem but rather as something to be eliminated. Instead of pursuing a good death, why die at all?
Beneath the surface of this quest for eternal life seems to be an unwillingness on the part of its proponents to imagine the world without themselves in it.
In a very fundamental way, this tendency is inhuman.”
In her new book, Natural Causes: An Epidemic of Wellness, the Certainty of Dying, and Killing Ourselves to Live Longer, Barbara Ehrenreich writes: “You can think of death bitterly or with resignation, as a tragic interruption of your life, and take every possible measure to postpone it. Or, more realistically, you can think of life as an interruption of an eternity of personal nonexistence, and seize it as a brief opportunity to observe and interact with the living, ever-surprising world around us.”
Arieff is taken by the notion that our experience of life, though unique to us, is just part of a broader continuum. “Our time here is but a blip, and when we leave, the great world continues to spin,” she notes. “As such, the appreciation of our own lives has much to do with the ever-increasing awareness of its relative brevity. It is this — an awareness and acceptance of our own mortality — that makes us human. And it is the impetus, I’d argue, for living our lives to the fullest.”
“I must die, must I? If at once, then I am dying: if soon, I dine now, as it is time for dinner, and afterwards when time comes I will die. And die how? As befits one who gives back what is not his own.” — Epictetus, Discourses (I.1)
A Stoic’s view on Silicon Valley’s quest for immortality comes from the philosophy professor Massimo Pigliucci. In How to Be a Stoic, Pigliucci writes, after explaining Epictetus’s views on death and mortality, that some people aren’t persuaded at all by the idea that “death itself is what gives urgent meaning to life.”
“On the contrary, a number of techno-optimists think that death is a disease that should be cured, and they are investing good money in the effort. Broadly speaking, they call themselves Transhumanists, and quite a few of them can be found among the white male millionaires of Silicon Valley, where many of the world’s most influential tech companies are located. Perhaps the most famous and influential of the bunch is Ray Kurzweil, a futurist (someone who thinks he can study and predict the future) currently working at Google to develop software capable of understanding natural language.
[…] Age sixty-eight at the time of this writing, he has been arguing for some time that the way to immortality will be to upload our consciousness into a computer, which he claims will be possible any day now. Indeed, we better manage that feat before the so-called Singularity, a term invented by the mathematician Stanislaw Ulam to describe the moment when computers outsmart people and begin to drive technological progress independently — and perhaps even in spite — of humanity itself.
This is not the place to explain why I think the whole idea of a Singularity is predicated on a fundamental misunderstanding of the nature of intelligence, or why ‘uploading’ our consciousness to a computer is extremely unlikely to be ever possible, since consciousness is neither a thing nor a piece of software. Here I’m more interested in the chutzpah displayed by people like Kurzweil as well as his almost cultlike following, who think of themselves as so important that they ought to, godlike, transcend the laws of nature itself, never mind the fact that they are spending inordinate amounts of money and energy that could be directed toward ameliorating actual, urgent problems the world faces right now, or the disastrous ethical and environmental consequences of their success (if it were possible). Who, exactly, would have access to the new technology, and at what price? If we succeed in becoming physically immortal — the alternative to uploading hoped for by some Transhumanists — will we keep having children? If so, how would an already diseased planet sustain the thirst for natural resources of a population that grows so relentlessly and manage its ever-escalating production of waste products? Ah, but we will expand beyond Earth! We shall colonize other worlds! Never mind that we still don’t know of any other inhabitable worlds in the galaxy, or that we have no clue about how to get to them, if they’re out there. The more I think about Transhumanism the more the word ‘hubris,’ famously invented by the Greeks precisely for such a thing, seems awfully appropriate.
The likes of Kurzweil simply don’t want to leave the party, it seems to me, no matter what the cost, and regardless of how privileged they have been while attending it.”
(More Stoicism from Brian D Earp, who wrote an interesting essay, Against mourning, on how it takes a lifetime of preparation to grieve as the Stoics did — without weeping and wailing, but with a heart full of love.)
“There is one thing I can be sure of: I am going to die. But what am I to make of that fact? This course will examine a number of issues that arise once we begin to reflect on our mortality. The possibility that death may not actually be the end is considered. Are we, in some sense, immortal? Would immortality be desirable? Also a clearer notion of what it is to die is examined. What does it mean to say that a person has died? What kind of fact is that? And, finally, different attitudes to death are evaluated. Is death an evil? How? Why? Is suicide morally permissible? Is it rational? How should the knowledge that I am going to die affect the way I live my life?” — Description of a course on death by Yale professor Shelly Kagan
In In Silicon Valley’s Quest to Live Forever, Tad Friend, a reporter at large for The New Yorker, also wonders whether billions of dollars’ worth of high-tech research will succeed in making death optional.
“Unsurprisingly, it was Google that transformed the Valley’s view of aging,” Friend writes. “Surprisingly, perhaps, it was the company’s Bill Maris who was in the vanguard. As the founder and CEO of Google Ventures, Maris led successful investments in companies such as Nest and Uber; he was amiable, admired, and financially secure — not an obvious modern-day alchemist.”
Maris decided to build a company that would solve death. The first problem was the long study time in humans: it’s hard to run clinical trials on subjects who take eighty years to die. The other problem was the immense difficulty of determining whether any seeming cause of aging was actually causal, or merely a correlative of some other, stealthier process. However, when Maris pitched his ideas to Google’s founders Sergey Brin and Larry Page, they both loved it. “We should do it here!” Page said. So in 2013, with a billion dollars in funding, Google launched Calico, short for the California Life Company.
According to the website, its “mission is to harness advanced technologies to increase our understanding of the biology that controls lifespan. We will use that knowledge to devise interventions that enable people to lead longer and healthier lives. Executing on this mission will require an unprecedented level of interdisciplinary effort and a long-term focus for which funding is already in place.”
“The reigning view among longevity scientists is that aging is a product not of evolutionary intent but of evolutionary neglect: we are designed to live long enough to pass on our genes, and what happens afterward doesn’t much matter. […] Eric Verdin, the CEO of the Buck Institute for Research on Aging, the leading nonprofit in the field, noted that ‘if you just kept aging at the rate you age between twenty and thirty, you’d live to a thousand. At thirty, everything starts to change.’ From that point, our risk of mortality doubles every seven years. We’re like salmon, only we die in slow motion.
The battle between healthspanners and immortalists is essentially a contest between the power of evolution as ordained by nature and the potential power of evolution as directed by man. The healthspanners see us as subject to linear progress: animal studies take the time that they take; life sciences move at the speed of life. Noting that median life expectancy has been increasing in developed nations by about two and a half years a decade, Verdin told me, ‘If we can keep that pace up for the next two hundred years, and increase our life spans by forty years, that would be incredible.’
The immortalists have a different view of both our history and our potential. They see centuries of wild theorizing (that aging could be reversed by heating the body, or by breathing the same air as young virgins) swiftly replaced by computer-designed drugs and gene therapies. Bill Maris said, ‘Health technology, which for five thousand years was symptomatic and episodic — Here are some leeches! — is becoming an information technology, where we can read and edit our own genomes.’
Many immortalists view aging not as a biological process but as a physical one: entropy demolishing a machine. And, if it’s a machine, couldn’t it be like a computer? Progress in computers, or anyway in semiconductors, has been subject to Moore’s Law, the exponential flywheel that has doubled capacity every two years. In linear progress, after thirty iterations you’ve advanced thirty steps; in exponential progress, you’ve advanced 1.07 billion steps. Our progress in mapping the human genome looked like it was linear, and then was revealed, once the doublings grew significant, as exponential.
[…]
Aging doesn’t seem to be a program so much as a set of rules about how we fail. Yet the conviction that it must be a program is hard to dislodge from Silicon Valley’s algorithmic minds. If it is, then reversing aging would be a mere matter of locating and troubleshooting a recursive loop of code. After all, researchers at Columbia University announced in March that they’d stored an entire computer operating system (as well as a fifty-dollar Amazon gift card) on a strand of DNA. If DNA is just a big Dropbox for all the back-office paperwork that sustains life, how hard can it be to bug-fix?”
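(An aside on the arithmetic in Friend’s linear-versus-exponential comparison above: thirty doublings is simply 2^30, which is where the “1.07 billion steps” figure comes from. A minimal sketch in Python, purely illustrative and assuming nothing beyond the standard library:)

```python
# Friend's linear-versus-exponential comparison, worked out:
# thirty linear iterations advance thirty steps, while thirty
# Moore's Law-style doublings advance 2**30 steps.
iterations = 30
linear_steps = iterations            # one step per iteration
exponential_steps = 2 ** iterations  # capacity doubles each iteration

print(linear_steps)                  # 30
print(f"{exponential_steps:,}")      # 1,073,741,824 — roughly 1.07 billion
```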
“Ray Kurzweil and Aubrey de Grey have the same backup plan if the work doesn’t advance as quickly as they expect: when they die, they will be frozen in liquid nitrogen, with instructions left to reawaken them once science has finished paving the road to immortality. Their optimism is admirable, and perhaps the anxieties that their blueprints stir up are just the standard resentments of the late adopters and the left-behinds. ‘People are daunted when they hear of these things,’ Kurzweil told me. ‘Then they say, I don’t know if I want to live that long.’ For Kurzweil, who has two children, the acceptance of inevitable death is no saner than the acceptance of early death. ‘It’s a common philosophical position that death gives meaning to life, but death is a great robber of meaning,’ he said. ‘It robs us of love. It is a complete loss of ourselves. It is a tragedy.’”
And yet …
Last year, the geneticist Nir Barzilai hosted a screening of a documentary about longevity, and afterward he posed a question to the three hundred people in the audience. Barzilai said, “In nature, longevity and reproduction are exchangeable. So Choice One is, you are immortalized, but there is no more reproduction on Earth, no pregnancy, no first birthday, no first love — and I go on and on and on. Choice Two is you live to be eighty-five and not one day sick, everything healthy and fine, and then one morning you just don’t wake up.” The vote was decisive: Choice One got ten or fifteen people, but everyone else raised their hands for Choice Two.
This, by no means, makes us modern ‘Odysseuses.’ After all, his prospects were rather dire when he declined immortality. But it seems most of us aren’t willing to join the gods at Mount Olympus either. Apparently, the real quest in life isn’t for immortality, but ‘simply’ for living well.
The nature of work is a social choice
“The insecure nature of work is a result of decisions by corporations and policymakers,” writes Louis Hyman in It’s Not Technology That’s Disrupting Our Jobs, an extract from his forthcoming book, Temp: How American Work, American Business and the American Dream Became Temporary.
“When we learn about the Industrial Revolution in school, we […] are taught that technological innovation drove social change and radically reshaped the world of work. Likewise, when we talk about today’s economy, we focus on smartphones, artificial intelligence, apps. Here, too, the inexorable march of technology is thought to be responsible for disrupting traditional work, phasing out the employee with a regular wage or salary and phasing in independent contractors, consultants, temps and freelancers — the so-called gig economy.”
But according to Hyman, this narrative is wrong. “The history of labor shows that technology does not usually drive social change. On the contrary, social change is typically driven by decisions we make about how to organize our world. Only later does technology swoop in, accelerating and consolidating those changes.” In short, the nature of work always remains a matter of social choice. It is a collection of decisions by corporations and policymakers, not a result of an algorithm.
The creation of factory technology during the Industrial Revolution was possible only because people’s relationship to work had already changed. A power loom would have served no purpose for networks of farmers making cloth at home, Hyman argues.
“The same goes for today’s digital revolution. While often described as a second machine age, our current historical moment is better understood as a second industrious revolution. It has been underway for at least 40 years, encompassing the collapse, since the 1970s, of the relatively secure wage-work economy of the postwar era — and the rise of post-industrialism and the service economy.”
The 1970s saw a new, strictly financial view of corporations that favored stock and bond prices over production, short-term gains over long-term investment. Theories of ‘lean’ organization, sold by management consultants and business gurus, became popular; work forces became expendable and jobs more precarious.
“Internet technologies have certainly intensified this development,” Hyman writes. “But services like Uber and online freelance markets like TaskRabbit were created to take advantage of an already independent work force; they are not creating it. Their technology is solving the business and consumer problems of an already insecure work world. Uber is a symptom, not a cause.
[…]
We can’t turn back the clock, but neither is job insecurity inevitable. Just as the postwar period managed to make industrialization benefit industrial workers, we need to create new norms, institutions and policies that make digitization benefit today’s workers. Pundits have offered many paths forward — ‘portable’ benefits, universal basic income, worker reclassification — but regardless of the option, the important thing to remember is that we do have a choice. Insecurity is not the inevitable cost of technological progress. Only by understanding that fact can we act to make capitalism work for us, not work us over.”
And also this …
“Lately, we also hear a refreshingly different voice,” says Oliver Reichenstein in Aesthetics. “Not more technology but a more human use is what we need. There now is a loud call for more ‘ethics in design.’ Ethics should define rules for designers, and since designers design everything users use, that will solve our problems or at least help solve them.”
Although this may sound promising, there are issues, says Reichenstein.
“Ethics is a philosophical discipline that allows us to rationally question a highly irrational dimension of our lives. Ethics questions the morality of what we do. Unfortunately, as a genuinely philosophical discipline, ethics doesn’t offer solutions. Ethics makes us think about what we should do and why. It prevents us from continuously falling for what we feel is right and asks us to think clearly. And that’s great. But it won’t automatically solve our problems.
[…]
Instead of opening a theoretical can of worms, it may be more helpful to start with what we know. To a designer, aesthetics might offer better access to good design […]. In everyday language, aesthetics are treated as a superficial, close-to-meaningless quality of visually pleasing objects. That is so unfair! Economically, aesthetics are a signature of overpriced luxury goods. Aesthetic objects say: ‘I am expensive, too expensive!’ We’re well trained in aesthetic cynicism. However, out of experience, we know that good things are rare, that quality always comes at a price and that the price tag of quality grows exponentially.
We also know that what is truly good is somehow beautiful, and what is truly beautiful is somehow good. It’s not a direct relationship, it’s a deeper connection, or a relationship higher up, outside the platonic cave, in the stark light of the blinding sun. It’s hard to explain. The goodness and the beauty are a different beauty, a different goodness. But then again they are the same… Hell, is it thinkable that the old-fashioned beauty could hold the key to a more humane design?”
During his presidency, Barack Obama read 10 letters from members of the public every day. In Dear Mr President (an edited extract from To Obama: With Love, Joy, Hate and Despair by Jeanne Marie Laskas), he reveals what they meant to him.
Here is part of the edited extract, also written by Jeanne Marie Laskas…
“Certainly what I learned during the presidency was that the office of the president itself carries enormous weight,” Obama went on. “And, sadly, probably where I learned that best was in moments of tragedy where you’d visit with grieving families. Sometimes they were in places where — I think it’s fair to say — I didn’t get a whole lot of votes. You know, after a tornado or a flood or a shooting. And what was clear was that my presence there signified to those families that they were important. Their loved ones were important. The grief they were feeling was important. That it had been seen and acknowledged.”
“That notion of being heard,” I said. “It seemed to be embedded throughout all of this.”
“I still believe it,” he said. “I think this whole letter-writing process and its importance reflected a more fundamental vision of what we were trying to do in the campaign and what I was trying to do with the presidency and my political philosophy. The foundational theory, it probably connects to my early days organising. Just going around and listening to people. Asking them about their lives, and what was important to them. And how did they come to believe what they believe? And what are they trying to pass on to their children?”
He looked straight ahead, at a spot somewhere near his feet propped up on the coffee table.
“I learned in that process that if you listen hard enough, everybody’s got a sacred story,” he said. “An organising story, of who they are and what their place in the world is. And they’re willing to share it with you if they feel as if you actually care about it. And that ends up being the glue around which relationships are formed, and trust is formed, and communities are formed. And ultimately — my theory was, at least — that’s the glue around which democracies work.”
“Listening,” I said.
“Yeah,” he said. “I don’t want to suggest that I would have necessarily described it in a sort of a straight line from when I started running. But I do think that that was pretty embedded in our campaign philosophy. I think that’s how we won Iowa, by having a bunch of young kids form those relationships because they were listening to people. It wasn’t us selling a policy manifesto, and it wasn’t even because we were selling me. It was because some young person in a town they’d never been to went around and talked to people, and listened to them, and saw them. And created the kinds of bonds that made people want to then try to work together.”
He was talking about [the Director of Presidential Correspondence, Fiona Reeves,] and all the people like her who knocked on doors.
“It’s the power of empathy not as an end-all, be-all,” he said. “Because even after you’ve listened to somebody or seen them, they still have a concrete problem. They’ve lost their house. They’ve lost their job. They disagree with you on abortion. They think that you’re pulling troops out of Afghanistan too soon and, you know, potentially betraying the sacrifices that have been made by the fallen. There are all these concrete issues that are real. And there are real conflicts and real choices.
But what this form of story sharing and empathy and listening does is it creates the conditions around which we can then have a meaningful conversation and sort through our differences and our challenges, and arrive at better decisions because we’ve been able to hear everybody. Everybody feels heard so that even if a decision’s made that they don’t completely agree with, then at least they feel like: ‘OK, I was part of this. This wasn’t just dumped on me.’”
“The 39th president of the United States lives modestly, a sharp contrast to his successors, who have left the White House to embrace power of another kind: wealth,” write Kevin Sullivan and Mary Jordan in The un-celebrity president, a beautiful portrait of Jimmy Carter, who has been an ex-president for almost 37 years, longer than anyone else in history. His simple lifestyle is increasingly rare in this era of President Trump, a billionaire with gold-plated sinks in his private jet, Manhattan penthouse and Mar-a-Lago estate.
A few lines from a highly recommended read, accompanied by wonderful pictures by Matt McClain…
“When Carter left the White House after one tumultuous term, trounced by Ronald Reagan in the 1980 election, he returned to Plains, a speck of peanut and cotton farmland that to this day has a nearly 40 percent poverty rate.
The Democratic former president decided not to join corporate boards or give speeches for big money because, he says, he didn’t want to ‘capitalize financially on being in the White House.’
Presidential historian Michael Beschloss said that Gerald Ford, Carter’s predecessor and close friend, was the first to fully take advantage of those high-paid post-presidential opportunities, but that ‘Carter did the opposite.’
Since Ford, other former presidents, and sometimes their spouses, routinely earn hundreds of thousands of dollars per speech.
‘I don’t see anything wrong with it; I don’t blame other people for doing it,’ Carter says over dinner. ‘It just never had been my ambition to be rich.’”
“Carter has been notably quiet about President Trump. But on this night, two years into Trump’s term, he’s not holding back.
‘I think he’s a disaster,’ Carter says. ‘In human rights and taking care of people and treating people equal.’
‘The worst is that he is not telling the truth, and that just hurts everything,’ Rosalynn [Carter’s wife of 72 years] says.
Carter says his father taught him that truthfulness matters. He said that was reinforced at the U.S. Naval Academy, where he said students are expelled for telling even the smallest lie.
‘I think there’s been an attitude of ignorance toward the truth by President Trump,’ he says.
Carter says he thinks the Supreme Court’s Citizens United decision has ‘changed our political system from a democracy to an oligarchy. Money is now preeminent. I mean, it’s just gone to hell now.’
He says he believes that the nation’s ‘ethical and moral values’ are still intact and that Americans eventually will ‘return to what’s right and what’s wrong, and what’s decent and what’s indecent, and what’s truthful and what’s lies.’
But, he says, ‘I doubt if it happens in my lifetime.’”
In his forthcoming book, How Fascism Works, the Yale philosophy professor Jason Stanley suggests that, to the fascist politician, corruption is about the corruption of purity rather than of the law. Officially, the fascist politician’s denunciations of corruption sound like a denunciation of political corruption. But such talk is intended to evoke corruption in the sense of the usurpation of the traditional order.
So what Trump’s supporters fear most isn’t the corruption of American law, but the corruption of America’s traditional identity, Peter Beinart writes in Why Trump Supporters Believe He is Not Corrupt. “Cohen’s admission makes it harder for Republicans to claim Trump didn’t violate the law. But it doesn’t really matter. For many Republicans, Trump remains uncorrupt — indeed, anti-corrupt — because what they fear most isn’t the corruption of American law; it’s the corruption of America’s traditional identity. And in the struggle against that form of corruption — the kind embodied by Cristhian Rivera — Trump isn’t the problem. He’s the solution.”
“If there were any doubt at all that Anish Kapoor’s work Descent into Limbo is a big hole with a 2.5-metre drop, and not a black circle painted on the floor, it has been settled. An unnamed Italian man has discovered to his cost that the work is definitely a hole after apparently falling in it,” writes Mark Brown in Holed up.
“What can I say? It is a shame,” said Kapoor…
“The health and safety risks of artworks are a regular predicament for museums and curators. When the Colombian artist Doris Salcedo installed Shibboleth, a gigantically long crack in the floor of the Turbine Hall of Tate Modern, it was almost inevitable that some people would trip on it. Despite warnings for visitors to watch their step, at least 10 people did just that, although no one was badly hurt.”
“Who says that romance is dead? You turn a corner in a quiet spot of central London (there are a few), expecting perhaps some more of the dutiful brick or render that has become the new normal in residential development, and see instead a screen of limestone, solid and load-bearing as masonry should be. Better, the full, glorious range of the material’s surfaces are on show, rough where it has cleaved along natural faults, and smooth where it has been worked by masons, sometimes scored with parallel lines in the process of extraction,” Rowan Moore writes about Amin Taha’s latest building in the Borough of Islington, London: 15 Clerkenwell Close.
“Taha is not universally popular: while he was shortlisted for last year’s Stirling prize, and picked up two Riba awards this year, 15 Clerkenwell Close has curiously been nominated for the Carbuncle Cup, the anti-Stirling prize run by the magazine Building Design for the worst building of the year. He is currently engaged in a planning dispute over the building, which could end in its enforced demolition,” Moore writes.
“I can understand that this Miesian-Flintstone building is not to everyone’s taste. It’s a touch gawky, a bit raw. I was taken aback myself when I first saw it under construction — it makes a lot more sense as a finished work. But people like Taha are working hard to raise our experiences of the built environment. Councils should not be in the business of crushing them.”
Piers Gough writes in Rock of Ages: “Kurt Vonnegut reported that his architect father lamented that modern architecture was ultimately uninteresting because it lacked any element of chance. Indeed in this post-craft and liability-obsessed age, it is an absolute article of faith of architectural practice that every aspect of the design of buildings is nailed down to the smallest possible degree. Most of us subscribe to the total control theory that the more perfect the drawings and specifications, the more perfect the resulting building. Amin Taha — while excelling in technical innovation — positively relishes the possibility of imperfections in buildings, reflecting our own imperfections across the range from historic memory to construction techniques. In doing so he has produced some of the most surprising and exciting buildings to grace the capital in recent years. He is the kind of architect I suspect many of us always wished we could be: bold, brave, inventive and poetic.”
“No, facts are not in the eye of the beholder,” Chris Cuomo said.
“Yes it is — yes they are. Nowadays they are,” Rudy Giuliani asserted.
— An exchange between Rudy Giuliani, attorney to Donald Trump, and the CNN journalist Chris Cuomo, which, according to CNN’s editor-at-large Chris Cillizza, symbolises current US politics in a nutshell.
If you made it to the end of this week’s Random finds, which is a Herculean achievement in and of itself, your reward is at least some beautiful music, including William Walton’s Cello Concerto, performed by the Dutch cellist Pieter Wispelwey and the Sydney Symphony Orchestra conducted by the late Sir Jeffrey Tate.
All recordings from Pieter Wispelwey — Walton Cello Concerto, Bloch, Ligeti, Britten (Onyx, 2009).