Post scriptum (2022, week 25) — Reading ourselves to death, fashion has abandoned human taste, and the perils of smashing the past
Post scriptum is a weekly curation of my tweets. It is, in the words of the 16th-century French essayist and philosopher Michel de Montaigne, “a posy of other men’s flowers, and nothing but the thread that binds them is mine own.”
In this week’s Post scriptum: Each day, we see as many as 490,000 words — more than ‘War and Peace’; funny stuff starts to happen when you take creative decisions out of the hands of humans; reestablishing a common sense of belonging and ownership of the future; what’s worse: climate denial or climate hypocrisy?; Michael Sandel explains how deepening inequalities have corroded social ties; the dangers of mistaking the trivialities of ‘Silicon Valley Stoicism’ for the ancient school of Stoicism; how great works of art are reduced to moralizing message-delivery systems; Van Gogh’s ‘empty chair’ paintings; and, finally, Paul McCartney at 80.
But first, the overturn of Roe vs. Wade by the US Supreme Court and Margaret Atwood, who thought she was writing fiction in ‘The Handmaid’s Tale.’
“The Alito opinion purports to be based on America’s Constitution. But it relies on English jurisprudence from the 17th century, a time when a belief in witchcraft caused the death of many innocent people. The Salem witchcraft trials were trials — they had judges and juries — but they accepted ‘spectral evidence,’ in the belief that a witch could send her double, or specter, out into the world to do mischief. Thus, if you were sound asleep in bed, with many witnesses, but someone reported you supposedly doing sinister things to a cow several miles away, you were guilty of witchcraft. You had no way of proving otherwise.
Similarly, it will be very difficult to disprove a false accusation of abortion. The mere fact of a miscarriage, or a claim by a disgruntled former partner, will easily brand you a murderer. Revenge and spite charges will proliferate, as did arraignments for witchcraft 500 years ago.
If Justice Alito wants you to be governed by the laws of the 17th century, you should take a close look at that century. Is that when you want to live?”
Reading ourselves to death
“The journalist Nick Bilton has estimated that each day the average Internet user now sees as many as 490,000 words — more than War and Peace. If an alien landed on Earth today, it might assume that reading and writing are our species’ main function, second only to sleeping and well ahead of eating and reproducing,” Kit Wilson writes in Reading Ourselves to Death.
“Our immersion in the written word is but one ingredient in a cocktail of changes we have experienced thanks to cell phones and the Internet, and filtering out all the other factors and isolating the consequences of just text is impossible. Even if we could, we would have to account for the quality of reading too, as much of it involves skimming and darting around the page. But the sheer quantity matters. As both literacy theorists and neuroscientists attest, reading and writing have a profound effect on the way we think.
Consider the experience of reading. From a few signs, we summon into existence a whole world between our ears, our heads becoming a miniature snow-globe simulacrum of reality, within which an infinite number of characters and objects and scenarios come alive. The act requires all sorts of imaginative effort — we are costume designer, set designer, sound designer, and casting director all for the tiny holographic play going on in our heads. ‡
Unlike photos or videos, words give reality a structure that isn’t there in and of itself. This may be part of what we find so enticing. Think about what it feels like when you put down your phone after a bumper session of doomscrolling through the day’s awful news. It’s the psychological equivalent of stepping off a merry-go-round and expecting the world to keep spinning. We are so used to our screens bombarding us with text — news, tweets, emails — that we are almost surprised to discover that the walls around us have nothing to say. The sudden absence of words — the evaporation of the sense of control they give us — feels disorienting. A sentence, even a bad one, means something; its syntax has a neat logic. The world is thus packaged into manageable chunks. Our immediate surroundings, in contrast, feel curiously structureless and amorphous — the sound of traffic outside or the sensation of cold air on our skin means … what?
This is why when I’m out and about and have left my phone somewhere, or it has run out of battery, I find myself desperate to read something, anything, even a leaflet or a menu. Simply watching the world go by, observing and reflecting on it, is too shapeless an experience. I want the world narrated, to have clear, meaningful sentences fed to me,” Wilson writes.
He even wonders whether this craving for text isn’t a big part of what makes our phones so addictive.
“Consider the historical trajectory. First, we developed language, which, as Nietzsche pointed out, led us to believe that our words were the same thing as the reality to which they referred. Then we invented the written word, which codified and solidified language even more. Then we created the modern world, in which we now readily assume that models of reality encoded in computer language are all there is. Modern society, as the neuroscientist Joseph Bogen put it back in 1975, is ‘a scholastized, post-Gutenberg-industrialized, computer-happy exaggeration of the Graeco-Roman penchant for propositionizing.’
What now? Many literacy theorists and early Internet pioneers spoke of a coming ‘secondary orality,’ expecting that digital technology would relieve us of the excesses of writing by introducing new, predominantly voice- and video-based, forms of communication and entertainment. This mostly hasn’t materialized. And there is good reason why. Text is far and away the most suited to the age of data. We increasingly try to fit all our experience into a digital spreadsheet, and written words can be logged, searched, counted, isolated, and edited or deleted far more easily than anything else. They allow us to construct a simplified parallel reality alongside the unbearably ineffable one.
The anthropologist Joseph Henrich has suggested that the rise of literacy in the West helped to produce a certain mindset that he calls WEIRD — for ‘Western, Educated, Industrialized, Rich, and Democratic’ — that excludes some aspects of reality in favor of others. The neuroscientist Stanislas Dehaene has speculated that literacy could ‘displace and dislodge’ the older functions of parts of the brain that contribute, in non-literate cultures, to a particular sensitivity to things like our immediate environment. Writing, as the Native American activist Russell Means would have it, is a way of seemingly controlling the world, a way of shearing it of the intangible. It is ‘the imposing of an abstraction over the spoken relationship of a people.’ Maybe we just don’t know anymore what to do with the experience of experience — and putting it into writing, of course, only adds to the problem.
That’s a shame, because we should be glad the walls really aren’t talking, even as we seem ever more eager to stay locked in this room.”
‡ “Reading a novel, according to a 2013 Emory University study, can activate specific parts of the brain associated with the actions one is reading about. If the protagonist in the story is being chased, for example, your brain behaves in some ways as if you were being chased. […] The researchers found that some areas of gray matter activated by reading can remain fired up for several days. Reading encourages us to put outside reality on hold, to construct a parallel world in our minds, and retreat into it.”
Fashion has abandoned human taste
“When you take creative decisions out of the hands of actual humans, some funny stuff starts to happen. For most of the 20th century, designing clothes for mass consumption was still dependent in large part on the ideas and creative instincts of individuals, according to Shawn Grain Carter, a professor of fashion business management at the Fashion Institute of Technology and a former retail buyer and product developer. Even most budget-minded clothing retailers had fashion offices that sent people out into the world to see what was going on, both within the industry and in the culture at large, and find compelling ideas that could be alchemized into products for consumers. […] Development and design work still involved plenty of unglamorous business concerns — sell-through rates, product mix, seasonal sales projections — but the process relied on human taste and judgment. Designers were more likely to be able to take calculated risks.
At the end of the 1990s, things in fashion started to change. Conglomeration accelerated within the industry, and companies that had once been independent businesses with creative autonomy began to consolidate, gaining scale while sanding off many of their quirks. Computers and the internet were becoming more central to the work, even on the creative side. Trend-forecasting agencies, long a part of the product-development process for the largest American retailers, began to create more sophisticated data aggregation and analysis techniques, and their services gained wider popularity and deeper influence. As clothing design and trendspotting became more centralized and data-reliant, the liberalization of the global garment trade allowed cheap clothing made in developing countries to pour into the American retail market in unlimited quantities for the first time. That allowed European fast-fashion companies to take a shot at the American consumer market, and in 2000, the Swedish clothing behemoth H&M arrived on the country’s shores.
[…] The business model uses cheap materials, low foreign wages, and fast turnaround times to bombard customers with huge numbers of new products, gobbling up market share from slower, more expensive retailers with the promise of constant wardrobe novelty for a nominal fee,” Amanda Mull writes in Fashion Has Abandoned Human Taste.
“For the average shopper, this opacity can magnify the sense that a particular style has become inescapable overnight, largely unbidden. Who asked for all these tops with holes in the sleeves? Were people’s shoulders getting too hot? An idea that would have been moderately popular a few decades ago, before petering out naturally, now sticks around in an endless present, like an unattended record that has begun to skip. Shoppers may encounter the farcical limits of algorithmic selling on a regular basis, but those limits are more plain when Amazon is trying to sell you a second new kitchen faucet, after interpreting your DIY repairs as an indicator of a potential general interest in plumbing fixtures. With clothes, the technology is less obviously stupid, and more insidious. We know you love these shirts, because you’ve already bought three like them. Can we interest you in another? Frequently enough — which may be just one in every 100,000 people who see the product — the answer is yes, and the record skips on.
This problem is not limited to fashion. As creative industries become more consolidated and more beholden to producing ever-expanding profits for their shareholders, companies stop taking even calculated risks. You get theaters full of comic-book adaptations and remakes of past hits instead of movies about adults, for adults. Streaming services fill their libraries with shows meant to play in the background while you scroll your phone. Stores stock up on stuff you might not love, but which the data predict you won’t absolutely hate. ‘You have too many fashion companies, both on the retail side and the manufacturing side, being driven by empty suits,’ Grain Carter said. Consumable products are everywhere, and maybe the most we can hope for is that their persistent joylessness will eventually doom the corporations that foist them upon us.”
The perils of smashing the past
“The fearful and fearsome reaction against growing inequality, social dislocation and loss of common identity in the midst of today’s vast wealth creation, unprecedented mobility and ubiquitous connectivity is a mutiny, really, against globalization so audacious and technological change so rapid that it can barely be absorbed by our incremental nature,” Nathan Gardels writes in The Perils Of Smashing The Past.
“In this accelerated era, future shock can feel like repeated blows in the living present to individuals, families and communities alike. In this one world, it sometimes seems, a race is on between the newly empowered and the recently dispossessed.
This emergent world appears to us as a wholly unfamiliar rupture from patterns of the past that could frame a reassuring narrative going forward. Rather, the new territory of the future is described by philosophers as ‘plastic’ or ‘liquid,’ shapelessly shifting as each disruptive innovation or abandoned certitude washes away whatever fleeting sense of meaning that was only just embraced. A kind of foreboding of the times that have not yet arrived, a wariness about what’s next, settles in. Novelists like Jonathan Franzen see a ‘perpetual anxiety’ gripping society. Similarly, Turkish novelist Orhan Pamuk, citing William Wordsworth, speaks of ‘a strangeness in my mind,’ the sense that ‘I am not of this hour nor of this place.’
Social thinkers have long noted the relationship between such anxiety or sense of threat and the reactive fortification of identity. The greater the threat — of violence, upheaval or insecurity — the more rigid and ‘solitarist’ identities become, as Amartya Sen noted in his seminal book Identity and Violence. Intense threats, or their perception, demote plural influences in the lives of persons and communities alike and elevate a singular dimension to existential importance. Conversely, stability, security and inclusivity generate adaptive identities with plural dimensions.
The lesson here is that political and cultural logic, rooted in emotion, identity and ways of life cultivated among one’s own kind, operate in an entirely different frame than the rational and universalizing ethos of economics and technology. Far from moving forward in lockstep progress, when they meet, they clash,” Gardels writes.
“Historical experience has regrettably demonstrated over and over again that, when real or perceived threats abound, practical politics departs from rational discourse and becomes about friends vs. enemies; us vs. them. It becomes about organizing the survival and sustenance of a community as defined by those who are not part of it.
What is clear now, as when the Industrial Revolution accelerated during Marinetti’s time, is that history is fast approaching an inflection point. We live either on the cusp of an entirely new era, or on the brink of a return to an all-too-familiar, regressive and darker past. How to reconcile these opposite movements is the daunting summons for governance in the decades ahead.
In open societies, that means, above all, repairing the dysfunction of democracies by updating the institutions for deliberating social choices by inviting the non-electoral collective intelligence of the broader civil society into governance as a complement to elected representative bodies. Inclusiveness mediated by the reasoned practices of negotiation and compromise can dissolve division. It is the only way to find that point of equilibrium between creation and destruction that can buffer the damage of dislocation that at first outweighs longer-term benefits.
Reestablishing a common sense of belonging and ownership of the future in this way is the precondition for ‘taking back control.’ Without that, there is little hope of reaching a governing consensus that responds to the challenges on both the near and far horizon and avoiding the kind of future the Italian futurists didn’t foresee in their own time.”
The Perils Of Smashing The Past | NOEMA
When Futurism Led to Fascism — and Why It Could Happen Again
In the margins
“[The] basic phenomenon, in which powerful people make climate pledges that turn out to wildly outrace their genuine commitments, has now become so pervasive that it begins to look less like venality by any one person or institution and more like a new political grammar. The era of climate denial has been replaced with one plagued by climate promises that no one seems prepared to keep.
For years, when advocates lamented the ‘emissions gap,’ they meant the gulf between what scientists said was necessary and what public and private actors were willing to promise. Today that gap has almost entirely disappeared; it has been estimated that global pledges, if enacted in full, would most likely bring the planet 1.8 degrees Celsius of warming — in line with the Paris agreement’s stated target of ‘well below two degrees’ and in range of its more ambitious goal of 1.5 degrees. But it has been replaced by another gap, between what has been pledged and what is being done. In June, a global review of net-zero pledges by corporations found that fully half of them had laid out no concrete plan for getting there; and though 83 percent of emissions and 91 percent of global G.D.P. are now covered by national net-zero pledges, no country — not a single one, including the 187 that signed the Paris agreement — is on track for emissions reductions in line with a 1.5 degree target, according to the watchdog group Climate Action Tracker.
In trading denial for dissonance, a certain narrative clarity has been lost. Five years ago, the stakes were clear, to those looking closely, but so were the forces of denial and inaction, which helps explain the global crescendo of moral fervor that appeared to peak just before the pandemic. Today the rhetorical war has largely been won, but the outlook grows a lot more confusing when everyone agrees to agree, paying at least lip service to the existential rhetoric of activists.”
From: What’s Worse: Climate Denial or Climate Hypocrisy?, by David Wallace-Wells (The New York Times)
“[Michael Sandel] argues that liberalism promotes ‘an impoverished conception of the self’ rooted in the ‘conviction that ultimately we are self-made and self-sufficient.’ Widespread belief in this ‘unencumbered self’ leads people to ‘lose sight of their indebtedness’ to their families, teachers, communities, even ‘the times in which they live.’
Liberalism’s second core mistake is to believe that if we agree on fair political procedures, we can leave individuals free to disagree about substantive values. Faith in free trade is linked to this, since liberals falsely believe markets ‘can spare us the need to engage in messy controversial debates about how to value goods, messy debates about competing conceptions of the good life and the common good.’
At the same time, liberals since the Enlightenment have embraced Montesquieu’s concept of Doux commerce, the belief that international trade creates an interdependence that deters war. Sandel says he was ‘always sceptical of that claim.’ Putin’s invasion of Ukraine has completely exploded it.
For Sandel, decades of liberal optimism and individualism have led not only to ‘an erosion of the social fabric’ but ‘an impoverished, hollowed public discourse.’ He wants ‘a new framing,’ one that fully acknowledges our interdependence and doesn’t shy away from difficult questions of value. Sandel has been building that frame for decades, and the case for its ability to hold the political big picture has never been more persuasive.”
From: Michael Sandel: There is a growing tendency for those on top to believe that their success is their own doing, by Julian Baggini (Prospect Magazine)
“Under a Stoic framework, virtue is a form of knowledge that shapes your whole personality. It does not exist in a vacuum. In other words, your propensity to engage in ‘good’ or ‘bad’ behavior can only be revealed in your interactions and relationships with other living beings and the environment. The problem as I see it is that both Carl O’Brien and Dannica Fleuss, in their respective articles [see here, and here], fail to take into account the four Stoic virtues and how they underpin Epictetus’ worldview. Their absence makes it such that both authors, despite their best intentions, end up not evaluating Stoicism or the wisdom of the Stoic philosophers per se, but rather the ‘wisdom’ of the Silicon Valley crowd, who have uprooted the virtue ethics that Stoicism is built upon and replaced them with ‘Stoic’ life hacks. The latter promote resilience and cold showers, as a means to gain personal wealth and status, rather than for the fulfillment of a Stoic-based obligation to themselves and others to ensure that the world progresses towards virtue.
I have spoken at length on the dangers of mistaking the trivialities of ‘Silicon Valley Stoicism’ (including here, and here) for the ancient school of Stoicism, but I think it is important to underline them briefly here for the sake of clarity. ‘Silicon Valley Stoicism’ is the form of ‘Stoicism’ that most people come across outside of the classroom or scholarly articles, normally in the form of mainstream and social media soundbites. This is unsurprising given the connections, power and general public interest that disruptive tech start-ups enjoy. It is also just as unsurprising that this form of ‘Stoicism’ is described as a trendy adaptation of ancient self-help that can help young entrepreneurial types ‘get ahead’ because that is in effect what this version promotes. The problem is that with the weight of an entire ancient philosophy behind them, the adherents of the Silicon Valley form gain credibility and bolster their image as affluent and hip — characteristics they also see, if they gloss over the virtue ethics, in the Roman Stoics, especially Seneca the Younger and the Emperor Marcus Aurelius. In this respect, Fleuss’ claim that Stoicism does not and cannot adequately address systemic injustices is no different from that of various mainstream journalists who have started out with the false premise that Silicon Valley’s philosophy is Stoicism, and in doing so were able to craft a polemic piece that dismisses Stoic ideas and the underlying philosophy as an inappropriately elitist, ‘white,’ egotistical and macho framework that does little for people who do not benefit from the status quo.
However, in order to hold such a view, you have to overlook the whole virtue ethics framework of Stoicism. Take the Silicon Valley personal development guru Tim Ferriss […]. Whilst it is true that Ferriss promotes characteristics that the ancient Stoics themselves prized, including resilience and mental fortitude, his reasons for doing so are egotistical, materialistic and hedonistic. Such an approach is hardly synonymous with Stoicism. For one thing, nothing about being a start-up star, millionaire or billionaire (which most of us will never be) implies that you will be happy or better in the things that Zeno called for, i.e. the four virtues of courage, justice, temperance and wisdom, delivered in the context of the social and political advocacy for the good of the Whole, as both natural and obligatory for a good human life. The essence of Zeno’s perspective is captured by Epictetus’ teacher:
‘Evil consists in injustice and cruelty and indifference to a neighbour’s trouble, while virtue is brotherly love and goodness and justice and beneficence and concern for the welfare of one’s neighbour.’ (Musonius Rufus, Lecture 14:9)
In this respect, if Fleuss had entitled her piece ‘Stand Up! Don’t be a Silicon Valley Stoic’ I would have fully agreed with her, as the concerns about self-help ‘mindfulness’ and the lack of collective social action that this form of ‘Stoicism’ supports would have been warranted. I also believe that the resulting nuanced piece would have led to a very much needed discussion on the problem of using philosophy to support dubious claims — something that Donna Zuckerberg has written on (Zuckerberg, 2018).”
“Artworks are not to be experienced but to be understood: From all directions, across the visual art world’s many arenas, the relationship between art and the viewer has come to be framed in this way. An artwork communicates a message, and comprehending that message is the work of its audience. Paintings are their images; physically encountering an original is nice, yes, but it’s not as if any essence resides there. Even a verbal description of a painting provides enough information for its message to be clear.
This vulgar and impoverishing approach to art denigrates the human mind, spirit, and senses. From where did the approach originate, and how did it come to such prominence? Historians a century from now will know better than we do. What can be stated with some certainty is that the debasement is nearly complete: The institutions tasked with the promotion and preservation of art have determined that the artwork is a message-delivery system. More important than tracing the origins of this soul-denying formula is to refuse it — to insist on experiences that elevate aesthetics and thereby affirm both life and art.
In the popular imagination, the great corrupter of the visual arts is the art market, with its headline-making, eight-figure auction house sales of works by living artists. The secondary art market is indeed obscene, but to blame the market for all that’s wrong with contemporary art is to disregard the no less pernicious motives of the apparatus of messaging that is foisted upon artworks by nonmarket institutions and their attendant bureaucracies. Private and public museums and galleries; colleges and universities; the art media; nonprofit, for-profit, and state-run agencies and foundations: These institutions adjudicate which living artists are backed financially, awarded commissions, profiled, taught in classrooms, decorated with prizes, publicized, and exhibited.
Institutional bureaucrats, not billionaires, have the power to constrain the possibilities for aesthetic development in the present. The figure of the contemporary artist we know today is an invention of the bureaucrats. He, like them, is a managerial type: polished, efficient, a very moderate, top-shelf drinker. His CV is always up to date. He worries about climate change. The likelihood he graduated from an Ivy League university is especially high; he may himself be a tenured professor (a near given for literary artists).
The nonmarket institutions of the art world, all vanguards of the progressive movement, have telegraphed that such a profile is compulsory for artists. They should be camera-ready and, if nonwhite, eager to discuss matters of identity. Like shrapnel, the words ‘justice,’ ‘legacies,’ ‘confront,’ and ‘decenter’ ideally will litter any personal statements on their work. To conform to these expectations is to be savvy, a prerequisite for success. Such is the figure of the institutionally backed artist.
How any one individual chooses to pursue his career is not of particular interest to me. There will always be artists who are, and those who are not, corruptible, whether their patrons are the Medici, the CIA, or the Mellon Foundation. Bad art, wonderfully, is in the end forgotten. As tiresome, didactic, and predictable as much contemporary art may be, I venture that a different corruption by the institutional bureaucrats should trouble art lovers more. While the market has turned artworks into mere commodities, the vast machinery of the art world has turned artworks into artifacts, by zealously, and almost exclusively, upholding the artwork as an entity with a message to convey.”
“Shortly before Vincent van Gogh cut off his left ear and had a breakdown after quarrelling with his fellow artist, Paul Gauguin, in the French city of Arles in 1888, he created a pair of extraordinary paintings.
One, Gauguin’s Chair, depicts a couple of books and a lit candle discarded on an ornate armchair. The other, Van Gogh’s Chair, shows a tobacco pipe and pouch on a rustic wooden chair and is instantly recognisable as one of the most famous paintings in the world.
Now, the mystery of how the diptych of paintings came to be split up — and why the picture of Gauguin’s chair was kept in the family collection while Van Gogh’s Chair was sold off — has finally been solved. The answer lies in the decision of Johanna Bonger [neglected by art history for decades, Jo van Gogh-Bonger, the painter’s sister-in-law, is finally being recognized as the force who opened the world’s eyes to his genius; see also Reading notes (2021, week 18)], who inherited the paintings as the widow of Van Gogh’s brother, Theo, not to exhibit the masterpieces together in the decades after Van Gogh’s death in 1890 due to her ‘dislike of Gauguin.’
‘Johanna never showed Gauguin’s Chair, while Van Gogh’s Chair was promoted as a really important piece of art,’ said Louis van Tilborgh, senior researcher at the Van Gogh museum and professor of art history at the University of Amsterdam […]. He thinks that the reason Bonger did not want to exhibit the painting was that she disliked Gauguin after the French artist publicly belittled his former friend. ‘Gauguin, very early on, spread the word that Van Gogh was not only mad but also that he, Gauguin, had to teach Van Gogh how to paint. I think Bonger knew that and my conclusion is that, for that reason, she didn’t want to put those two pictures together.’”
From: Revealed: why Van Gogh’s ‘empty chair’ paintings were never shown together (The Observer)
“Paul McCartney never claimed to be the voice of a generation. He never portrayed himself as a rebel. He is, at heart, a song and dance man. Out of John, Paul, George, and Ringo, Paul is the only one who would have had a career in music if Elvis had never come along. At the age of 14, he wrote ‘When I’m 64’ because, as he later explained, rock and roll didn’t exist and it was the kind of song that might sell. He could imagine Sinatra covering it. Children’s songs, music hall, Scottish folk ballads, heavy rock, gospel, ragtime. It’s all grist to the mill of the supreme tunesmith.” — Christopher J. Snowdon in McCartney at 80