Reading notes is a weekly curation of my tweets. It is, as Michel de Montaigne wrote in one of his many essays, “a posy of other men’s flowers and nothing but the thread that binds them is mine own.”
In this week’s edition: Not all uncertainties are equal; why big tech’s favourite buzzword is nonsense; total agreement as the most painful characteristic of conversation; what ‘being free’ means in a pandemic; living at the ‘hinge of history’; there’s no such thing as a self-made billionaire; seeing our own reflection in Albrecht Dürer; the turmoil of Caravaggio’s paintings; a beautiful mountain retreat on the outskirts of Bogotá; and, finally, Noam Chomsky and the question ‘who do you vote against?’
The value of uncertainty
“Understanding our own relationship with uncertainty has never been more important, for we live in unusually challenging times,” Mark Miller, Kathryn Nave, George Deane and Andy Clark write in The value of uncertainty.
“By better understanding both the varieties and the value of uncertainty, and recognising the immense added value of turning our own uncertainties and expectations into concrete objects apt for test and challenge, we become better able to leverage the power of our own predictive brains,” Miller et al argue.
“The desire to escape a predictable life is a familiar theme in literature. […] Yet the decision to open oneself up to total uncertainty, merely for its own sake, is far from the norm. Humans are creatures of habit.”
But apparently, not all uncertainties are equal.
In their paper Different varieties of uncertainty in human decision-making, the neuroscientists Amy Bland and Alexandre Schaefer distinguish three categories of uncertainty: expected uncertainty, unexpected uncertainty, and volatility.
“Expected uncertainty is task-salient uncertainty that is already predicted by an existing mental (generative) model — a set of structured knowledge that enables us to generate local predictions in ways nuanced by context and current task,” Miller et al write. “Unexpected uncertainty arises when — for example — an environmental change causes us to become uncertain about our own generative model.”
To illustrate the difference, they borrow an example from Bland and Schaefer’s paper: “Suppose I know that a certain restaurant has dishes I like about 80 per cent of the time, so that eight out of 10 visits will tend to yield a happy outcome. The uncertainty about the offerings of the day is then an expected uncertainty: one that I can work with as I plan my outings. By contrast, if the restaurant suddenly changes its chef, my estimates are immediately unreliable. I am thrust into the land of unexpected uncertainty. […] When confronted with unexpected (salient) uncertainty, our brain reacts by increasing its learning rate, encouraging the kinds of plastic change needed to update the predictive model — for example, by starting to learn about the typical menus created by that new chef. Over time, the upshot should be a revised model, one in which (let’s imagine) I expect dishes I like to be served only about five times during any 10 visits to the restaurant. This might be my cue to move into exploratory mode and try another restaurant.
This form of uncertainty can in fact be very beneficial […] This is just the sort of uncertainty that can help us break out of bad habits and escape […] good enough solutions that fall far short of what we might achieve by pushing on through,” Miller et al write.
Volatility, the final kind of uncertainty, is the most challenging of all. “[It] names a situation in which the frequency of changes in the environmental contingencies are themselves rapidly changing. […] This is like playing dice with pieces that are sometimes fair, sometimes loaded one way, sometimes loaded another way, sometimes loaded yet another way, and so on. It is a world in which the frequency of change in the underlying ‘rule structure’ is high. This is a world that is highly resistant to informative learning, apart from learning that it is indeed such a world — and hence assigning lower confidence to all our estimates of target states,” Miller et al write. “Volatility means that [the] strategy is itself suspect, as there is little useful to learn at the target level except that things are apt to change.”
“Human experience, we believe, reflects nothing so much as the operation of predictions and uncertainty estimations along many dimensions and at many levels of processing. When all goes well, a wide range of predictions and estimations of their reliability (uncertainty) allow us to leverage everything we have been through, a whole life of experience and learning, to quickly detect those sensory patterns that matter to us, assess the reliability of our own expectations relative to the current sensory evidence, and (hence) to behave in ways that help bring about desired and beneficial patterns. But there are dangers here too. Our predictions about the world can be mistaken or misled in various ways. Our hidden biases can sculpt how we perceive and behave in the world in ways that result in the world conforming to our mistaken view. In effect, making our mistake into a reality, which only reinforces our belief in that bias,” Miller et al write.
“The many ways that we can fall prey to our own predictive brains correspond to the various ways in which we can become trapped by our own estimations of the reliability of different predictions.” But by noticing the restricted shape of our lives, we will eventually be able to break the grip of our own self-model. “This suggests that there are ways to hack our own predictive minds so as to escape at least some of the traps we have been examining. This is true in cases where people yearn for a more varied and engaging life.”
“The key to this kind of radical action lies in a process that, though seldom remarked upon, seems to us to lie at the very heart of much that is distinctively human about our relationship with the space of uncertainty. The two-step process starts by making our own predictive models and associated expectations visible, turning them into objects inspectable by ourselves and others. It then proceeds by devising tricks, schemes and ploys that can stress, challenge and sometimes productively break those models.
The first step comes ‘for free’ with symbolic culture. Spoken language, written text and a host of associated practices turn aspects of our own generative models into public (material) objects — words, books, diagrams — apt for sharing, refinement and multigenerational transmission. […]
Flexible symbolic codes, once in place, enable us to step back from our own generative models and model-based predictions, turning them into public objects apt for questioning, stress-testing and deliberate ‘breaking.’ [Once] in command of symbolic codes, the floodgates are open, and our own models and model-based expectations can become objects of scrutiny. This might be the single most transformative epistemic bonus conferred by material culture.
But there is more to come. For once our own best models are encountered as objects, we can do more than simply scrutinise them. We can take actions designed to break and rebuild the models themselves. […] Perhaps most notably of all, our own artistic, engineering and scientific practices often play just this kind of role. For example, diagrams, descriptions and scale models enable us independently to manipulate different aspects of a design, and to selectively attend to the different elements. This enables us to explore different outcomes as conditioned upon different choices in ways that ease the bonds of our own model-based expectations — much like physically shuffling a bunch of Scrabble tiles so as to help uncover new words. To enable such operations, my current beliefs and models need to exist as more than probabilistic trends in the way that I navigate the world on the basis of stored knowledge. They need to exist as concrete items apt for attention, sharing and questioning.
Art is often in this model-revealing/model-breaking business too. It can be a way of materialising and confronting our own high-level assumptions, about self, world and other, but doing so in a framework that steps back from daily concerns […] and hence is not usually experienced as genuinely threatening, even if it is subversive.”
“Whatever the story, human minds became able to go where no animal minds had gone before. We became able to encounter our own predictive models as objects,” Miller et al write.
One of the examples the authors give in their essay is that of Max Hawkins, a computer scientist and former Google engineer turned artist, who spent more than two years living according to a series of randomisation algorithms.
“Hawkins set out to break the grip of his own life-model. But it is notable that there is still a predictable regime in play, and one that he himself understands (indeed, one that he designed). For example, he knew that the algorithm would send him somewhere new every two months, and that it wouldn’t first send him suddenly to a new town or city every week, then (randomly) every day, then (randomly) not for 10 years.
It is interesting to speculate that it was this lack of volatility that enabled him to gain so much from his experiment, while avoiding the kinds of anxiety and fear that many of us recently felt as COVID-19 first began to turn the pattern of our lives upside down. Predictive brains expect control and, if it fails, they drive learning. Normally, the detection of high volatility in the environment should drive learning and exploration. Yet, under lockdown conditions, we were (rightly) told to stay put and do nothing.
This is very odd for us. One response was to take control of small worlds — baking, jigsaws, exercise. This is very similar to a response already seen in individuals with autism spectrum condition, which is to generate and inhabit a more controlled environment. And it is a good response, a way of restoring some sense of mastery in the face of wider volatility. Impressive bodies of work in ‘computational psychiatry’ are now devoted to better understanding our relationship with uncertainty, and the many ways it can go wrong. We humans are, it seems, uncertainty management systems — and when uncertainty management goes awry, whether due to external or internal perturbations, we can all too easily lose our grip on self, world and other.
Perhaps the most revealing comment in Hawkins’s many talks is one made towards the end of Leaning In to Entropy. He remarks on how rapidly the strangest and most ‘non-him’ situations and places became the ‘new normal,’ so much so that he could easily imagine life as that person in that once-alien place. This, we conjecture, is the predictive brain reasserting itself, reforming aspects of our own high-level self-model so as to get a grip on the new normal.
Hawkins’s takeaway message was simple: don’t let your own preferences become a trap. Yet on a kind of meta level, he remained trapped (in a good way) — his randomising algorithm simply fulfilling his new top-level preference for selecting in ways that sidestepped his first-order preference structure. We can’t help but intuit some kind of value in this experiment. Like art and science, it makes the invisible concrete, revealing the strong gravitational force of our own expectations.
It is also an object lesson in the surprising value of controlled uncertainty.”
In his latest book The Road Not Taken (Setanta Books, 2020), the New York-based photographer Arnaud Montagard captures an America emptied of its inhabitants. The captions by the photographer are taken from The loneliest road trip: travels through an empty America — in pictures, in which he talks us through some of his intriguing shots.
The disruption con
In The disruption con: why big tech’s favourite buzzword is nonsense, Adrian Daub explores how one magic word became a way of justifying Silicon Valley’s unconstrained power. The long read is an adaptation of his forthcoming book, What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley (FSG Originals x Logic, October 2020).
“When we speak of disruption, we are usually thinking about the perils of continuity; we express the sense that continuity works fine until it doesn’t. To some extent, this sense that things staying the same for too long is dangerous and makes us risk falling behind, is characteristic of modernity — not in the sense of a specific time period so much as the condition of being modern, living in a modern age,” Daub writes. “More specifically, though, disruption resonates with our experience of capitalism. Think of all the companies and products that you remember treating as seemingly permanent, inextricable fixtures of your everyday life, that nevertheless slid right out and disappeared with time.”
“[The] inevitability of socialism and the instability of capitalism are two ideas one rarely hears mentioned in connection with disruption today. If anything, disruption seems to lean in the direction of more capitalism — that is, of a more untrammelled expression of market forces. But it’s important that this theory was first developed in dialogue with Marx, [who] thought that the falling rate of profit doomed capitalism to exploit labour ever more harshly (thus setting the stage for revolution),” Daub writes.
Schumpeter countered with the idea of creative destruction. “In his view, capitalism’s creative destruction — its tendency to shake up and redefine its markets — is the thing that actually accounts for its continuity. Yesterday’s monopolist is suddenly one competitor among many and, often enough, goes under entirely. The cycle begins anew.
It would have been easy enough for Schumpeter to argue that in this way, creative destruction would ensure capitalism’s long-term viability. But interestingly enough, in Capitalism, Socialism and Democracy, his magnum opus, he argues just the opposite. The second part of the book is titled Can Capitalism Survive?, and Schumpeter comes down on the side of no. After all, the constant destruction, however generative it may be from a bird’s eye view, will ultimately call forth attempts to regulate capitalism. While creative destruction is viable economically, its experience is too disorienting politically to allow capitalism to survive long-term. In the end, Schumpeter believed, creative destruction makes capitalism unsustainable: gradually and peacefully (through elections and legislative action), capitalism will yield to some form of socialism.”
According to Daub, “there is an odd tension in the concept of disruption: it suggests a thorough disrespect towards whatever existed previously, but in truth it often seeks to simply rearrange whatever exists. Disruption is possessed of a deep fealty to whatever is already given. It seeks to make it more efficient, more exciting, more something, but it never ever wants to dispense altogether with what’s out there. This is why its gestures are always radical, but its effects never really upset the apple cart.”
Besides, the use of the term ‘creative destruction’ has shifted. “[It] now has an exculpatory, at times even celebratory, side. Schumpeter wasn’t altogether horrified by creative destruction, but he thought it was as much of a problem as it was a functional rule for how capitalism operates. […] While creative destruction was neutral on whether whatever was getting creatively destroyed deserved it, anything that is getting ‘disrupted’ had it coming.”
But there is a second, less obvious shift in usage, according to Daub. While “Schumpeter proposed creative destruction as a concept that applies to the business cycle, [today’s] rhetoric of disruption frequently applies to things other than companies. This is why people such as Peter Thiel are so intent on claiming that higher education, say, or healthcare as a whole, or government, are oligopolies or even monopolies. Schumpeter would have looked at Blockbuster’s gradual defeat by Netflix […] as a textbook case of creative destruction. But is the same true for your local travel agency, record shop and pharmacist? Is it true of the postal service or the regional bus company? Disruption is a concept that draws combatants into an arena they had no sense they were entering.
Disruption depends on regarding people as participating in the business cycle who insist that they’re doing no such thing. And it depends on extending the sense in which the terms ‘monopoly’ or ‘oligopoly’ can be applied. Did big taxi companies once dominate personal transportation, or did thousands of individual cabbies who were barely making ends meet? The term ‘disruption’ makes a monolith of structures and organisations that are old, have grown up organically and are therefore pretty scattered and decentralised. Think about the peculiar alchemy involved in talking about how Google disrupted the media landscape: suddenly the hundred-billion-dollar company is a scrappy underdog and a magazine with 40 employees is a Big Bad Monopolist.”
“[Let’s] face it, Silicon Valley technology in nearly all cases isn’t so transformative that it would simply replace the existing systems on its merits. Uber isn’t better than a good mass-transit system; Facebook isn’t better than actual friendship; YouTube videos aren’t better than quality entertainment; a neighborhood littered with Airbnbs isn’t better than a community-oriented one; a computerized learning plan isn’t better than a great teacher. They may be more efficient or easier to use or less expensive, but better? Not even close.” — Noam Cohen in Seeing Through Silicon Valley’s Shameless ‘Disruption’
“The final distortion the rhetoric of disruption introduces concerns where society and the state lend their support. For Schumpeter, creative destruction issued from challengers who were able to spontaneously expand or change the playing field. But what happens when disruption becomes itself institutionalised? Today’s plucky rebels are funded by billionaires, can go into massive debt if they need to, are supported by regulatory bodies they or their business school buddies have long ago captured and are cheered on in their attacks by people who have been wanting to get rid of unions and regulation all along. They are hardly what we’d usually think of as outsiders,” Daub writes. “For the ultimate upshot of the disruptor’s super-historical impulse is the expectation that, rather than your idea conforming to the world in some manner, the world ought to accommodate the sheer genius of your idea.”
In many fields, “tech hasn’t so much changed the rules as it has captured the norms by which the field is governed. And ultimately, ‘disruption’ probably refers to this disruption of our judgments and categories as well. But only the disruptor has this privilege. Anytime the disruptees suggest that they might like to have the world adjusted to ensure their survival, they’re told that this is a sign of their weakness and resistance to change.
This double standard applies to another Silicon Valley mantra as well: do you want to fail better, do you want to fail fast? Well, whether you get to, and how your failure gets interpreted, depends a great deal on who you are.”
Montaigne’s ‘verbal jousting’
Although Michel de Montaigne despised the religious extremism of his age, he relished conversing with friends and foes alike. He saw total agreement as the most painful characteristic of any conversation.
“In On the Art of Conversation [Book III, Chapter VIII of his Essais], Montaigne argues that talking is ‘the most productive and natural exercise of our mind.’ Above all, it is a practice that entails a willingness to embrace disagreement,” Rachel Ashcroft writes in For Montaigne, verbal jousting is the only way to reach truth.
“Montaigne welcomes divergence: ‘No proposition shocks me, no belief injures me, however different it is to my own opinion.’ In fact, he compares conversation to jousting; an excellent talker is a ‘stiff jouster’ who ‘presses at my sides, pricking me left and right.’ Verbal jousting is beneficial to both parties, since it encourages two minds to push each other into new planes of understanding.
One should engage in conversation without allowing prejudice to form the basis of the dialogue. ‘I am not suspicious by nature … I refuse to believe the most awful and perverse inclinations if I don’t witness them myself, in the same way I treat monsters and miracles.’ Montaigne takes people as they come, without assuming the worst of them before they have opened their mouth. He encourages people to embrace the intellectual challenge posed by an opposing view and listen to an argument first before judging an individual. Responding to the words being spoken takes precedence over the person speaking them.
After all, there is no guarantee that we will hold a particular opinion forever. According to Montaigne, not only our physical bodies but also our inner faculties — ‘nostre jugement’ — are in constant flux. During the Reformation, […] Montaigne discovered that even people’s religious convictions could evolve with time — a fact that only heightened his disgust at the absurdity of pitching extreme opinions and acting on them with such displays of bloodshed as the St Bartholomew’s Day massacre.”
“What can we learn from the art of conversation in the Essais? Arguably, today’s bloodshed is the fallout from online discourse. Twitter and similar platforms provide a vehicle for users to bully victims into losing their jobs or suffering mental health issues. Montaigne underlines the importance of the face in polarised communication; even the tensest of situations can be defused by seeing the passivity in someone’s expression. Compare this with the anonymity of online trolling. Attacking someone online removes the power that facial expressions have in disarming abuse and moderating conversation.
People now publicly declare themselves ‘unsafe’ around co-workers who hold legal opinions different to their own. Conversation can be prickly among friends and enemies. Montaigne believed that talking should be difficult if conversation was to reach its most productive goal. This can be a troublesome idea. After all, who wants to listen to someone who disagrees with you? Yet verbal ‘jousting’ is disappearing as people prefer to block out arguments — and people — that challenge them.
Montaigne demonstrates that it’s possible to encounter radically different views and walk away unscathed. [His life-saving experiences] illustrate wider points about the importance of honesty in expression and willingness to embrace dialogue, even with our ideological ‘enemies.’ We must allow others to think differently, since we ourselves will inevitably evolve over time, and let ‘truth … be the common cause which unites us.’ The Essais embrace truth as an elusive entity. But talking to one another inches us closer towards that unattainable goal.”
For a good introduction to Montaigne, listen to a Philosophy Bites interview with Sarah Bakewell, the author of How to Live: A Life of Montaigne in one question and twenty attempts at an answer (Vintage Books, 2011).
There’s also Stefan Zweig’s Montaigne (Pushkin Press, 2015). Written shortly before his suicide in 1942 — having left Austria in 1934 after Hitler had come to power in Germany, Zweig felt increasingly depressed about the situation in Europe and the future for humanity — Montaigne is a heartfelt argument for the importance of intellectual freedom, tolerance and humanism.
And also this…
Freedom, it seems, is one of the biggest casualties of the COVID-19 pandemic. But Jean-Paul Sartre makes the British philosopher and writer Julian Baggini question whether this is a straightforward tale of loss.
“Never were we freer than under the German occupation,” Sartre wrote in The Atlantic in 1944. “We had lost all our rights, and first of all our right to speak. They insulted us to our faces. … They deported us en masse. … And because of all this we were free.”
“Sartre’s core insight was that it is only when we are physically stopped from acting that we fully realise the true extent and nature of our freedom. If he is right, then the pandemic is an opportunity to relearn what it means to be free,” Baggini writes in In a pandemic we learn again what Sartre meant by being free.
“[It] enables us to see more clearly the difference between the hollow freedom to act without impediment and the true freedom to act in accordance with our all-things-considered judgments. The American philosopher Harry Frankfurt in 1971 illuminated the difference with his distinction between the things that we simply want and the ones that, after consideration, we want to want.” The latter kind of freedom “requires self-restraint. A person without this capacity is not truly free but is what Frankfurt calls a ‘wanton’: a slave to his desires.”
According to Baggini, we have an opportunity to reset the balance between negative and positive liberty. “There isn’t a trade-off between big government and personal freedom: many freedoms depend on the state for their very possibility. What the social scientists Neil and Barbara Gilbert in 1989 dubbed the ‘enabling state’ and the economist Mariana Mazzucato in 2013 called the ‘entrepreneurial state’ are essential for giving us the opportunity to realise the full potential of our freedom,” he writes. “One way in which we are waking up to our freedom is that our conception of what’s possible has been expanded. […] The Overton window has been flung wide open. More is possible than we imagined.
Freedom to act without a belief in the possibility to act is empty. Our eyes have been opened to more potential futures than we believed were available to us. The challenge is to respond to this opportunity without falling into naive utopianism or wishful thinking. Our realisation is not the simplistic belief that we have fewer constraints than we thought we had, but that the actual constraints we have are not the ones we believed them to be.
I am not equating the trials of living under Nazi occupation with living with the scourge of COVID-19. But despite the many and important differences, Sartre’s message of freedom in 1944 rings just as true today. Our primary experience is one of restriction, of loss of liberty. But, with thought and reflection, we can follow this with a renewed sense of what freedom really means, why it matters, and how we can use it to forge a better future. Perhaps we will soon look back and say, as Sartre did: ‘The circumstances, atrocious as they often were, finally made it possible for us to live, without pretence or false shame, the hectic and impossible existence that is known as the lot of man.’”
“While it might seem deflating to conclude that we are probably not the most important people at the most important time, it could be a good thing. If you believe the ‘time of perils’ view, then the next century will be especially dangerous to live through, potentially requiring significant sacrifices to ensure our species persists. And as [Luke Kemp, a research associate at the University of Cambridge] points out, history suggests that when fears are high that a future utopia is at stake, unpleasant things are sometimes justified in the name of protecting it.
‘States have a long history of imposing draconian measures to respond to perceived threats, and the greater the threat the more severe the emergency powers,’ he says. For example, some researchers who wish to prevent the rise of malevolent AI or catastrophic technologies have argued we may need ubiquitous global surveillance of every living person, at all times.
But if life at the hinge requires sacrifices, that does not mean that life at other times can be laissez-faire. It doesn’t absolve us of all responsibility to the future. This century we could still do remarkable damage, and it needn’t be as catastrophic as a species-ending event. Over the past century, we have found myriad new ways to leave malignant heirlooms for our descendants, from carbon in the atmosphere to plastic in the ocean to nuclear waste beneath the ground.
So, while we do not know if our time will be the most influential or not, we can say with more certainty that we have increasing power to shape the lives and well-being of billions of people living tomorrow — for better and for worse. It will be for future historians to judge how wisely we used that influence.”
“It might be a familiar progression, transpiring on many worlds — a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.” — Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space (pp. 305–306)
One of the arguments for defending billionaires is that top earnings reflect top marginal productivity. But according to Ingrid Robeyns, a professor of Ethics of Institutions at Utrecht University, the Netherlands, this theoretical claim lacks empirical support.
“In reality, top incomes do not always primarily reflect the productivity of the top manager in question, but are instead the result of a set of shared norms and values of top executives who consider it normal for them to earn such high incomes. The extreme earnings of top managers do not reflect the very high added value that their work brings to the organisation or the company. They instead reflect what they are capable of negotiating in the labour market for top earners, and that negotiation also reflects what this group of people tell each other their work is worth,” Robeyns writes in There’s no such thing as a self-made billionaire.
“Of course, there are reasons why some differences in pay can be justified. In part, some limited degree of wage inequalities can be justified on efficiency grounds: we simply want to have enough surgeons as a society, and it then helps to give people who are talented in this area the prospect of good pay. If a potential surgeon can earn as much as a yoga teacher or a bookseller, the question is whether we as a society will have enough surgeons.
In professions that are intrinsically or instrumentally valuable to society, we want people to receive financial recognition for great efforts — efforts to develop their own talents and skills, bearing greater responsibility or performing key functions in the service of organisations, companies or society. It is also a matter of paying respect to a person if we recognise the effort they make. Yet, these considerations can, at best, justify some (though limited) inequality in pay and not the excessive inequality that we have seen in recent years.
We may rightly tell ourselves that if we worked hard, we deserve a reward.
But no one can say that they deserve to be a billionaire.”
In Seeing Our Own Reflection in the Birth of the Self-Portrait, Jason Farago explores how what now seems self-evident — that pictures can represent who you ‘really’ are — began in 1500 with Albrecht Dürer.
“In the eyes of us poor moderns, it seems self-evident that a picture can capture who you are. That your posed image, your face and your clothing, express something essential about your personality. It’s the myth on which every selfie stands. But the premise that an image can be an authentic representation — that you are a unique individual at all — is not self-evident. It is a historical development. It had to be invented.
More than five centuries ago, Albrecht Dürer painted images so detailed and exact that they seemed some kind of divine creation. One subject fascinated him above all: himself.”
If you do one thing today, go to the article in The New York Times and see the detail in which Dürer painted himself, particularly his eyes. It’s astonishing.
While you’re there, don’t forget to read (or listen to) In Dark Times, I Sought Out the Turmoil of Caravaggio’s Paintings, in which Teju Cole shares how the work the artist made near the end of his life has changed his understanding of both beauty and suffering.
“Here was an artist who depicted fruit in its ripeness and at the moment it had begun to rot, an artist who painted flesh at its most delicately seductive and most grievously injured. When he showed suffering, he showed it so startlingly well because he was on both sides of it: He meted it out to others and received it in his own body. Caravaggio is long dead, as are his victims. What remains is the work, and I don’t have to love him to know that I need to know what he knows, the knowledge that hums, centuries later, on the surface of his paintings, knowledge of all the pain, loneliness, beauty, fear and awful vulnerability our bodies have in common.”
Designed by L. Oberlaender Arquitectos and completed in 2018, the Keeper’s House, or Casa del Cuidandero, in Sopó, in the mountains on the outskirts of Bogotá, Colombia, is a little gem.
The architects used few materials: white-painted pavers with rounded edges for the construction of the walls; a dark brown flagstone for the interior floors, whose hue helps to capture the heat of the sun during the day, guaranteeing a warm and comfortable interior temperature at night; and weathered slabs for the exterior floors. The effect is stunning in all its simplicity.
“What the left should do is what it always should do: it should recognise that real politics is constant activism, in one form or another. Every couple of years something comes along called an election, you should take off a few minutes to decide if it’s worth voting against somebody, rarely for somebody. In the course of, say, Corbyn in England, I would have voted for him but most of the time the question is ‘who do you vote against?’
This time the answer to that question is just overwhelmingly obvious: the Trump Republicans are just so utterly outrageous, way off the spectrum, that there’s simply no question about voting against them. So you take off a few minutes, go to the voting booth, push a lever, vote against Trump, which in a two-party system means you have to push the vote for the other candidate. But then the next thing you do is to challenge them, keep the pressure on to move them towards progressive programmes.” — Noam Chomsky in the New Statesman