Reading notes (2021, week 17) — On why we are drawn to art, our need to move, and the power of hope
Reading notes is a weekly curation of my tweets. It is, as Michel de Montaigne so beautifully wrote, “a posy of other men’s flowers, and nothing but the thread that binds them is mine own.”
In this week’s edition: Gazing at a painting feels like an almost magical encounter with another mind but what real effects does art have on us?; why walking outdoors has the potential to unlock our brains; hope nudges us to think about means as well as ends; the devil’s advocate’s advocate; why the fall of the Roman Empire wasn’t a tragedy for civilisation; the unbearable burden of invention; creativity is dead, long live curation; two renovated old Kyoto dwellings; and, finally, music critic Thomas Larson on writing about music (and 50 of the finest Mozart recordings, compiled by Gramophone).
Why we are drawn to art
For Ellen Winner, a psychologist with a special interest and expertise in the arts, “our fascination with art raises two long-standing and fundamental questions, ones that have engaged philosophers, psychologists and art lovers. First, why are we so drawn to works of art? For their beauty, of course, but that can’t be all, as the thought-experiments above show us. Second, what kinds of demonstrable beneficial effects, if any, can engagement in the arts have on us?”
“As for the first question — why do we care so much? — I argue that we’re drawn to works of art because they connect us quite directly to the imagined mind of the artist. We believe that artists mean something by what they produce, even if it’s sometimes difficult to discern just what meanings were intended. And thus, whenever we take something to be art, rather than accident or functional artefact, we automatically read into it intentionality and meaning.
When we look at a Rembrandt, we feel like we’re reading a message sent to us today by this long-ago genius. The brushstrokes are clues to how his arm was moving as he painted, and how his arm moved can be read as an expression of his state of mind as he created this image. His self-portraits suggest a certain kind of self-scrutiny. We feel that we can see Rembrandt’s awareness of how he’s coming across, and understand his penetrative self-analysis in the series of self-portraits made over time as he aged,” Winner writes in Changed by art.
This is also why people dislike forgeries.
“If artworks are in fact intimately connected with what we imagine to be the mind that actually made them, then reproductions and forgeries of paintings, no matter how high-quality, don’t have that same power over us.”
People believe, albeit irrationally, “that artists imbue their works with their essence at the moment of creation — even if these works are copies by the artist of the original. The psychologist Paul Bloom has written about this kind of irrational belief in relation to what he calls essentialism. Essentialism explains our preference for objects with particular histories because of a kind of magical thinking based on a causal story of contact. The belief that certain objects have inner essences explains our preference for objects with sentimental value: if I lose my wedding ring, I’m not fully satisfied by an exact replacement; if a child loses her worn teddy bear, she’s not mollified by a gift of a new one. In the case of an artwork, the belief in the artist’s essence is what allows us to feel we’re connecting to the mind of the artist. […]
But our reverence for originals isn’t universal. Treating the original as special and sacred is a Western attitude. In China and Japan, for example, it’s acceptable to create exact replicas, which are valued as much as the original — especially because an ancient original might degrade over time, but a new replica will show us how the work looked originally. […] Perhaps our culture teaches us to respond to artworks by inferring the mind behind the art.”
“Turning to the second question — what are the potential beneficial effects of art? — it feels intuitively plausible to say that art has a positive effect on our mental health and wellbeing, and that it makes us more compassionate and empathetic human beings. It turns out that there’s some evidence for the wellbeing claim”, but “the problem with the art-creates-empathy claim is that it means more than connecting with the mind of the artist. It also means behaving more compassionately (or endorsing more compassionate policies) as a result of connecting to others’ mental states,” Winner argues.
Though the art/empathy claim is made broadly about the arts, it’s most often made about the narrative arts — fiction, film, theatre. The philosopher Martha Nussbaum believes strongly that, because literature trains our ability to stand in others’ shoes, it makes us more empathetic.
“The belief that reading fiction makes us better, more empathetic human beings sounds right. We all know from personal experience that we often empathise very strongly with fictional characters. The question is whether art can change our attitudes and behaviours once we’ve closed the pages on the novel or left the theatre, making us behave more empathetically and compassionately towards others. When we leave the fictional world, do we feel we’ve paid our empathy dues?”
A recent (unpublished) study by Winner provides some support for the claim that narratives about suffering can increase rather than deplete empathy. But the advantages are short-lived. This isn’t surprising, she says. “It’s exceedingly difficult to change people’s attitudes, and it would be naive to think that one reading could effect a permanent change. […]
Empathy might well work to make us behave more compassionately. But it could also do the opposite. Paul Bloom cautions that knowing what someone else is feeling also makes some people better at knowing how to make them suffer. And, of course, we’re often reminded of the fact that many of the most heinous Nazis loved art, literature and music. And so, while engaging with art might increase our empathy and compassionate behaviour, there’s no guarantee that it will do so. We ought to resist wishful thinking about the arts unless we have the evidence needed to support rosy claims.
We’re drawn to art for many reasons, but one particularly powerful reason is to experience the feeling that we’re engaging with the mind of a great artist. Take away this possibility by telling us that the work was created by an assistant, a forger or a deep-learning machine, and we’re disappointed, even angered. As far as the effects of engaging with art, artmaking has positive effects on our mood, and viewing art allows us to engage with negative emotions in a way that’s protective and leads to the pleasurable experience of feeling moved. But whether or not the empathy we feel for the artist — say, for Rembrandt as an old man, or that which we feel for the characters in a narrative — actually changes our behaviour once we leave the world of the museum or the novel or film or play, those are as-yet unanswered questions.”
Our need to move
“All other animals look downward; Man,
Alone, erect, can raise his face toward Heaven.”
— Ovid, Metamorphoses (Book 1, 107–109)
Although Charles Darwin spent most of his days in his study, he did his best thinking outside, “on a lowercase d-shaped path on the edge of his property. Darwin called it the Sandwalk. Today, it is known as Darwin’s thinking path,” Jeremy DeSilva writes in On the Link Between Great Thinking and Obsessive Walking, an excerpt from his book First Steps: How Upright Walking Made Us Human.
In her two-volume biography of Darwin, Janet Browne wrote:
“As a businesslike man, he would pile up a mound of flints at the turn of the path and knock one away every time he passed to ensure he made a predetermined number of circuits without having to interrupt his train of thought. Five turns around the path amounted to half a mile or so. The Sandwalk was where he pondered. In this soothing routine, a sense of place became preeminent in Darwin’s science. It shaped his identity as a thinker.”
“Darwin circled the Sandwalk as he developed his theory of evolution by means of natural selection. He walked to ponder the mechanism of movement in climbing plants and to imagine what wonders pollinated the fantastically shaped and colorful orchids he described. He walked as he developed his theory of sexual selection and as he accumulated the evidence for human ancestry. His final walks were done with his wife Emma as he thought about earthworms and their role in gradually remodeling the soil.”
With the possible exception of Virginia Woolf, who loved a night-walk (“she wrote that the best time to walk at night is winter,” Lizzy Stewart writes in An Ode to Women Who Walk), walking has historically been the privilege of white men. William Wordsworth, Ralph Waldo Emerson, Henry David Thoreau, John Muir, Jonathan Swift, Beethoven, Immanuel Kant, Søren Kierkegaard and Friedrich Nietzsche were all obsessive walkers. “Nietzsche, who walked with his notebook every day between 11 am and 1 pm, said, ‘All truly great thoughts are conceived by walking.’ Charles Dickens preferred to take long walks through London at night. ‘The road was so lonely in the night, that I fell asleep to the monotonous sound of my own feet, doing their regular four miles an hour,’ Dickens wrote. ‘Mile after mile I walked, without the slightest sense of exertion, dozing heavily and dreaming constantly.’ More recently, walks became an important part of the creative process of Apple co-founder Steve Jobs,” DeSilva writes.
“Perhaps it’s a coincidence that so many great thinkers were obsessive walkers. There could be just as many brilliant thinkers who never walked. […] Surely the astoundingly brilliant Stephen Hawking did not walk after ALS paralyzed him. So walking is not essential to thinking, but it certainly helps.
Marilyn Oppezzo, a Stanford University psychologist, used to walk around campus with her Ph.D. advisor to discuss lab results and brainstorm new projects. One day they came up with an experiment to look at the effects of walking on creative thinking. Was there something to the age-old idea that walking and thinking are linked?
Oppezzo designed an elegant experiment. A group of Stanford students were asked to list as many creative uses for common objects as they could. A Frisbee, for example, can be used as a dog toy, but it can also be used as a hat, a plate, a bird bath, or a small shovel. The more novel uses a student listed, the higher the creativity score. Half the students sat for an hour before they were given their test. The others walked on a treadmill. The results were staggering. Creativity scores improved by 60 percent after a walk.
A few years earlier, Michelle Voss, a University of Iowa psychology professor, studied the effects of walking on brain connectivity. She recruited 65 couch-potato volunteers aged 55 to 80 and imaged their brains in an MRI machine. For the next year, half of her volunteers took 40-minute walks three times a week. The other participants kept spending their days watching Golden Girls reruns […] and only participated in stretching exercises as a control. After a year, Voss put everyone back in the MRI machine and imaged their brains again. Not much had happened to the control group, but the walkers had significantly improved connectivity in regions of the brain understood to play an important role in our ability to think creatively.”
Walking changes our brains. It impacts not only creativity, but also memory.
“In 2004, Jennifer Weuve of Boston University’s School of Public Health studied the relationship between walking and cognitive decline in 18,766 women aged 70 to 81. Her team asked them to name as many animals as they could in one minute. Those who walked regularly recalled more penguins, pandas, and pangolins than the women who were less mobile. Weuve then read a series of numbers and asked the women to repeat them in reverse order. Those who walked regularly performed the task much better than those who didn’t. Even walking as little as 90 minutes per week, Weuve found, reduced the rate at which cognition declined over time. Therefore, because cognitive decline is what occurs in the earliest stages of dementia, walking might ward off that neurodegenerative condition.
But correlation does not equal causation. […] Perhaps the arrow of causality was pointing in the wrong direction. Maybe mentally active people were simply more likely to go for a walk. Researchers had to dive deeper,” DeSilva concludes.
The power of hope
“Our task now is to work with hope.
And with hope, to rekindle the promise of big skies.”
The John Templeton Foundation published the following short essay on hope, entitled More than a Feeling: How Hope Galvanizes Us Into Action.
“Hope is the heartbeat of stories. The hero’s journey, great epics, and books that keep us reading well past bedtime all depend on hope. Without it, Harry Potter couldn’t face Voldemort. Jane Eyre couldn’t maintain her steely resolve for independence. Odysseus couldn’t endure a decade-long journey home.
While we recognize hope when we see it, it’s tricky to define. Most philosophers agree that hope requires belief and desire. To experience hope, you have to both desire something and believe that acquiring that thing is possible. (‘I want to change the world, and I think it’s possible to make an impact.’) But desire and belief alone don’t create hope. There’s a third ingredient, […] that helps explain hope’s power and potency — the kind of power that enabled Nelson Mandela to endure 27 years in prison, or Martin Luther King Jr. to dream of a more equitable world before it existed. So what is it?
Here’s where people disagree. Some philosophers argue that this third thing is trust in an external actor: a hopeful person must believe something outside of herself, like God or nature, is conspiring on her behalf. Others, like Michael Milona and Katie Stockdale of Templeton’s Hope & Optimism Initiative, say hope’s third ingredient is an emotional perception of reasons for action. In other words, hope has to include a perceived reason to move forward. A beginning runner who hopes to one day run a marathon must a) want to run it, b) believe she can run it, and c) see a reason to run it, whether that reason is physical health, mental endurance, or the 26.2 bumper sticker.
Belief, desire, and reason may seem like a feel-good recipe. But research and lived experience teach us that hope isn’t always pleasurable. This is because hope often occurs alongside deep fear and doubt.”
“It’s possible to hold onto hope even when we’ve lost confidence — for example, we might hope for racial justice even when we aren’t optimistic that reform is near. Even when we hope with confidence, our longing remains laced with doubt. The more deeply we hope for an uncertain outcome (‘I hope to one day be released from prison’), the more we fear that our hope won’t be realized (‘I fear I will never be free’). The negative sensations of fear counteract, or at least temper, hope’s pleasant feelings. As Milona writes, ‘Uncertainty is the province of hope and fear.’
In the end, the only negative emotion that hope can truly rule out is despair. When we despair, we see no reason to move forward. As long as we continue seeing reasons to go on, hope survives. After decades in prison, Nelson Mandela was freed and returned home to reform his country of South Africa. In his autobiography [Long Walk to Freedom] he wrote about this durability of hope amidst doubt: ‘There were many dark moments when my faith in humanity was sorely tested, but I would not and could not give myself up to despair. That way lay defeat and death.’
We see this pattern over and over in the stories we love. Obstacles arise, chances may appear slim, fear may threaten to overcome the protagonist — yet as long as hope persists, so does our hero.
Because hope involves desire, it nudges us to think about means as well as ends. When we hope for something, we are motivated to bring about the object of our hope. In the case of our would-be marathoner, the hope of crossing the finish line makes her more likely to put in the work of training. Without hope that she can complete the race, she’s less motivated to change her diet, give up alcohol, and wake up early on Saturdays to run. Rather than tempting us to sit back and relish an imaginary outcome, hope prompts us to take action toward that end.
But wait, you might be thinking, might hope make us foolish and complacent? Can’t you still work toward change without hope, as in the case of activists who claim to be motivated by anger or bitterness?
In her 2019 speech, climate activist Greta Thunberg told world leaders, ‘I don’t want your hope. I don’t want you to be hopeful. I want you to panic. I want you to feel the fear I feel every day. … I want you to act as if our house is on fire. Because it is.’
Thunberg was rejecting the kind of naïve optimism that soothes fear and paints a falsely rosy picture of the future. But her desire for people to panic isn’t necessarily incompatible with hope. Even people who take Thunberg’s advice and claim to work hopelessly toward reform don’t completely lack hope. They might not be optimistic that the world will adopt climate policies in time to save millions of species from extinction. But as long as they maintain the desire to make things better, the belief it is possible for them to do this, and a reason to do so, hope is at play.
That’s the good news about hope: even at its most abstract, such as a longing to live up to our convictions, hope has the power to galvanize us into action and bring about change.”
And also this…
“Thinking together is riddled with pitfalls, but we can’t really claim to live together without doing it. That is why we need devil’s advocates: they safeguard group-deliberation from the inside. The devil’s advocate defends faith and justice by being in the group but not of it: by keeping the group divided against itself, she holds a space for truth against the pressure of consensus,” Agnes Callard writes in The Devil’s Advocate’s Advocate.
“In principle, devil’s advocates allow us to combine the goal of figuring things out together with the goal of commitment to the truth – and they do this by functioning as a check on group consensus. In practice, devil’s advocates often fail to adhere to this subordinate ‘checking’ role – whether it be a spiteful referee for a journal, an online troll or an attention-seeking provocateur, devil’s advocates are prone to excess. Such people are not held back and restrained by the rules of a given office, and they criticize in a way that is counterproductive and excessive.
[…]
Whenever people must decide something together – in classrooms and city council offices, in board meetings and on the internet – we find freelance devil’s advocates doing a poor job of it. The asymmetrical structure of these informal contradictoria is fragile, quick to collapse into bad forms of agreement and bad forms of disagreement. Often these come together: oratorically speaking, the cheapest forms of unity are often those purchased by vilifying some subgroup into outsider status.
Indulgence in excess is so easy, self-restraint is so hard. If social media exaggerates these problems, that is also a kind of virtue: perhaps we have never been able to see, so clearly, what it looks like when we all try to get along. It’s not pretty. You can leave Facebook or Twitter, but you can’t really leave the ugliness behind. Technical skill is essential to informing our judgments, but it cannot be the ultimate ground of our getting along with one another. We cannot ask the authorities, experts or science to do all our thinking for us. Sometimes, we need to think as a group, and that means we cannot afford to cynically dismiss ‘devil’s advocacy’ as a term of opprobrium. It has to become an honorific.”
The fall of the Roman Empire wasn’t a tragedy for civilisation. It was a lucky break for humanity as a whole, Walter Scheidel argues in The road from Rome.
“In post-Roman Europe […] the spaces for transformative economic, political, technological and scientific development that had been opened up by the demise of centralised control and the unbundling of political, military, ideological and economic power never closed again. As states consolidated, intracontinental pluralism was guaranteed. When they centralised, they did so by building on the medieval legacies of formalised negotiation and partition of powers. Would-be emperors from Charlemagne to Charles V and Napoleon failed, as did the Inquisition, the Counter-Reformation, censorship, and, at long last, autocracy. That wasn’t for want of trying, of attempts to get Europe back on track, so to speak, to the safety of the status quo and universal rule. But the imperial template, once fashioned by ancient Romans, had been too thoroughly shattered to make this possible.
This story embraces a grimly Darwinian perspective of progress — that disunion, competition and conflict were the principal selection pressures that shaped the evolution of states, societies and frames of mind; that it was endless war, racist colonialism, crony capitalism and raw intellectual ambition that fostered modern development, rather than peace and harmony. Yet that’s precisely what the historical record shows. Progress was born in the crucible of competitive fragmentation. The price was high. Bled dry by war and ripped off by protectionist policies, it took a long time even for Europeans to reap tangible benefits.
When they finally did, unprecedented inequalities of power, wealth and wellbeing began to divide the world. Racism made Western preeminence seem natural, with toxic consequences to the present day. Fossil fuel industries polluted earth and sky, and industrialised warfare wrecked and killed on a previously unimaginable scale.
At the same time, the benefits of modernity were disseminated around the world, painfully unevenly yet inexorably. Since the late 18th century, global life expectancy at birth has more than doubled, and average per-capita output has risen 15-fold. Poverty and illiteracy are in retreat. Political rights have spread, and our knowledge of nature has grown almost beyond measure. Slowly but surely, the whole world changed.”
“None of this was bound to happen. Even Europe’s rich diversity need not have produced the winning ticket. By the same token, transformative breakthroughs were even less likely to occur elsewhere. There’s no real sign that analogous developments had begun in other parts of the world before European colonialism disrupted local trends. This raises a dramatic counterfactual. Had the Roman Empire persisted, or had it been succeeded by a similarly overbearing power, we would in all likelihood still be ploughing our fields, mostly living in poverty and often dying young. Our world would be more predictable, more static. We would be spared some of the travails that beset us, from systemic racism and anthropogenic climate change to the threat of thermonuclear war. Then again, we would be stuck with ancient scourges — ignorance, sickness and want, divine kings and chattel slavery. Instead of Covid-19, we would be battling smallpox and plague without modern medicine.
Long before our species existed, we caught a lucky break. If an asteroid hadn’t knocked out the dinosaurs 66 million years ago, our tiny rodent-like ancestors would have had a hard time evolving into Homo sapiens. But even once we had gotten that far, our big brains weren’t quite enough to break out of our ancestral way of life: growing, herding and hunting food amid endemic poverty, illiteracy, incurable disease and premature death. It took a second lucky break to escape from all that, a booster shot that arrived a little more than 1,500 years ago: the fall of ancient Rome. Just as the world’s erstwhile apex predators had to bow out to clear the way for us, so the mightiest empire Europe had ever seen had to crash to open up a path to prosperity.”
“Invention, always a part of architecture, was usually restricted to a few gifted individuals — the rest followed,” Witold Rybczynski writes in The Unbearable Burden of Invention.
“Yet imitation not only allowed lesser talents to learn from the masters, and in the process raised the level of workaday buildings, it also permitted great architects such as Michelangelo and Schinkel to build on the achievements of their predecessors.
The architectural Modern Movement of the early twentieth century put a stop to this practice. The credo of the movement was that the modern age required its own distinctive architecture. As J.J.P. Oud (1890–1963), a prominent Dutch modernist, put it, ‘All in all it follows that an architecture rationally based on the circumstances of life today would be in every sense opposed to the sort of architecture that has existed up till now.’ In the 1920s, opposing the past meant flat roofs without eaves or cornices, horizontal strip windows without frames, buildings raised up on stilts instead of sitting on the ground, and white walls bereft of decoration. Henceforth, history was canceled — no more looking back, no more learning from earlier trial and error.
Repudiating tradition opened a Pandora’s box. For a brief period, the stark International Style reigned supreme, but the creativity of architects — as well as the demands of clients — was irrepressible. Having banished the historical canon, all that architects had was their own invention. Le Corbusier was one of the first to exploit this new freedom, designing Notre-Dame du Haut, a pilgrimage chapel in Ronchamp, France, that resembled a nun’s coif. It was followed, at what is now John F. Kennedy International Airport in New York, by Eero Saarinen’s TWA terminal, which looked like a bird in flight, and the billowing forms of Jørn Utzon’s Sydney Opera House, which reminded some of yacht sails and others of the overlapping plates of an armadillo’s shell. To be called a ‘form-giver’ became the highest praise an architect could receive.
One of early modernism’s dictums, first voiced by the Chicago architect Louis Sullivan, was ‘Form follows function.’ But the forms of Le Corbusier’s chapel, Saarinen’s terminal, and Utzon’s opera house had nothing to do with what went on inside; in fact, the unorthodox shell shapes of Utzon’s building severely constrained the design of the performance halls within,” Rybczynski argues.
“A corollary of giving priority to invention is that imitation, once the foundation of creativity in architecture, is banished, and copying is considered the mark of a lack of imagination, or worse, plagiarism. This is evident in the notorious case of the Kimbell Art Museum, in Fort Worth, Texas. The museum, designed by Louis Kahn in the 1960s, is celebrated both for its architecture and its success as a setting for art. In 1989, Romaldo Giurgola, a friend and colleague of the now-deceased Kahn, was commissioned to expand the museum, and his modest proposal replicated the original building’s modular plan and sky-lit vaults, in much the same way that buildings in the past were extended and added on to. Giurgola’s proposal caused an outcry among architects and critics, and he was accused of ‘vulgar mimicry.’ The chastened museum shelved the plan, and twenty-five years later, when Renzo Piano designed an addition, he made sure that it was entirely separate — and different.
There is a final downside to invention. Buildings that look weird are one thing, but buildings that act weird are quite another. Invention, as [Aaron Betsky] admits, ‘sometimes stretches the technology of building to the point that it creates problems.’ That is what happened in 1978 when I.M. Pei built the East Building, a modernist addition to [John Russell Pope’s] National Gallery. Although Pei matched the Tennessee marble skin of the older building, he did not match the way the addition was built. […] Less than thirty years after the East Building was completed, the slabs started to bulge and crack. In 2011, in a process that lasted three years, the entire marble skin had to be taken down and rehung.
Another celebrated failure is Lever House on Park Avenue in New York, designed by Skidmore, Owings & Merrill in 1952 and hailed as one of the first high-rise office buildings in the city to have a so-called curtain wall. Previous skyscrapers […] had steel frames surrounded by thick masonry walls made of stone or brick, well-known materials that architects had been using for centuries. The curtain wall replaced the heavy masonry with a lightweight grid of steel and glass that hung — like a curtain — from the structural frame. After only thirty years, the blue-green glass skin of Lever House showed signs of deterioration, with many of the glass spandrel panels requiring replacement due to cracks. Sixteen years later, the entire curtain wall was removed and rebuilt from scratch.”
“Failures such as the East Building and Lever House stand out for their visibility — and the resulting expense of the repairs — but it is not unusual for modernist buildings to require extensive renovation after a relatively short time. A participant in a 2013 Getty Center colloquium on conserving modern architecture casually observed that conventional buildings traditionally lasted about 120 years before major repairs were required, whereas for modernist buildings it is only half that time — sixty years. Only sixty years! […]
According to the first-century Roman architect Vitruvius Pollio, the three essential qualities of good architecture are firmitas, utilitas, and venustas: firmness, utility, and beauty. He did not include experimentum. For a long time, firmness, that is, durability, could be taken for granted. A building might be clad in marble, brick, or stucco, but with regular maintenance — cleaning, repointing, plastering, and painting — it could be expected to last. ‘Experimental architecture’ changed that. Reinforced concrete, for example, seemed almost magical; not only was it inexpensive, it allowed dramatic cantilevers, shell-thin vaults, skinny columns. Reinforced concrete proved to be useful for the structure of a building — columns and floors — but because it was porous and weathered poorly, it was a poor substitute for stone or brick as an external cladding. It took several decades to discover that steel and concrete are precarious partners — concrete cracks, steel rusts, and spalling follows. By then, Brutalism had come and gone, leaving a trail of rusting, discolored, and flaking buildings in its wake.
Evidently, experimentation and invention can be dangerous, practically as well as aesthetically. Traditionally, learning from the past ensured continuity, consistency, and material solidity. Looking back meant learning from inventive predecessors, the way Michelangelo learned from Bramante, and Christopher Wren learned from Michelangelo. ‘Architects have always looked back in order to move forward,’ observed the British master James Stirling. But modernism has removed the rearview mirror. Now architects look only in one direction — ahead. Looking ahead, not learning from the past, inventing and not copying, means that architects are in the position of constantly starting from scratch. This can be exciting — when it works. But creative genius is rare, and the inevitable result is a small number of remarkable works and a very large number of failed attempts, not to mention many weird buildings. Cue the Bathtub.”
“Under pressure for newness, brands struggle to actually create new stuff, but by curating the old, they give it renewed meaning and purpose in consumers’ eyes,” Ana Andjelic writes in Creativity is dead, long live curation.
“Viewed through the lens of creativity, curation becomes a perfect connective tissue in our niche and micro culture. ‘The curator is a junction-maker, a catalyst, a sparring partner, somebody who builds bridges,’ says [Hans Ulrich Obrist, the artistic director at the Serpentine Galleries]. The Internet created micro-collectives of menswear, streetwear, luxury watches, artisanal coffee, and Japanese denim aficionados who share ethos, style, reference points, even a vocabulary. It is easy to link up with others who share our taste, interests and hobbies and ignore the rest. Curators bridge the gap between different taste communities and introduce them to one another. They also often connect people, products and ideas in a way that creates something that’s simultaneously new and familiar (think LVMH x Supreme, IKEA x Off-White, or Nike x Dior).”
“Consumers today are omnivorous in their cultural interests, and they treat everything as an opportunity to flex their aesthetic muscles. And so, a good starting point for any brand to think like a curator is to create your own aesthetic world instead of simply relying on your product’s aesthetic or your visual handwriting. A brand’s aesthetic world is infinite: it extends to a pair of sneakers, a piece of furniture, a playlist, and even collaborations. Telfar expresses its aesthetic world through the brand’s multimedia performances, experiential retail, Bushwick Birkin myth, artist and brand collaborations and its diverse community. Telfar’s clothes are simply one expression of Telfar’s taste and point of view.
Telfar’s example also shows how curation gives mundane objects, like White Castle’s uniforms, value by connecting them with a point of view and a subculture that makes them stand out in the vortex of speed, superficiality, and newness. Beyond making products more valuable, curation as a brand strategy extends to the entire value chain. Menswear and furniture retail platform Bombinate curates the suppliers and producers the brand works with according to its values of quality, craftsmanship, localization and sustainability. By doing that, the retail platform not only protects and preserves the work of human hands by connecting craft brands to consumers seeking high-quality lifestyle products, but also narrows down and directs consumer choice.
To curate is also to pick the right retail, distribution or collaboration partners. In 2018, [the Japanese fashion designer, who Aaron Betsky called an ‘architect of clothes’] Rei Kawakubo told Dezeen, ‘I want to create a kind of market where various creators from various fields gather together and encounter each other in an ongoing atmosphere of beautiful chaos.’ Places like Dover Street Market or Kapital don’t prioritize mass and superficial reach; instead they curate their own kind of niche customer with their elevated and informed selection of choices.
Going forward, it will be very hard to imagine a retail establishment that does not give off the impression of a gallery. Bankrupt department stores and downsized mass brands are a cautionary tale of what happens in the absence of any curation. The good news is that, unlike art galleries of old, modern culture welcomes curators of all stripes and presents endless curatorial opportunities for brands.”
Bonbonma Architecture has renovated two adjacent houses that have coexisted on the same plot in Kyoto, Japan, for more than seven decades. The transformation turns the two pre-existing structures into three, to house both private (residential) uses and semi-public (gallery/office) ones, sharing a verdant enclosed courtyard garden in tsubo-niwa style.
The pre-existing dwellings may have shared the same land for years, but their styles have always been different. To preserve the original 1920 architecture as much as possible, the architects have used traditional building techniques and materials to form the three new structures. The project was conceived using two main methods: ‘tone architecture’ and ‘building biology’. In tone architecture, a design method devised by Bonbonma, architecture and space are born from sound. In the case of remodeling, the sounds are ‘extracted’ from the original architecture. According to the architects, “each space vibrates at a frequency equivalent to one or a couple of musical notes. Each note in each space and as a whole give birth to the new architecture. The new spaces evoke new notes and in the end, the project becomes a latent musical composition.”
Source: Designboom
“When writing about music, the affinities between these two arts are as consequential as are the ambiguities. Indeed, it’s in the ambiguities where the critic’s assessment of a performance must listen to the inner twins that hear and write simultaneously. The musician hears more than he can possibly say in words, though there’s nothing but words to convey the sensation. It’s as if the absence of the words one needs to describe what music is saying speaks as loudly as the words speak. Ultimately, it’s trust in a divided yet conciliatory self who is trying to employ, honestly and well, the musical uncertainties of his judgments, more commingled than conclusive, even with a midnight deadline at his heels.
It’s close to what Leonard Bernstein meant when he said, ‘Why do so many of us try to explain the beauty of music, thus depriving it of its mystery?’ And yet one of the beauties of music is that we can love it and argue with it and be baffled by it with words — while adding to it the mystery of language. After all, the histrionic Bernstein lectured, wrote books, sat for interviews, held concerts for young people — all while explaining the beauty of music.
Figuring out the thing words do for music is not a musical problem. It’s an aesthetic one, which adds more ambiguity, not less. Good ears and smart criticism trail the music like a bloodhound, howling out the moving location of its prey, but more often, baying about its own doubt and inconsistency.” — Thomas Larson, from Do We Deprive Music Of Its Mystery By Writing About It?