Reading notes (2021, week 14) — On learning from Asian philosophies of rebirth, uncanny valets, and why computers won’t make themselves smarter

Mark Storm
25 min read · Apr 5, 2021


Georges Batzios Architects restores a rare example of Greek brutalist architecture, designed in the 1970s by Alexandros Tombazis — According to Batzios, the overhaul was “a moral challenge.” (Photograph by Giorgos Sfakianakis)

Reading notes is a weekly curation of my tweets. It is, as Michel de Montaigne so beautifully wrote, “a posy of other men’s flowers and nothing but the thread that binds them is mine own.”

In this week’s edition: Asian literature can show us how to live well in troubled times; ideas about machine intelligence in both East and West still reflect some key cultural divides; we fear and yearn for ‘the singularity’ but it will probably never come; the awe before there are words; Prometheus’ toolbox; when your authenticity is an act, something’s gone wrong; the opposite of placelessness is ‘place’; and, finally, Hilary Hahn and speaking to humanity.

Learning from Asian philosophies of rebirth

“Europe’s narrative trope of the ‘hero’ is often that of a well-armed monster-hunter whose main purpose is to conquer brief eruptions of chaos. Theseus and Heracles, Arthur’s knights and Marvel’s superheroes kill monsters, fight invaders, and then their quest is done. Crisis is dealt with by renewing the system as quickly as possible,” Jessica Frazier writes in Learning from Asian philosophies of rebirth.

But Asian literature tells us different stories about “technologically-adept Chinese emperors, Animist Spirit-negotiators, and Yogic deep-dives into destruction itself. Instead of simply asking how we can return to ‘normal,’ these philosophies ask: when crisis threatens, how can we survive, transform our societies, and improve them?”

The Shūjīng, one of the Five Classics of ancient Chinese literature, tells of the devastating Gun-Yu flood that, according to myth, lasted two generations. At the beginning of the great flood, the Emperor Yao, appointer of the seasons, laments:

“See! The Floods assail the heavens! … destructive in their overflow are the waters of the inundation. In their vast extent they embrace the hills and overtop the great heights, threatening the heavens with their floods, so that the lower people groan and murmur. Is there a capable man to whom I can assign the correction of this calamity?”

Instead of being a European-style superhero, this ‘capable man’ turned out to be a humble engineer called Gun Yu.

“He became famous in myth as the man who ‘tamed the waters,’ not through some feat of supernatural power but as a work of co-operative technological planning. Over thirteen years travelling the country, Gun Yu introduced an innovative project of digging.

Hydrological technology today remains one of the most important features of Chinese landscape; it ensures that the whole vast landmass is habitable as a single safe region capable of civilisation. Thus Yu the Great, the wise filial inheritor of a civil project to which he devoted his life, became the model for heroes who conquer crisis through the (unglamorous but realistic) powers of intelligence and hard work. Confucius said of him, ‘I can find no flaw in the character of Yu … He lived in a low, mean house, but expended all his strength on the ditches and water channels. I can find nothing like a flaw in Yu.’

Yu is the epitome of governance in the classical Chinese worldview, where the good leader is portrayed as a variant of the Taoist sage. Like the character of Cook Ding in the Zhuangzi who could butcher an ox as smoothly as if it was butter, because he was so utterly aware of every aspect of the organism, the good leader learns the system through dedicated study and makes exactly the right, prudent adjustments. In response, the people are supposed to follow his orders, as the Tao Te Ching (49) makes clear: ‘The people all keep their eyes and ears directed to him, and he deals with them all as his children.’

But the people’s obedience was not intended to be carried out in blind trust. Corrupt leadership was heavily censured in the Taoist literature and humble conduct exalted: ‘The sage should work without claim to ownership or reward, putting his own person last, personal and private goals … When the work is done, and one’s name is becoming distinguished, to withdraw into obscurity is the way of Heaven.’ Thus China’s history of crisis points to a fragile bond of trust that plays under the surface between the people and their government. Their science-emperor leads his people in resolving crisis with deft efficiency, and in doing so gives the people new tools gained from solving the problem, so Yu’s heroic taming of the flood directly contributed to irrigation and a new phase of agricultural civilisation. The leader, who acts without ulterior motives, must also win the confidence of his people and in doing so he helps society fulfil its very essence as a collective agency.”

“Drainage from the world’s highest mountain range, the Himalayas, into the world’s largest ocean, the Pacific, leads to vast amounts of water moving across a huge area. Northern China and Mongolia’s deserts nearby mean that the rivers are regularly silted up by fine sediments. The Yangtze, the Yellow River, the Huai and others are a permanent threat of disaster in China’s history. One scholar estimated that more than 1,500 floods of the Yellow River have taken place since the middle of the first millennium BCE,” Jessica Frazier writes in Learning from Asian philosophies of rebirth. (Painting: The Yellow River Breaches its Course, from a series of paintings of water, c. 1160–1225, Song dynasty, by Ma Yuan; ink on silk, 26.8 x 41.6 cm. Collection of the Palace Museum, Beijing)

But what if things are beyond repair?

“Far from being a form of fitness,” Frazier writes, “[classical Yogic philosophy] is more a practice of strategic death. Yoga, says the second sutra, is citta vrtti nirodha: ‘the stilling of the movements of consciousness.’ It depicts the mind as a moving flow-tank filled with currents of desire and fear. This movement has its own momentum, which Arthur Schopenhauer interpreted as the Will, and Sigmund Freud the Id, and it takes control of the whole mind. Meanwhile it obscures its own workings so that we are alienated from the truth of our situation and consciousness becomes blinded with its own fears and objects of desire. Put into collective, modern terms, this means that society sees only its fears (of redundancy, migrants, or infections, perhaps) and its desires (for the next beer, holiday, Amazon package or Netflix series). Rarely are we able to interrupt the patterns of life enough to look calmly below at the causes of those emotions, nor above to gain an overview and map out a plan. Instead the momentum pushes us on into the systemic dysfunctions that cause the crisis, so that we never pause to take stock and change the situation. The only solution, the text implies, is to strategically ‘lockdown’ ourselves, undergoing the death of desires and fears — and even of parts of our existing identity — in order to steer in a radically new direction.

While yogic meditation was designed for individuals, it is rare that a whole society collectively, and completely, pauses. Self-immolation, hunger-striking, and sitting, kneeling or lying in public places are all localised forms of social shutdown that invite reflection and reform through their silence. 2020 was history’s first real example of shutdown on a global scale. The unique conditions of the pandemic necessitated an almost ‘yogic’ stilling of society, an isolation of its citizens, and a staunching of their habits of consumption,” Frazier writes.

These, and the other stories Frazier explores in her essay, “offer instruction in the lessons of crisis: they are lessons about knowledge-based leadership and the establishment of new lasting structures that will avoid a cycle of recurrence. They teach us about developing greater communal sensitivity to the environmental conditions that generate natural crisis. And they warn us that we may have to destroy some of the old ways to let something genuinely new and better be born. No person or society should have to feel at the mercy of the same old monsters, time and time again. Crisis can be the stone on which civilisation moves upward, and then out of their reach.”

Uncanny valets

“Mistrust of machine intelligence abounds in Anglo-American popular culture, from the world’s most famous genocidal cyborg, the Terminator, to the fearful anticipation of the ‘singularity,’ the point where machines become smarter than humans. Not so much in the East, however — China, Japan and South Korea are far more welcoming, with 65% of Chinese people saying they expect robots and AI to enhance future job opportunities,” Amanda Rees, a historian of science at the University of York, writes in Uncanny Valets.

“[I]deas about machine intelligence in both East and West still reflect some key — and sometimes problematic — cross-cultural divides, which in turn rely on a number of assumptions about the nature of labor and the value of the individual. Perhaps most importantly, they are also profoundly influenced by the different ways in which ‘humanity’ and the idea of ‘the human’ are defined and understood.

Most broadly, cultures deriving from monotheistic traditions suffer far more from dualistic thinking than do other societies, which has critical implications for the assessment of machine intelligence. Both the mind/body binary and the strict policing of the human/animal division, for example, help create a situation where humans are seen as uniquely dominant over all creation, an ascendancy based on characteristics (a soul, intelligence, free will, special creation, language) that only humans possess. Machines, themselves specially created, seem to show intelligent autonomy and can very easily be seen as a threat to that unique human status.

Cultures deriving from other traditions — Shintoism, Daoism, Buddhism — don’t treat the world in that binary way. If an animal or a place can be endowed with spirit or soul, then why can’t a machine? Instead of threatening to out-compete the human occupation of a particular eco-cultural niche, machine intelligence would just add another aspect to the complex, constantly interacting world revealed by these more holistic approaches,” Rees argues.

In both West and East, scientists and artists have explored the capacity of robots to become part of a relationship. The play I, Worker, for example, written and directed by Oriza Hirata, features two robots co-developed by the Intelligent Robotics Laboratory at Osaka University, one of the world’s leading authorities on robotics research. Set in a near future in which robots and humans naturally co-exist, the play asks what work means to us as humans by portraying a robot that cannot work — although, by definition, robots were made to work. (Photograph: Scene from ‘I, Worker,’ by Oriza Hirata, performed by the Seinendan Theater Company, Tokyo)

“Two of the key characteristics used to differentiate between humans and robots in Western science fiction […] are creativity and emotion. The robot Andrew Martin, in Isaac Asimov’s The Bicentennial Man, first manages to assert his humanity — or at least, his claim to be more than just a robot — by demonstrating his capacity for both artistic creativity and compassion. This reflects (again) another key stereotype in Anglo-American culture [the first is the Western idea that, in the East, bureaucratic control, whether technocratic or totalitarian, radically restricts individual freedom through enforced central planning]: despite strenuous efforts to demonstrate the abiding relationships between them, science, creativity and art are seen as separate domains. Emotional response, in particular, is generally dismissed as irrelevant to scientific engagement.”

But in other cultures the (presumed) separation between science and society on which this ethical detachment is based is not so profound. “Partly, this arises from the fact that industrial technoscience, despite deep historical roots in the Islamic world and China, arrived as a 19th-century imperial import: Precisely because they were initially alien, there was a profound need, and desire, for these skills and practices to be indigenously assimilated (even, sometimes, via science-fiction stories),” Rees writes. This shows, she says, that cross-cultural attitudes are actually much more nuanced and ambivalent than easy distinctions between West and East might suggest.

But why should machine intelligence come in human form? We already share our lives with non-human intelligence in the form of domestic animals. Would it matter if they were robotic, rather than biological, in origin?

“What if animal intelligence was used as the model for a form of AI that enabled us to expedite our navigation of the uncanny valley [1]? During the pandemic, for example, crucial emotional support for the elderly has been provided not by humanoid robots, but by robotic animals. A therapeutic robot called Paro, a robotic baby seal, is used to reduce stress in care homes and facilitate relationships between residents and caregivers. Companion animals play an immensely important role in many people’s lives, and their impact can be measured in the grief expressed at their loss. […] The Buddhist funerals for robot pets, ‘killed’ when their software was no longer supported by the relevant company, would suggest not.

Or perhaps what we need instead is to acknowledge that a significant part of the Western perception of AI and robots as unsettling and uncanny is linked to a deep-seated association between automata and the East, not to mention the mutilating legacy of slavery in a culture ostensibly committed to individual freedom and dignity.

From the ethical problem of Asimov’s robots kept in eternal serfdom through the imposition of the ‘three laws’ [2] to the question of how technological innovation untrammeled by moral judgment might change the human future, robots reflect the politics of racial division. In the context of the current debates between China and the U.S. about the wider weaponization of AI, and the potential for a Sino-American arms race, that’s a difficult issue that (white) Westerners urgently need to consider.”

Notes

[1] ‘Uncanny valley’ is a hypothesised relationship between the degree of an object’s resemblance to a human being and the emotional response to such an object. The concept suggests that humanoid objects which imperfectly resemble actual human beings provoke uncanny or strangely familiar feelings of eeriness and revulsion in observers. ‘Valley’ denotes a dip in the human observer’s affinity for the replica, a relation that otherwise increases with the replica’s human likeness. The concept was identified by the robotics professor Masahiro Mori as bukimi no tani genshō in 1970.
[2] Asimov’s three laws of robotics: a robot must not injure a human being; must obey human orders unless they conflict with the first law; and must protect its own existence as long as doing so doesn’t conflict with the first two laws.

Why computers won’t make themselves smarter

It was Saint Anselm of Canterbury who, in the 11th century, proposed an argument for the existence of God, the American science-fiction author Ted Chiang writes in Why Computers Won’t Make Themselves Smarter.

In his Proslogion, Anselm claimed to derive the existence of God from the concept of a being than which no greater can be conceived. He reasoned that, if such a being fails to exist, then a greater being — namely, a being than which no greater can be conceived, and which exists — can be conceived. But this would be absurd: nothing can be greater than a being than which no greater can be conceived. So a being than which no greater can be conceived — i.e., God — exists. This has become known as the ontological argument.

But according to Chiang, “God isn’t the only being that people have tried to argue into existence. ‘Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever,’ the mathematician Irving John Good wrote [in Speculations Concerning the First Ultraintelligent Machine (Advances in Computers, Volume 6, 1965)]:

‘Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.’

The idea of an intelligence explosion was revived in 1993, by the author and computer scientist Vernor Vinge, who called it ‘the singularity,’ and the idea has since achieved some popularity among technologists and philosophers. Books such as Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence, and Stuart Russell’s Human Compatible: Artificial Intelligence and the Problem of Control all describe scenarios of ‘recursive self-improvement,’ in which an artificial-intelligence program designs an improved version of itself repeatedly,” Chiang writes.

But although Good’s and Anselm’s arguments seem superficially reasonable — which is why they are often accepted at face value — Chiang believes they deserve closer examination. The more we scrutinize the implicit assumptions of Good’s argument, the less plausible the idea of an intelligence explosion becomes, he writes.

“How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks,” Ted Chiang writes in Why Computers Won’t Make Themselves Smarter. (Illustration by Woshibai for The New Yorker)

“Some proponents of an intelligence explosion argue that it’s possible to increase a system’s intelligence without fully understanding how the system works. They imply that intelligent systems, such as the human brain or an A.I. program, have one or more hidden ‘intelligence knobs,’ and that we only need to be smart enough to find the knobs. I’m not sure that we currently have many good candidates for these knobs, so it’s hard to evaluate the reasonableness of this idea. Perhaps the most commonly suggested way to ‘turn up’ artificial intelligence is to increase the speed of the hardware on which a program runs. Some have said that, once we create software that is as intelligent as a human being, running the software on a faster computer will effectively create superhuman intelligence. Would this lead to an intelligence explosion?

Let’s imagine that we have an A.I. program that is just as intelligent and capable as the average human computer programmer. Now suppose that we increase its computer’s speed a hundred times and let the program run for a year. That’d be the equivalent of locking an average human being in a room for a hundred years, with nothing to do except work on an assigned programming task. […]

So now we’ve got a human-equivalent A.I. that is spending a hundred person-years on a single task. What kind of results can we expect it to achieve? Suppose this A.I. could write and debug a thousand lines of code per day, which is a prodigious level of productivity. At that rate, a century would be almost enough time for it to single-handedly write Windows XP, which supposedly consisted of forty-five million lines of code. That’s an impressive accomplishment, but a far cry from its being able to write an A.I. more intelligent than itself. Creating a smarter A.I. requires more than the ability to write good code; it would require a major breakthrough in A.I. research, and that’s not something an average computer programmer is guaranteed to achieve, no matter how much time you give them,” Chiang writes.
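Chiang’s numbers are easy to check. Here is a minimal back-of-the-envelope sketch of the arithmetic in his thought experiment (the figures come from the quoted passage; the 365-day working year is my own assumption):

```python
# Back-of-the-envelope check of Chiang's thought experiment.
# All figures come from the quoted passage; the 365-day working
# year is an assumption made here for the sake of the estimate.

LINES_PER_DAY = 1_000          # "a thousand lines of code per day"
WINDOWS_XP_LINES = 45_000_000  # "forty-five million lines of code"
SPEEDUP = 100                  # the computer runs a hundred times faster

# One real year at a 100x speed-up buys a century of subjective work.
subjective_years = SPEEDUP * 1
print(subjective_years)        # 100

# How long would the solo Windows XP rewrite take at that pace?
days_needed = WINDOWS_XP_LINES / LINES_PER_DAY  # 45,000 days
years_needed = days_needed / 365                # ~123 years
print(round(years_needed))     # 123 -> a century is "almost enough time"
```

At a thousand lines a day, the solo rewrite would take roughly 123 subjective years, which is why the century bought by the hundredfold speed-up is only “almost enough time.”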

Ultimately, he believes that “[t]here is one context in which […] recursive self-improvement is a meaningful concept, and it’s when we consider the capabilities of human civilization as a whole. Note that this is different from individual intelligence. There’s no reason to believe that humans born ten thousand years ago were any less intelligent than humans born today; they had exactly the same ability to learn as we do. But, nowadays, we have ten thousand years of technological advances at our disposal, and those technologies aren’t just physical — they’re also cognitive.

Let’s consider Arabic numerals as compared with Roman numerals. With a positional notation system, such as the one created by Arabic numerals, it’s easier to perform multiplication and division; if you’re competing in a multiplication contest, Arabic numerals provide you with an advantage. But I wouldn’t say that someone using Arabic numerals is smarter than someone using Roman numerals. By analogy, if you’re trying to tighten a bolt and use a wrench, you’ll do better than someone who has a pair of pliers, but it wouldn’t be fair to say you’re stronger. You have a tool that offers you greater mechanical advantage; it’s only when we give your competitor the same tool that we can fairly judge who is stronger. Cognitive tools such as Arabic numerals offer a similar advantage; if we want to compare individuals’ intelligence, they have to be equipped with the same tools.

Simple tools make it possible to create complex ones; this is just as true for cognitive tools as it is for physical ones. Humanity has developed thousands of such tools throughout history, ranging from double-entry bookkeeping to the Cartesian coördinate system. So, even though we aren’t more intelligent than we used to be, we have at our disposal a wider range of cognitive tools, which, in turn, enable us to invent even more powerful tools.

This is how recursive self-improvement takes place — not at the level of individuals but at the level of human civilization as a whole. I wouldn’t say that Isaac Newton made himself more intelligent when he invented calculus; he must have been mighty intelligent in order to invent it in the first place. Calculus enabled him to solve certain problems that he couldn’t solve before, but he was not the biggest beneficiary of his invention — the rest of humanity was. Those who came after Newton benefitted from calculus in two ways: in the short term, they could solve problems that they couldn’t solve before; in the long term, they could build on Newton’s work and devise other, even more powerful mathematical techniques.

This ability of humans to build on one another’s work is precisely why I don’t believe that running a human-equivalent A.I. program for a hundred years in isolation is a good way to produce major breakthroughs. An individual working in complete isolation can come up with a breakthrough but is unlikely to do so repeatedly; you’re better off having a lot of people drawing inspiration from one another. They don’t have to be directly collaborating; any field of research will simply do better when it has many people working in it.

[…]

The rate of innovation is increasing and will continue to do so even without any machine able to design its successor. Some might call this phenomenon an intelligence explosion, but I think it’s more accurate to call it a technological explosion that includes cognitive technologies along with physical ones. Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they can’t generate a technological explosion by themselves. You need people to do that, and the more the better. Giving better hardware and software to one smart individual is helpful, but the real benefits come when everyone has them. Our current technological explosion is a result of billions of people using those cognitive tools.”

And also this…

“To move from speechlessness to speech requires a person — perhaps a wiser part of ourselves — who can hear and receive our experience,” Shierry Weber Nicholsen writes in The Awe Before There Are Words, an excerpt from The Love of Nature and the End of the World (MIT Press, 2001), in which she explores dimensions of our emotional experience with the natural world that are so deep and painful that they often remain unspoken.

“As we are heard, we become able to hear our experience ourselves. In the beginning, however, is speechlessness, unformed experience no doubt both beautiful and terrifying. Silence sometimes means that there are no words yet.

Awe touches us even more deeply than a felt love, yet it is deep in darkness. It is not simply unspoken; it is speechless. A friend tells me that she cannot describe her feeling for the natural world as love. It’s not love but awe, she says. She is simply struck speechless at the sight of a heron lifting its wing. Awe-struck, she is incapable of saying more.

In part, awe does not have words because it is utterly private, not ‘for show.’ But it is more than private. It is an involuntary speechlessness. That we seldom find the sense of awe in our talk about the environment may be due in part to our diminished capacity for awe, but it is also due to the inherent speechlessness that awe brings us to. We cannot even put words to it ourselves. It is not surprising that we do not speak of it to others.

Awe is the sense of an encounter with some presence larger than ourselves, mysterious, frightening and wonderful, numinous, sacred. It is the sense of something that we are not capable of containing within our capacity for thought and speech. In awe, one’s self is felt only as something small and incapable, speechless, perhaps graced by the experience but unequal to it.”

“A friend told me that when she first saw Cézanne’s painting The Bather […] she experienced the figure in the painting as emerging from the canvas and coming toward her, then moving back, then emerging again. She was awestruck by Cézanne’s capacity to shape space in this way. ‘It was almost like a bolt of lightning went into me,’ she said in recounting the experience. ‘Some mystery, something entered into me so strongly that it shook me to the bone,’” Shierry Weber Nicholsen writes in The Awe Before There Are Words. (Painting: The Bather, c. 1885, by Paul Cézanne; oil on canvas, 127 x 96.8 cm. Collection of the Museum of Modern Art (MoMA), New York)

“Awe makes us feel amazed, astounded, struck dumb. Joseph Campbell’s term aesthetic arrest, which denotes something similar, conveys this sense. We are stopped in our tracks. The words amazed and astounded both suggest a blow to one’s normal mental functioning, as when one is literally stunned or struck or loses one’s normal orientation (as in a maze). In his book Dream Life, Donald Meltzer, the influential psychoanalyst, tells the story of a little boy whose therapist, in a gesture out of the ordinary, wiped his face. The boy sat there ‘amazed.’ How are we to understand this? Meltzer quotes from the Talmud, the Jewish book of law: ‘Stand close to the dying, because when the soul sees the abyss it is amazed.’ For the soul of the one dying, death seems an ‘unbearably new’ experience. When a particular emotion has never been felt before, it will not immediately yield its meaning, says Meltzer, and the psyche responds with amazement.

The notion of an experience that does not immediately yield its meaning is the key to the speechlessness of awe in the face of the natural world. While awe stops us in our tracks, this is not the end of our experiencing but rather a beginning. Somehow we intuitively sense that the experience is extraordinarily rich and will require vast transformations of our mental structures as we assimilate it. Our intuition of the long, unknown process ahead stops us cold.”

In Prometheus’ Toolbox, part of Lapham’s Quarterly’s Technology Issue (Volume XIV, Number 1: Winter 2021), Adrienne Mayor reflects on human life as technology, from Greek mythology to Mary Wollstonecraft Shelley’s Frankenstein.

“The artistic representations of Prometheus working with sections of the human body and assembling a skeleton tell us that artists and viewers of antiquity understood his creation as analogous to a sculptor beginning with an interior framework to make automatons, which would then become the original living humans. In the first stage, he builds what viewers recognize as their own anatomy, logically assembling the progenitors of the human race from the inside out. But what does it mean to imagine humans as engineered entities?

In all the variants of the Prometheus creation myth and artwork, the realistic forms of humans will become the reality they portray: they become real people. This paradoxical perspective raises the timeless question: Are humans somehow automatons of the gods? The almost subconscious fear that we could be soulless machines manipulated by other powers poses a profound philosophical conundrum pondered since ancient times. If we are the creations of the gods or unknown forces, how can we have self-identity, agency, and free will?

Plato was one of the first to consider the possibility that humans are nonautonomous, writing in Laws, ‘Let us suppose that each of us living creatures is an ingenious puppet of the gods.’ Concerns about autonomy also suffuse traditional tales about robots. In the Hindu Kathasaritsagara (c. 1050), an entire city is populated by silent townspeople and animals, later revealed to be realistic wooden puppets controlled by a solitary man on a throne inside the palace. The notion that humans arose as the automatons or playthings of an imperfect or evil demiurge (along with the ensuing questions of volition and morality) was forcefully articulated in gnosticism, a religious movement of the first through third centuries.”

“Why might Prometheus have risked the anger of the gods to help humans? One possible reason is given in the myth of Prometheus and Epimetheus, related by Plato in the fourth century BC. After earth’s creatures had been created, each was ‘programmed’ with capabilities and defenses so that they would not fall into mutual destruction but could maintain equilibrium in nature. But the animals received all the ‘apps’ first, and nothing was left over for the naked and defenseless humans. Feeling pity, Prometheus gave the puny mortals craft and fire,” Adrienne Mayor writes in Prometheus’ Toolbox. (Painting: Prometheus Bound, begun c. 1611–12, completed by 1618, by Peter Paul Rubens; oil on canvas, 242.9 x 209.7 cm. Collection of the Philadelphia Museum of Art)

“The French philosopher René Descartes, who was quite familiar with the gear- and spring-powered automatons of his day, embraced the idea that the body is a machine. He even predicted that one day we might need a way to determine whether something was a machine or human. What, he mused, “if there were machines in the image of our bodies and capable of imitating our actions?” He suggested that tests based on flexibility of behavior and linguistic abilities might be able to expose nonhuman things. More than four centuries later, one of the replicants in the 1982 film Blade Runner echoes Descartes’ famous conclusion, ‘I think, therefore I am.’

Yet the dilemma lingers: the impossibility of devising a Turing test to prove to myself that I am not an android automaton. Contemplating the ancient illustrations of human beings as products of Promethean creation, carved on precious gems more than two millennia ago, is another powerful reminder of the age-old philosophical puzzle of human autonomy. Set against the classical Greek account of Prometheus, the modern tale of Frankenstein’s automaton delivers an especially urgent question and warning today: What do creators owe their creations? Victor Frankenstein was ultimately appalled by his monster and disowned it, while Prometheus is the archetype of a creator who feels a sense of empathy and takes responsibility for his biotechnical creation. As the juggernaut of ever-more advanced technology speeds relentlessly onward, it is perhaps worth recalling that the original Greek meaning of the name Prometheus is ‘foresight.’”

In interviews for his sociological book on everyday suffering and our troubled quest for self-mastery, Joseph E. Davis found little premium placed on ‘being authentic.’

“And yet, organisational consultants inform us, in the pages of the Harvard Business Review, that ‘the term authenticity has become a buzzword among organisational leaders.’ In fact, authenticity is ‘now ubiquitous in business, on personal blogs and even in style magazines,’ according to another writer. ‘Everyone wants to be authentic,’” Davis writes in When your authenticity is an act, something’s gone wrong. So, which is it? Is authenticity fading away as a personal ethic or is it something everyone wants to be? In fact, both are true, he claims, because the meaning of authenticity is changing.

“In his powerful critique The Ethics of Authenticity (1991), [Charles Taylor] argues that our contemporary culture of self-fulfilment and unfettered choice is built, in part, on ‘trivialised’ and ‘self-centred modes’ of authenticity. ‘Properly understood,’ however, ‘authenticity is not the enemy of demands that emanate from beyond the self’ — demands of society, nature, tradition, God or the bonds of solidarity — ‘it supposes such demands.’ To bracket them off, he continues, ‘would be to eliminate all candidates for what matters’.

The performative mode is, if anything, a further flight into atomism and away from stable frameworks and sources of meaning. But the problem runs deeper, as the demonstration of specialness and optimised self-development are built into the very standards of success. The performative fosters a detached form of self-awareness that potentially measures everything in terms of its strategic value for visibility, recognition and reward. And, knowing the game, it fosters the sceptical sense that everyone else’s actions carry an ulterior, manipulative intent. Just being ourselves becomes a guise, behind which we fashion ourselves to be — in the worldly scale of values — someone who counts.

The performative mode fosters a profound isolation and sense of insecurity. This mode captures many of the normative standards against which the people I interviewed evaluated themselves and found themselves wanting — they weren’t outgoing enough, positive enough, performing highly enough, moving on from loss or defeat quickly enough, organising their intimate relations contractually enough. They weren’t ‘special’, but ordinary — in dread of being labelled ‘losers’. What I found confirms [Andreas Reckwitz’s] claim that the demand to stand out and prove your worth is ‘a systematic generator of disappointment that does much to explain today’s high levels of psychological disorder.’

Taylor suggests that, to confront false modes of authenticity and open a space to consider alternative conceptions of the good, we should remind ourselves of those features of the human condition that show these modes to be empty. A place to begin, following [Gordon Marino’s] guidance [The Existentialist’s Survival Guide: How to Live Authentically in an Inauthentic Age], is with the Danish philosopher Søren Kierkegaard. Our existential condition reveals itself to us most clearly when our lives have become unmoored, when we come face to face with our vulnerability, our dependence, our limits, the seeming meaninglessness of it all. Just here is where Kierkegaard intervenes. If we want to live authentically — properly understood — there’s no wiser guide.”

In Why Every City Feels the Same Now, Darran Anderson tells how he woke up in a hotel room unable to determine where he was in the world. The state of placelessness he experienced is what the anthropologist Marc Augé calls non-place. “In non-places, history, identity, and human relation are not on offer. Non-places used to be relegated to the fringes of cities in retail parks or airports, or contained inside shopping malls. But they have spread. Everywhere looks like everywhere else and, as a result, anywhere feels like nowhere in particular,” Anderson writes.

“The opposite of placelessness is place, and all that it implies — the resonances of history, folklore, and environment; the qualities that make a location deep, layered, and idiosyncratic. Humans are storytelling creatures. If a place has been inhabited for long enough, the stories will already be present, even if hidden. We need to uncover and resurface them, to excavate the meanings behind street names, to unearth figures lost to obscurity, and to rediscover architecture that has long since vanished. A return to vernacular architecture — the built environment of the people, tailored by and for local culture and conditions — is overdue. It can combat the placelessness that empires and corporations have imposed.”

Above and below: Aaranya (2019) is an agricultural farmstay in a rural setting at the edge of the Sasan Gir Lion Sanctuary in Gujarat. It was designed by the Ahmedabad-based architect Himanshu Patel of d6thD design studio, with the principles of vernacular architecture overtly in mind. (Photography by Inclined Studio)
Above and below: Joglo Ngebo (2019) in Yogyakarta, Indonesia, by Umran Studio, is an attempt to create a humane design, combining vernacular and modern architecture. (Photograph by Bayu Atha)

“Vernacular is an umbrella term architects and planners use to describe local styles. Vernacular architecture arises when the people indigenous to a particular place use materials they find there to build structures that become symptomatic of and attuned to that particular environment. Augé called it ‘relational, historical and concerned with identity.’ It aims for harmonious interaction with the environment, rather than setting itself apart from it,” Anderson writes.

But vernacular architecture alone won’t be able to satisfy the demands of population density, technology and standard of living. Equally, hegemonic architecture has been unsustainably detached from place with disastrous results, not just environmental but social.

“Creativity often works according to a dialectic process. Frank Lloyd Wright sought to ‘break the box’ of Western architecture by shifting geometries, letting the outside in, and designing architecture within a natural setting, as he did with Fallingwater, one of his most famous designs. Wright was inspired by a love of the Japanese woodblock prints of Hiroshige and Hokusai — an influence he would later repay by training Japanese architects such as Nobuko and Kameki Tsuchiura, who reinterpreted European modernist design in Japan. The goal is not to replace glass skyscrapers with thatch huts, but to see vernacular as the future, like Wright did, rather than abandoning it to the past.”

Casa de Vidro (1950) in Morumbi, São Paulo, by Lina Bo Bardi — “Architectural historians such as Liane Lefaivre, Alexander Tzonis, and Kenneth Frampton have given the name ‘critical regionalism’ to the process of re-localizing architecture. It’s a philosophy more than a movement or a style. The architects associated with it, including Minnette de Silva in Sri Lanka, Lina Bo Bardi in Brazil, and Muzharul Islam in Bangladesh, share commonality in their differences.”

“In truth, all architecture is vernacular: It is tied to place and to culture, however much glass-and-steel modernism has attempted to deny or ignore this fact. ‘Starchitects’ may design buildings that could crash-land in any city or pay cursory tokenistic homage to their surroundings. Cities may adopt a siege mentality to the environment while being deeply reliant upon it for survival. Modernity’s catastrophe is best captured in the desire to build universal citadels that separate people from the particulars: of cause and effect, of climate, of the natural world, of local culture. To counter those trends requires more than just preserving different styles of buildings. Vernacular architecture reflects who the built environment is by and for.”

Hilary Hahn’s latest album Paris (Deutsche Grammophon, 2021) “is about expression, it’s about emotion, it’s about feeling connected to a city and a cultural intersection.” (Photograph by Dana van Leeuwen)

“Art always questions itself, art is always looking for ways of representing, speaking to humanity, discussing difficult things and revealing beautiful things.” — Hilary Hahn in an interview with Gramophone’s Andrew Mellor

Reading notes will be back next week, if fortune allows, of course. In the meantime, if you want to know more about my work with senior executives and leadership teams, please visit markstorm.nl. You can also browse through my writings and follow me on Twitter.
