Random finds (2019, week 9) — On the robots-are-stealing-our-jobs theory, what a corporation might look like in 2050, and seeing things as they are

Mark Storm
24 min read · Mar 2, 2019


A concrete staircase in House Alder in Zürich, Switzerland, by Andreas Fuhrimann Gabrielle Hächler Architekten.

“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne

Random finds is a weekly curation of my tweets and, as such, a reflection of my fluid and boundless curiosity.

If you want to know more about my work and how I help leaders and their teams find their way through complexity, ambiguity, paradox & doubt, go to Leadership Confidant — A new and more focused way of thriving on my ‘multitudes’ or visit my new ‘uncluttered’ website.

This week: Jill Lepore on robots competing for our jobs (or not); three scenarios for the corporation in 2050; Soetsu Yanagi on seeing things as they really are; when does our intelligence peak?; do we need to take down nudes?; everyday beauty is a matter of manners, not style; Frank Gehry at 90; and, finally, “my fame extends to heaven, but I live in Ithaca.”

The robots-are-stealing-our-jobs theory

The robots are coming and, according to Andrés Oppenheimer in The Robots Are Coming: The Future of Jobs in the Age of Automation, no one is safe. The old robots may have been blue-collar workers, burly and clunky, but the new ones are “white-collar robots.”

These white-collar robots aren’t exactly robots but, somehow, fall into the same category, the historian Jill Lepore writes in Are Robots Competing for Your Job? They are mainly algorithms. Except when they are people from other countries who can steal your job without even having to cross the border. They are what the economist Richard Baldwin, in The Globotics Upheaval: Globalization, Robotics, and the Future of Work, calls ‘Remote Intelligence,’ or R.I. “[This] international talent tidal wave,” he warns, “is coming straight for the good, stable jobs that have been the foundation of middle-class prosperity in the US and Europe.”

“Baldwin offers three-part advice: (1) avoid competing with A.I. and R.I.; (2) build skills in things that only humans can do, in person; and (3) ‘realize that humanity is an edge, not a handicap.’ What all this means is hard to say, especially if you’ve never before considered being human to be a handicap,” Lepore writes. “As for the future of humanity, Oppenheimer offers another cockamamie rule of three: ‘Society will be divided into three general groups. The first will be members of the elites, who will be able to adapt to the ever-changing technological landscape and who will earn the most money, followed by a second group made up primarily of those who provide personalized services to the elite, including personal trainers, Zumba class instructors, meditation gurus, piano teachers, and personal chefs, and finally a third group of those who will be mostly unemployed and may be receiving a universal basic income as compensation for being the victims of technological unemployment.’” This is what Yuval Noah Harari calls the “useless class.”

“Fear of a robot invasion is the obverse of fear of an immigrant invasion, a partisan coin: heads, you’re worried about robots; tails, you’re worried about immigrants. There’s just the one coin. Both fears have to do with jobs, whose loss produces suffering, want, and despair, and whose future scarcity represents a terrifying prospect. Misery likes a scapegoat: heads, blame machines; tails, foreigners. But is the present alarm warranted? Panic is not evidence of danger; it’s evidence of panic. Stoking fear of invading robots and of invading immigrants has been going on for a long time, and the predictions of disaster have, generally, been bananas. Oh, but this time it’s different, the robotomizers insist,” Lepore writes.

In Rise of the Robots: Technology and the Threat of a Jobless Future and an essay in Confronting Dystopia: The New Technological Revolution and the Future of Work, Martin Ford acknowledges that earlier robot-invasion panics were unfounded. This explains why contemporary economists are generally dismissive of arguments that technological progress might lead to unemployment as well as falling wages and soaring income inequality. After all, Ford argues, “history shows that the economy has consistently adjusted to advancing technology by creating new employment opportunities and that these new jobs often require more skills and pay higher wages.”

But that was then, and the reason things will be different this time, he argues, has to do with the changing pace of change. “The transformation from an agricultural to an industrial economy was linear; the current acceleration is exponential. The first followed Newton’s law; the second follows Moore’s. The employment apocalypse, when it comes, will happen so fast that workers won’t have time to adjust by shifting to new employment sectors, and, even if they did have time to adjust, there would be no new employment sectors to go to, because robots will be able to do just about everything,” Lepore writes.

“Fear of a robot invasion is the obverse of fear of an immigrant invasion, a partisan coin: heads, you’re worried about robots; tails, you’re worried about immigrants. There’s just the one coin. Both fears have to do with jobs, whose loss produces suffering, want, and despair, and whose future scarcity represents a terrifying prospect. Misery likes a scapegoat: heads, blame machines; tails, foreigners,” Jill Lepore writes in Are Robots Competing for Your Job? (Illustration by Christoph Niemann for The New Yorker)

“It is quite possible that this thesis is correct; it is not possible to know that it is correct. Ford, an advocate of universal basic income, is neither a historian nor an economist. He is a futurist, a modern-day shaman, with an M.B.A. Everybody thinks about the future; futurists do it for a living. Policymakers make plans; futurists read omens. The robots-are-coming omen-reading borrows as much from the conventions of science fiction as from those of historical analysis. It uses ‘robot’ as a shorthand for everything from steam-powered looms to electricity-driven industrial assemblers and artificial intelligence, and thus has the twin effects of compressing time and conflating one thing with another. It indulges in the supposition that work is something the wealthy hand out to the poor, from feudalism to capitalism, instead of something people do, for reasons that include a search for order, meaning, and purpose. […]

Futurists foretell inevitable outcomes by conjuring up inevitable pasts. People who are in the business of selling predictions need to present the past as predictable — the ground truth, the test case. Machines are more predictable than people, and in histories written by futurists the machines just keep coming; depicting their march as unstoppable certifies the futurists’ predictions. But machines don’t just keep coming. They are funded, invented, built, sold, bought, and used by people who could just as easily not fund, invent, build, sell, buy, and use them. Machines don’t drive history; people do. History is not a smart car,” Lepore writes.

But not everybody falls for ‘The Robots Are Coming’ narrative. Oren Cass, an economist and the author of The Once and Future Worker: A Vision for the Renewal of Work in America, is fed up with the robot hysteria. “Technological innovation and automation have always been integral to our economic progress, and in a well-functioning labor market, they should produce gains for all types of workers,” he insists.

The historian Louis Hyman argues, in Temp: How American Work, American Business, and the American Dream Became Temporary, that in the course of the past century management consultants took the wheel and reinvented work, making employers more like machines and turning work into the kind of thing that robots could do long before there were any robots able to do it. Cass blames a different set of mid-twentieth-century thinkers for the current malaise. “In the middle decades of the twentieth century, he argues, economic policymakers abandoned workers and the health of the labor market in favor of a commitment to over-all economic growth, with redistribution as an adjustment and consumerism as its objective. That required quantifying prosperity, hence the G.D.P., a measure that Cass, along with other writers, finds to be disastrous, not least because it values consumers above producers. Cass sees universal basic income as the end-stage scenario of every other redistribution program, whose justification is that the poor will be fine without work as long as they can buy things,” Lepore writes.

“Cass offers a careful criticism of the robots-are-stealing-our-jobs theory. He cites four of its errors. It overestimates twenty-first-century innovations and underestimates the innovations of earlier centuries. It miscalculates the pace of change. It assumes that automation will not create new sectors. (3-D printing might replace a lot of manufacturing workers, but it could also create a lot of new small businesses.) And it fails to appreciate the complexity of many of the jobs it thinks robots can do. The 2013 Oxford study [which predicted that 47 percent of U.S. jobs are at risk of being replaced by robots and artificial intelligence over the next fifteen to twenty years] that kept Andrés Oppenheimer up at night Cass finds to be mostly silly. Its authors, Carl Frey and Michael Osborne, rated seven hundred and two occupations from least ‘computerizable’ to most. Highly vulnerable are school-bus drivers, and, while a self-driving school bus does not seem technically too far off, Cass points out, few parents can imagine putting their kids on a bus without a grownup to make sure they don’t bash one another the whole way to school.”

“Heads, the robots are coming! Accept the inevitability of near-universal unemployment! Tails, the Mexicans are coming! Close the borders!”

“‘The story goes that “automation” or the “knowledge economy,” not bad public policy, is to blame,’ Cass writes. ‘Historically, economists and policy makers have led the effort to explain that technological innovation is good for workers throughout the economy, even as its “creative destruction” causes dislocation for some. So why, suddenly, are they so eager to throw robots and programmers under the bus?’ One answer might be that, given the current state of American political polarization, it’s either throw the robots under the bus or throw the immigrants. Cass, not surprisingly, advocates restricting immigration,” Lepore writes.

When Donald Trump ran for President, “The Economist announced a new political fault line, not between left and right but between open and closed: ‘Welcome immigrants or keep them out? Open up to foreign trade or protect domestic industries? Embrace cultural change, or resist it?’ Barack Obama was an opener. Openers tend to talk about robots. ‘The next wave of economic dislocation won’t come from overseas,’ Obama said in his farewell address […]. ‘It will come from the relentless pace of automation that makes many good, middle-class jobs obsolete.’” Trump, on the other hand, is a closer. “Closers tend to talk about immigrants. Trump has tweeted the word ‘jobs’ nearly six hundred times, but not once has he tweeted the words ‘robot,’ ‘robots,’ or ‘automation.’”

Trump’s Administration mocks the fear of a robot invasion. Closers usually do. It is therefore ironic that a recent study published in the Oxford Review of Economic Policy argues that the robot caravan got Trump elected. Measuring robot density against election returns, its authors found that electoral districts that became more exposed to automation in the years running up to the election were more likely to vote for Trump. “Indulging in a counterfactual, they suggest that a less steeply rising increase in exposure to robots would have tipped both Pennsylvania and Wisconsin toward voting for Hillary Clinton,” Lepore writes. “According to this line of thinking, Twitter bots and fake Facebook news didn’t elect Donald Trump, but robots really might have. Or maybe it was all the talk about the wall.”

What a corporation might look like in 2050

In Wired, Stowe Boyd is “resorting to a futurist sleight-of-hand to get to an answer” to the question: what might a corporation look like in 2050?

Rather than simply extrapolating from the present, he picks three forces — economic inequality, climate change, and artificial intelligence and robots — that will serve as dimensions for scenario development, which will help us think about our future in a structured way.

With regard to AI and robots, Boyd believes there will be necessary regulatory checks. “While we will allow autonomous vehicles to shuttle us around, and algorithms to select the best candidates for a job — because AI is better than us at that — people will remain wary of AIs. I don’t think we’ll be handing over control of our nuclear weapons — or the strategic direction of our companies — to artificials, no matter how smart,” he writes. As a result, in two scenarios AI is checked or limited in its impact, while in one scenario — Neo-feudalistan — AI and robots become the primary means of production.

Humania

Humania is the most egalitarian and democratic scenario. In this scenario, businesses in the year 2050 will be “egalitarian, fast-and-loose, and porous. Egalitarian in the sense that Humania workers have great autonomy: They can choose who they want to work with and for, as well as which initiatives or projects they’d like to work on.

They’re fast-and-loose in that they are organized to be agile and lean, and in order to do so, the social ties in businesses are much looser than in the 2010s. It was those rigid relationships — for example, the one between a manager and her direct reports — that, when repeated across layers of a hierarchical organization, lead to a slow-and-tight company,” Boyd writes.

“Instead of a pyramid, Humania’s companies are heterarchies: They are more like a brain than an army. In the brain — and in fast-and-loose companies — different sorts of connections and groupings of connected elements can form.” (Illustration by Jenn Liv for WIRED).

Humania’s companies are [brain-shaped] heterarchies. There is no single way to organize, and people can choose the relationships that make the most sense. “[Careers] involve many different jobs and roles, and considerable periods of time out of work. Basic universal income is guaranteed and generous benefits for family leave are a regular feature of work. […] This is the porous side of things; the edge of the company is permeable, and people easily leave and return,” while “time-limited standard employment agreements — like Reid Hoffman’s ‘Tours of Duty’ model — allow security for workers and flexibility for companies.”

Neo-feudalistan

In Neo-feudalistan, “[corporations] are able to invest in ever-more-intelligent AI, driving down the prices of food, goods, and services across the board in almost all industries. While this yielded profits sufficient to maintain the system, it also created companies with significantly smaller staffs.

In general, businesses in Neo-feudalistan are primarily driven by carefully-managed artificials, configured to have deep expertise in narrow domains, and overseen by small teams of highly trained experts in those domains, and supported by scientists who understand the workings of the artificials.”

“In general, businesses in Neo-feudalistan are primarily driven by carefully-managed artificials, configured to have deep expertise in narrow domains, and overseen by small teams of highly trained experts.” (Illustration by Jenn Liv for WIRED).

“Many of the internal functions now performed by people are handled in Neo-feudalistan companies by artificials, as well. […] Starting in the 2020s, companies handed over ‘human resources’ to algorithms relying on big data because it had been shown that people are too cognitively biased to make good hiring and promotion decisions. Relatively quickly, other managerial functions were handed over, too. Like the companies of Humania, Neo-feudalistan companies are brain-shaped, but most of the neurons aren’t human. Of course, the people that remain — and the companies’ owners — are extremely well-paid, and very skilled at getting the most out of AI.

In order to counter the revolutionary tendencies that may arise due to the millions who are unemployed, a universal basic income has been enacted across Neo-feudalistan. It provides enough to support a decent lifestyle, and access to basic services like health care, education, and transport. This was largely underwritten by massive reductions in the cost of production and energy. Basic goods sell for perhaps a tenth of their 2015 cost in Neo-feudalistan, because there are so few people in the supply chain.”

Collapseland

“Dithering by governments and corporations has allowed climate change to push the world into increased heat, drought, and violent weather. The Human Spring of the 2020s led to a conservative backlash and a suppression of the movement itself. It also led to a suppression of advancements in AI, since it became associated with the science orientation of the movement.

But governments and corporations get their act together in the late 2020s and 2030s to avert an extinction event via the global adoption of solar. However, this only comes after a serious ecological catastrophe has occurred. Inequality remains unchecked, and the poor become much poorer.”

“It’s no different from the company you work for today, except longer hours, fewer co-workers, less pay, and much more dust. To increase profits, corporations have cut staff and forced existing workers to work harder.” (Illustration by Jenn Liv for WIRED).

“Collapseland businesses are much like businesses of 2015. Most efforts are directed toward basic requirements — like desalinating water, relocating people away from low-lying or drought-stricken areas, and struggling with food production challenges. As a result, little innovation has taken place. It’s no different from the company you work for today, except longer hours, fewer co-workers, less pay, and much more dust. To increase profits, corporations have cut staff and forced existing workers to work harder.”

Despite this being an exercise, there is an important takeaway: “Everything has to fall into place — inequality countered, climate change overcome, and the acceptance of the human right to work (which means limiting AI) — for something like Humania to come into existence,” Boyd writes.

“For our children and grandchildren to live happy, meaningful lives, and for civilization itself to prosper and evolve, we will have to have that Human Spring. Soon. 2050 Is Closer Than You Think.”

Seeing things as they are

Soetsu Yanagi (1889–1961) was a Japanese poet, philosopher, art historian and aesthete, who evolved a theory of why certain objects made by unknown craftsmen are so beautiful. He became the founding father of the Japanese folk crafts movement, ‘mingei’, which celebrates the ‘every-day-ness’ of art.

In The Beauty of Everyday Things, a collection of essays published by Penguin Modern Classics, Yanagi offers a fresh perspective on how to better appreciate the objects that surround us. In The Japanese Perspective (pages 139–166), an essay from 1957, he describes the difference between ‘seeing’ and ‘knowing.’

“While everyone, of course, sees, there are many ways of seeing, so that what is seen is not always the same. What is the proper way of seeing? In brief, it is to see things as they are. However, very few people possess this purity of sight. That is, such people are not seeing things as they are, but are influenced by preconceptions. ‘Knowing’ has been added to the process of ‘seeing.’

We see something as good because it is famous; we are influenced by reputation; we are swayed by ideological concerns; or we see based on our limited experience. We can’t see things as they are. To see things in all their purity is generally referred to as intuition. Intuition means that things are seen directly, without intermediaries between the seeing and the seen; things are comprehended immediately and directly. Yet something as simple as this is not easy to do. We mostly see the world through tinted glasses, through biased eyes, or we measure things by some conceptual yardstick. All we have to do is look and see, but our thinking stands in the way. We can’t see things directly; we can’t see things as they are. Because of our sunglasses, the colour of things changes. Something stands between our eyes and things. This is not intuition. Intuition means to see immediately, directly. Something we saw yesterday can no longer be seen directly; it has already become a secondhand experience. Intuition means to see now, straight and true; nothing more or less. Since this means seeing things right in front of you without intermediaries, it could be called ‘just seeing.’ This is the commonsensical role of intuition. In Zen terms it might be expressed by the saying, ‘One receives with an empty hand.’”

“This is an era of critical evaluation, an era of conscious awareness. We have been given the fortunate role of acting as judges. We should not squander this opportunity.” — Soetsu Yanagi in his 1926 essay, ‘The Beauty of Miscellaneous Things.’ — A tea bowl from Japan’s Edo period, early 19th century (photograph courtesy of The Walters Art Museum)

“Considered as a form of activity, the seeing eye and the seen object are one, not two. One is embedded in the other. People who know with the intellect before seeing with the eyes cannot be said to be truly seeing. This is because they cannot penetrate beyond the ken of the intellect and achieve perfect perception. There is a huge difference between intellectual knowledge and direct intuition.

With intuition, time is not a factor. It takes place immediately, so there is no hesitation. It is instantaneous. Since there is no hesitation, intuition doesn’t harbour doubt. It is accompanied by conviction. Seeing and believing are close brothers.”

More from The Beauty of Everyday Things in Random finds (2019, week 5).

And also this …

“When does cognitive functioning peak? As we get older, we certainly feel as though our intelligence is rapidly declining,” writes Scott Barry Kaufman in When Does Intelligence Peak? “However, the nitty-gritty research on the topic suggests some really interesting nuance. As a recent paper notes, ‘Not only is there no age at which humans are performing at peak on all cognitive tasks, there may not be an age at which humans perform at peak on most cognitive tasks.’

In a large series of studies, Joshua Hartshorne and Laura Germine presented evidence from 48,537 people from standardized IQ and memory tests. The results revealed that processing speed and short-term memory for family pictures and stories peak and begin to decline around high school graduation; some visual-spatial and abstract reasoning abilities plateau in early adulthood, beginning to decline in the 30s; and still other cognitive functions such as vocabulary and general information do not peak until people reach their 40s or later.”

“When Does Cognitive Functioning Peak? The Asynchronous Rise and Fall of Different Cognitive Abilities Across the Life Span”, by Joshua Hartshorne and Laura Germine, in Psychological Science, March 13, 2015

“The picture gets even more complicated, however, once we take into account the ‘dark matter’ of intelligence. As Phillip Ackerman points out, should we really be judging adult intelligence by the same standard we judge childhood intelligence? At what point does the cognitive potential of youth morph into the specialized expertise of adulthood?

In the intelligence field, there is a distinction between ‘fluid’ intelligence (indexed by tests of abstract reasoning and pattern detection) and ‘crystallized’ intelligence (indexed by measures of vocabulary and general knowledge). But domain-specific expertise — the dark matter of intelligence — is not identical to either fluid or crystallized intelligence. Most IQ tests, which were only ever designed for testing schoolchildren, don’t include the rich depth of knowledge we acquire only after extensive immersion in a field. Sure, measured by the standards of youth, middle-aged adults might not be as intelligent as young adults, on average. But perhaps once dark matter is taken into account, middle-aged adults are up to par.

To dive deeper into this question, Phillip Ackerman administered a wide variety of domain-specific knowledge tests to 288 educated adults between 21 and 62 years of age. Domains included Art, Music, World Literature, Biology, Physics, Psychology, Technology, Law, Astronomy, and Electronics. Ackerman found that, in general, middle-aged adults are more knowledgeable in many domains compared with younger adults. As for the implications of this finding, I love this quote from the paper:

“[M]any intellectually demanding tasks in the real world cannot be accomplished without a vast repertoire of declarative knowledge and procedural skills. The brightest (in terms of IQ) novice would not be expected to fare well when performing cardiovascular surgery in comparison to the middle-aged expert, just as the best entering college student cannot be expected to deliver a flawless doctoral thesis defense, in comparison to the same student after several years of academic study and empirical research experience. In this view, knowledge does not compensate for a declining adult intelligence; it is intelligence!”

There was an important exception to Ackerman’s finding, however. All three science-related tests (Chemistry, Physics, and Biology) were negatively associated with age. Tellingly, these three tests were most strongly correlated with fluid intelligence. This might explain why scientific genius tends to peak early.

Nevertheless, on the whole these results should be considered good news for older adults. Unless you’re trying to win the Nobel prize in physics at a very old age, there are a lot of domains of knowledge that you can continue to learn in throughout your life.”

The new feminism affects the way people look at art, says Charlotte Higgins in Skin in the game: do we need to take down nudes — or look at them harder?

“Museums and galleries — especially those representing a canonical European art tradition — burst with images of women disrobed and displayed for the delectation of men. Of course, there is nothing new about recognising the extent to which the spectacularisation of the female body has been part of a structure of oppression of women by men. The ‘male gaze’ has been discussed by feminist critics for decades. It became part of popular culture when John Berger’s groundbreaking TV series Ways of Seeing invited a group of women to sit around a coffee table with cigarettes and glasses of wine to discuss the portrayal of the female body in art. ‘Men look at women; women watch themselves being looked at,’ Berger intones gloomily at the start of the episode,” Higgins writes.

“Nevertheless, in the last few years, the borderlines of acceptable looking have become blurred. In 2017, the Metropolitan Museum of Art in New York received a petition to remove Balthus’s Thérèse Dreaming, a painting of a young girl reclining on a chair, one leg up on a stool to reveal her underwear. In Britain, John William Waterhouse’s Hylas and the Nymphs — which shows the seduction of Heracles’s lover by bare-breasted water nymphs — was temporarily taken down from Manchester Art Gallery, in a move that was at the time widely regarded as censorship (wrongly, the museum argued).”

The Three Graces, c. 1517–18, by Raphael. (Photograph courtesy of Royal Collection Trust/Her Majesty Queen Elizabeth II)

“In her forthcoming book The Italian Renaissance Nude, Jill Burke argues that because of the nude’s dominance in European art history, it has come to seem somehow natural, inevitable and universal. Some critics, such as Kenneth Clark, even regarded it as a great summation of human creative endeavour. Burke, who worked on the exhibition, argues that we might in fact see the dominance of the nude as strange: non-European artistic traditions, such as those of China and Japan, never developed a taste for the naked body in the same way.

It seems especially important, just now, not to remove nudes from walls, but to look critically, and to resist the universalising power of the familiar. Those sensuous reclining nudes of Titian — his gorgeous Danaës, raped by golden coins, his drowsing woodland nymphs, his Venuses — have an important precursor: a woodcut in an exquisitely decorated book, printed by the Venetian Aldine Press in the late 15th century. Called the Hypnerotomachia Poliphili, it shows a female figure, naked and recumbent. Over her looms a satyr — half goat, half man — his penis erect. The satyr disappears from the many subsequent depictions of similar sleeping, vulnerable women. But you could say he has never gone away.”

Antonio del Pollaiuolo’s Battle of the Nudes, 1470s. (Photograph courtesy of The Albertina Museum, Vienna)

Architects need to stop aiming for the ‘iconic’ and focus on everyday beauty, says Roger Scruton, who chairs the U.K.’s new Building Better, Building Beautiful Commission.

“Our need for belonging is part of what we are and it is the true foundation of aesthetic judgment. Lose sight of it and we risk building an environment in which function triumphs over all other values, the aesthetic included. It is not that there is a war of styles — any style can prove acceptable if it generates a real settlement, and the point is recognised by a great range of contemporary architects, and not only by those committed to some traditional grammar. The issue is no longer about style wars but about a growing recognition of the deep truth that we build in order to belong. Many committed modernists begin from this truth — for example, Alain de Botton in The Architecture of Happiness and Rowan Moore in Why We Build. Among the new settlements that are proving popular there are as many built in polite modernist styles as there are in some kind of traditional vernacular. The important point is that all of us, the homeless and the disadvantaged as much as those who have invested their savings in a property, wish for a house that is also a home,” Scruton writes in Here’s what I want from modern architecture.

“It is for this reason that ordinary architecture, however adventurous in its use of materials, forms and details, cannot rely on the excuse of artistic licence in order to creep through the planning process. In art we attempt to give the most exalted expression to life and its meaning. In everyday arrangements we simply try to do what looks right. Both cases involve the pursuit of beauty. But the two kinds of beauty touch different areas of the psyche. To create art you need imagination and talent — what the Romantics called ‘genius’; to create everyday beauty you need only humility and respect. In art, we are free to explore life in all its varieties, to enter imagined realms, and to open ourselves to our highest aspirations. Artistic beauty lies at the apex of our endeavours, and to fall short is to fall flat. To rise to that peak requires a distinct artistic personality, and an original and creative use of the available forms.

In everyday life, by contrast, beauty is a matter of adjusting our arrangements so as to fit to the contours of ordinary needs and interests, as when we lay a table, plant vegetables in rows or arrange the pictures on a wall. Everyday beauty lies within reach of us all, while artistic beauty is the occupation of the few. Much architecture lies somewhere between the two, being to a great extent a matter of fitting in and doing the job, but also overcoming aesthetic problems that require imagination and even inspiration for their solution. For ordinary people, however, it is the everyday fitting in that counts, and this is manifested in all the criticisms that we hear of recent developments. In everyday building it is as risky to stand out, to dominate, to be boastful and ‘iconic’ as it is in social gatherings. Everyday beauty is a matter of manners, not style.”

“He didn’t hit his stride till he was 50, and now the architect, as inventive and bold as ever, hangs out with everyone from Harrison Ford to Jay-Z,” Rowan Moore writes in Frank Gehry at 90: ‘I love working. I love working things out.’

“Gehry’s life story has a number of cliff jumps, which take on a mythic quality in the retelling. […] The most pivotal jump concerns the house in Santa Monica where Gehry lived with his second wife, Berta, until recently (they now live in a house designed by their son), and which he remodelled in the 1970s with all the disrespect for convention for which he was not yet world-famous — jagged angles, rough edges, unglamorous materials such as plywood, chain link fencing and corrugated sheet metal. At that time, Gehry had built up a successful practice designing not-bad but not-exceptional commercial projects — apartment blocks, offices, shopping centres — with a sideline doing houses and studios for his artist friends. In 1980 he hosted a dinner at his house to celebrate the opening of a not-bad shopping centre. His developer client looked around him. ‘Is this what you like?’ he asked. Gehry said he did. ‘Well, if you like this you can’t possibly like that,’ said the client, pointing in the direction of the shopping centre, ‘so why are you doing it?’ So Gehry, aged a little past 50, stopped working on all his commercial projects and shrunk his office. He chose, almost, to start over again,” Moore writes.

“He is sensitive to the criticism that his more spectacular work is a series of signature gestures. He hates the label “starchitect” that got stuck to him, along with Zaha Hadid, Rem Koolhaas and others, some time earlier in this millennium. He is, however, able to convey his feelings with humour: once, as I was preparing to take part in a symposium on the sins of starchitecture, a Fedex parcel arrived from his Los Angeles office. It contained T-shirts for the participants, designed by a friend of Gehry’s. ‘Fuck Frank Gehry,’ they said.

For myself, Gehry has always been there. As a student in the 1980s, when postmodernism was bloating and there seemed to be precious little inspiration beyond British hi-tech, it was a revelation to find someone who could be so inventive, so enjoyable and so free, while still to the point. His buildings do what they are supposed to do and then add more. They are generous. They are not embarrassed by inhabitation, as some architecture is, but enhanced by it.”

Danziger Studio and Residence, Los Angeles, 1965 — This early work, for the graphic artist Lou Danziger, beautifully combines workplace, home and courtyard garden. It shows Gehry’s interest in interlocking volumes, even if it has none of the irregular angles that later became his trademark. Fellow architects found it too bold, but many artists loved it. (Photograph by Michael McNamara)
Gehry Residence, Santa Monica, 1978 — “We’d been told that it was haunted,” says Gehry of the home he remodelled for himself and his family, “so I decided that the ghost was a cubist.” The result was an explosion of everyday building materials around an everyday timber house. Neighbours hated it, sued or threatened Gehry with prison, but it made his name. (Photograph by Susan Wood/Getty Images)

In How to Think Like an Entrepreneur, Philip Delves Broughton writes how Frank Gehry sought the company of artists and architects, but found himself marginalised by both groups. The architects tried to belittle him by calling him “artist,” while the artists called him a “plumber.” But Gehry drew his energy from straddling these worlds.

“There was a powerful, powerful energy I was getting from this [art] scene that I wasn’t getting from the architecture world. What attracted me to them is that they worked intuitively. They would do what they wanted and take the consequences. Their work was more direct and in such contrast to what I was doing in architecture, which was so rigid. You have to deal with safety issues — fireproofing, sprinklers, handrails for stairways, things like that. You go through training that teaches you to do things in a very careful way, following codes and budgets. But those constraints didn’t speak to aesthetics,” Gehry said.

Only in 1997, when the Bilbao Guggenheim opened to worldwide acclaim, were Gehry’s blend of energy and competence and his ability to bring a unique architectural vision to reality finally recognized. He was sixty-eight.

Watch the Canadian-American architect talk about his life, architecture and the world today in an interview with Marc-Christoph Wagner for Louisiana Channel at Gehry’s studio in Santa Monica, Los Angeles in November 2018.

Edgemar shopping centre, Santa Monica, 1988 — Edgemar was a mixed-use development of shops, offices, restaurant, studios and a small museum. If its intimate, irregular open spaces are inspired by Tuscan hill towns, its materials are pure LA. Gehry’s Abu Dhabi Guggenheim clients liked its chain link sun screens so much that they’ll be making a comeback in the Gulf. (Photograph by View Pictures/UIG via Getty Images)
8 Spruce Street, New York, 2010 — Manhattan is the hardest stage on which to design a skyscraper — what can be added to the home of the Chrysler, Empire State and Woolworth buildings? 8 Spruce Street, or New York by Gehry (as the marketing people call it), pulls it off. It is distinctive and new, but seems to belong in its eminent company. (Photograph by Jon Hicks/Getty Images)
Bilbao Guggenheim, 1997 — The titanium-clad art museum that put its host city on the tourist map. A popular and critical success that has been much imitated but never equalled. Such objections as there are argue that the showy architecture overwhelms the art. Gehry’s response is to quote artists — Richard Serra, Cy Twombly — who say they love exhibiting there. (Photograph by Alamy)
Aeneas Erects a Tomb to His Nurse, Caieta, and Flees the Country of Circe (Virgil’s Aeneid, Book VII), by Master of the Aeneid, c. 1530–35. (Collection of The Metropolitan Museum of Art, New York)

“my fame extends all the way to heaven.
I live in Ithaca, a land of sunshine.
From far away one sees a mountain there,
thick with whispering trees, Mount Neriton,
and many islands lying around it
close together — Dulichium, Same,
forested Zacynthus. Ithaca itself,
low in the sea, furthest from the mainland,
lies to the west — while those other islands
are a separate group, closer to the Dawn
and rising Sun. It’s a rugged island,
but nurtures fine young men. And in my view,
nothing one can see is ever sweeter
than a glimpse of one’s own native land.”

— Odysseus describes Ithaca’s geography and topography in Book 9 (lines 26 to 39) of the Odyssey (From: The Geography of the Odyssey — Or how to map a myth, by Elizabeth Della Zazzera)

Thank you for reading and Random finds will be back next week. If fortune allows, of course.
