Reading notes (2021, week 7) — On the critique of technology, rethinking the fundamentals of the physical workplace, and life’s stories
Reading notes is a weekly curation of my tweets. It is, as Michel de Montaigne so beautifully wrote, “a posy of other men’s flowers and nothing but the thread that binds them is mine own.”
In this week’s edition: Beyond Heidegger’s discourse on technology; balancing the shift toward remote work with the value of the physical workplace; how you arrange the plot points of your life into a narrative can shape who you are; thinking to some purpose; life’s stories; a universe without comment is a universe without meaning; the act of worldmaking; the North Pole underwater; and, finally, Yo-Yo Ma says we are more than we can measure.
From tech critique to ways of living
“In the 1950s and 1960s, a series of thinkers, beginning with Jacques Ellul and Marshall McLuhan, began to describe the anatomy of our technological society. Then, starting in the 1970s, a generation [including Ivan Illich, Ursula Franklin and Albert Borgmann] emerged who articulated a detailed critique of that society,” the author of Breaking Bread with the Dead: A Reader’s Guide to a More Tranquil Mind, Alan Jacobs, writes in From Tech Critique to Ways of Living.
But despite their “powerful, incisive, and remarkably coherent” critique of technology, which Jacobs calls the Standard Critique of Technology, or SCT, they have had no success in reversing, or even slowing, the momentum of our society’s move toward what Neil Postman called technopoly.
According to Jacobs, “[t]he basic argument of the SCT goes like this. We live in a technopoly, a society in which powerful technologies come to dominate the people they are supposed to serve, and reshape us in their image. These technologies, therefore, might be called prescriptive (to use Franklin’s term) or manipulatory (to use Illich’s). For example, social networks promise to forge connections but they also encourage mob rule. Facial-recognition software helps to identify suspects and to keep tabs on whole populations. Collectively, these technologies constitute the device paradigm (Borgmann), which in turn produces a culture of compliance (Franklin).
The proper response to this situation is not to shun technology itself, for human beings are intrinsically and necessarily users of tools. Rather, it is to find and use technologies that, instead of manipulating us, serve sound human ends and the focal practices (Borgmann) that embody those ends. A table becomes a center for family life; a musical instrument skillfully played enlivens those around it. Those healthier technologies might be referred to as holistic (Franklin) or convivial (Illich), because they fit within the human lifeworld and enhance our relations with one another. Our task, then, is to discern these tendencies or affordances of our technologies and, on both social and personal levels, choose the holistic, convivial ones.”
“The philosophical ancestor of the SCT is Martin Heidegger,” Jacobs writes, and a few points of his famous essay The Question Concerning Technology, which asks what the essence of technology is, require our attention here.
“First, because ‘technology itself is a contrivance,’ an ‘instrumentum,’ we are led to think instrumentally about it. It is a contrivance for mastery, and we therefore naturally think in terms of how we can master it. But when we look more carefully at how technology is a means that we try to master for specific ends, says Heidegger, we realize that we too, as much as the Great Externality called nature, become raw material in the process. […]
As Mark Blitz has written in [Understanding Heidegger on Technology], within the governing logic of our current moment,
‘all things increasingly present themselves to us as technological: we see them and treat them as what Heidegger calls a standing reserve, supplies in a storeroom, as it were, pieces of inventory to be ordered and conscripted, assembled and disassembled, set up and set aside. Everything approaches us merely as a source of energy or as something we must organize. We treat even human capabilities as though they were only means for technological procedures, as when a worker becomes nothing but an instrument for production. Leaders and planners, along with the rest of us, are mere human resources to be arranged, rearranged, and disposed of. Each and every thing that presents itself technologically thereby loses its distinctive independence and form. We push aside, obscure, or simply cannot see, other possibilities.’
This is what Heidegger means when he speaks of the technological ‘enframing’ or ‘positionality’ — the German word is Gestell — of human life. It gradually turns us all into ‘standing-reserve,’ as when we speak with equal facility of ‘natural resources’ and ‘human resources.’
This technological enframing of human life, says Heidegger, first ‘endanger[s] man in his relationship to himself and to everything that is’ and then, beyond that, ‘banishes’ us from our home. And that is a great, great peril.”
“The philosopher Yuk Hui […] thinks that Heidegger is the most profound of recent Western thinkers on technology — but also that it is necessary to ‘go beyond Heidegger’s discourse on technology.’ In his exceptionally ambitious book The Question Concerning Technology in China (2016) and in a series of related essays and interviews, Hui argues, as the title of his book suggests, that we go wrong when we assume that there is one question concerning technology, the question, that is universal in scope and uniform in shape. Perhaps the questions are different in Hong Kong than in the Black Forest. Similarly, the distinction Heidegger draws between ancient and modern technology — where with modern technology everything becomes a mere resource — may not universally hold.
Hui explores, for instance, Kant’s notion of the cosmopolitan, and the related role of print technology. A central concept in Enlightenment models of rationality, the cosmopolitan is the ideal citizen of the world engaged in public reasoning, and Kant believed that a ‘universal cosmopolitan condition’ would one day be the natural outcome of history. But Kant’s understanding of what that means is thoroughly entangled with the rise and expansion of print culture. It is directly through print culture that the Republic of Letters, the very epitome of cosmopolitanism as Kant knew it, is formed. But, then, what might a cosmopolitan be within a society whose print culture is either nonexistent or radically other than the one Enlightenment thinkers knew?
Hui’s novel approach to the question(s) concerning technology thus begins with a pair of seemingly contradictory ideas about whether technology should be seen as universal:
Thesis: Technology is an anthropological universal, understood as an exteriorization of memory and the liberation of organs, as some anthropologists and philosophers of technology have formulated it;
Antithesis: Technology is not anthropologically universal; it is enabled and constrained by particular cosmologies, which go beyond mere functionality or utility. Therefore, there is no one single technology, but rather multiple cosmotechnics.
[Hui] claims that we are now in a position where we can see what is of value in the Thesis only after we fully dwell within the Antithesis. This leads us to the generative idea of ‘multiple cosmotechnics.’ [Cosmotechnics, Hui says,] ‘is the unification of the cosmos and the moral through technical activities, whether craft-making or art-making.’ That is, a cosmotechnics is the point at which a way of life is realized through making.
The point may be illustrated with reference to an ancient tale Hui offers, about an excellent butcher who explains to a duke what he calls the Dao, or ‘way,’ of butchering. The reason he is a good butcher, he says, is not his mastery of a skill, or his reliance on superior tools. He is a good butcher because he understands the Dao: Through experience he has come to rely on his intuition to thrust the knife precisely where it does not cut through tendons or bones, and so his knife always stays sharp. The duke replies: ‘Now I know how to live.’ Hui explains that ‘it is thus the question of living, rather than that of technics, that is at the center of the story.’
This unification — of making and living — might be said to be the whole point of Daoism. Though the same theme is woven through certain Confucian texts and the I Ching, it is particularly notable as the incessant refrain of the Daodejing, or, as it is more commonly called in the English-speaking world, the Tao Te Ching. The title means something like ‘The Classic of the Virtue of the Way’ or ‘The Classic of the Way and of Virtue.’ In both cases ‘virtue’ (Te) should be understood as something close to the Latin virtus or the Greek aretē, meaning a kind of excellence, an excellence that has power.
Hui says, in an interview with Noema magazine about his book, that he has
‘attempted to understand Chinese cosmotechnics through the dynamic relationship between two major categories of traditional Chinese thought: dao, or the ethereal life force that circulates all things (commonly referred to as the way), and qi, which means tool or utensil. Together, dao and qi — the soul and the machine, so to speak — constitute an inseparable unity.’
Hui further comments that if the fundamental concern of Western philosophy is with being and substance, the fundamental concern of Classical Chinese thought is relation. [I]t makes sense, then, that his approach to cosmotechnics would center on the inquiry into a certain relation, that between dao (the way) and qi (tools).”
“In Always Coming Home (1985) — a strange, unclassifiable book, part novel, part ethnography of an invented people of the future, the Kesh — Ursula K. Le Guin imagines a society governed by verse 80 of the Tao Te Ching [Jacobs cites from the translation by Jonathan Star: ‘Neighbouring villages are within sight of each other / Roosters and dogs can be heard in the distance / Should a man grow old and die / without ever leaving his village / let him feel as though there was nothing he missed’]. We first learn a great deal about the people of the valley of the Na — their songs and dances, their pottery, their social organization into Houses, their rites of maturation and of marriage. Then we discover that in one of the villages there is a computer terminal connected via Internet to a vast AI called the City of Mind, which also knows the very different life of a great metropolis not so far away. (Plural ways of life indeed.) People in the villages know that the terminal exists, but most of them aren’t interested in it. Occasionally someone becomes interested, which is fine. The terminal is there when needed.
But social flourishing doesn’t require the terminal. I say ‘social’ flourishing because the Kesh do not live very long. Their lifespan has been diminished by a great plague that once ravaged the world. Such plagues we cannot do very much about, nor the resulting compromise of our collective health. But to live virtuously, in accordance with Dao, and to be content — these we can do. We can only hope that it will not take a truly deadly pandemic — something far worse than the one we’ve had — to remind us of the contentment that can be found in the acceptance of limits.
Always Coming Home illustrates cosmotechnics in a hundred ways. Consider, for instance, information storage and retrieval. At one point we meet the archivist of the Library of the Madrone Lodge in the village of Wakwaha-na. A visitor from our world is horrified to learn that while the library gives certain texts and recordings to the City of Mind, some of their documents they simply destroy. ‘But that’s the point of information storage and retrieval systems! The material is kept for anyone who wants or needs it. Information is passed on — the central act of human culture.’ But that is not how the librarian thinks about it. ‘Tangible or intangible, either you keep a thing or you give it. We find it safer to give it’ — to practice ‘unhoarding.’ She continues,
‘Giving involves a good deal of discrimination; as a business it requires a more disciplined intelligence than keeping, perhaps. Disciplined people come here … historians, learned people, scribes and reciters and writers, they’re always here, like those four, you see, going through the books, copying out what they want, annotating. Books no one reads go; books people read go after a while. But they all go. Books are mortal. They die. A book is an act; it takes place in time, not just in space. It is not information, but relation.’
It is not information, but relation. This too is cosmotechnics.”
What’s an office for?
The Covid-19 pandemic has presented companies with an unprecedented opportunity to figure out how to balance what appears to be a lasting shift toward remote work with the value of the physical workplace, John Seabrook argues in Has the Pandemic Transformed the Office Forever?
“Early in the pandemic, [Microsoft’s CEO, Satya Nadella,] suggested in a conversation with editors of the Times that effective remote collaboration relied in part on ‘social capital.’ The concept that communities grow out of personal interactions was popularized in Robert Putnam’s 2000 best-seller, Bowling Alone. In a job setting, social capital is accumulated by working in the presence of others, and depleted during virtual interactions. Nadella told the Times he was concerned that ‘maybe we are burning some of the social capital we built up in this phase where we are all working remote. What’s the measure for that?’
But when [Seabrook] spoke to Nadella he allowed that when you see people in their homes, with their noisy children and importunate pets, struggling to stay focussed and upbeat, ‘you have a different kind of empathy for your co-workers.’”
“If you entered office life in the eighties, as [Seabrook] did, hierarchy was everywhere you looked. Bosses and other big shots had walled offices with views, while small fry toiled in cubicle reefs, bathed in fluorescent light. The industrial open-office setting where C. C. Baxter labors in Billy Wilder’s 1960 film, The Apartment, a kind of white-collar factory, gave way to the cube farm where Lester Burnham sits in American Beauty, from 1999. Conformity still reigned in the cubicle era, but at least an office schnook had partial visual privacy on three sides. (For sound privacy, you needed an office.) Although they are now derided, cubicles held their charms; I met and courted my wife in one. However, like Bud Baxter, my dream was to have a door with my name on it.
The cubicle evolved out of utopian notions of office flexibility and flow that were promoted in the sixties by Robert Propst, the head of research for the Herman Miller company. Propst grasped that office work was fundamentally different from factory work. Nikil Saval, in his 2014 book, Cubed: A Secret History of the Workplace, writes, ‘Propst was among the first designers to argue that office work was mental work and that mental effort was tied to environmental enhancement of one’s physical properties.’ Propst believed that, in particular, knowledge workers — a term coined by Peter Drucker in 1959 — would benefit from what he called a ‘mind-oriented living space.’ He sought to integrate a more dynamic concept of work into a program of hinged partitions and standing desks. The Action Office, as Propst called it, débuted in 1964. But by the mid-eighties it had evolved into the inert cubicle, and Propst was blamed for fathering it. What happened?
Propst’s action-oriented designs may or may not have increased productivity and collaboration, but they did enhance the bottom line, allowing office managers to add more employees without having to move to a bigger space. As density increased, partitions collapsed into the smallest possible footprint: the ever-shrinking cube. Two years before Propst’s death, in 2000, he told an interviewer, ‘The dark side of this is that not all organizations are intelligent and progressive. Lots are run by crass people who can take the same kind of equipment and create hellholes. They make little bitty cubicles and stuff people in them. Barren, rathole places.’”
“[T]he Action Office that [Robert Propst] had conceived and [George Nelson] designed might have been the first truly modern idea to enter the office — that is, the first in which the aesthetics of design and progressive ideas about human needs were truly united,” Nikil Saval writes in The Cubicle You Call Hell Was Designed to Set You Free. (Photograph: Robert Propst’s Action Office 1, 1964. Courtesy of Herman Miller)
“Growing up in the Bay Area in the seventies and eighties, Primo Orpilla got to see at first hand a new democratic design aesthetic bubbling up from the California tech scene. In the early eighties, the offices of most large tech companies were still what Orpilla calls Dilbertvilles, after the cubicle-dwelling engineer in the Scott Adams comic strip. ‘They were heavy, heavy hierarchical structures,’ he told [Seabrook] — like those of Initech, the company in Mike Judge’s 1999 satire, Office Space. ‘Cubicles, offices, meeting rooms — that was it. We hadn’t had a brainstorm room yet — collaboration wasn’t even in the conversation. You just went from meeting to meeting to meeting.’
[…]
By the late eighties, office managers started asking designers to facilitate this new, team-oriented style of work. ‘It all became about: How do we take care of the people who create this product?’ Orpilla said. ‘They need to be inspired, they need to be fed, and we need to give them the spaces to do their work.’ Free food and other amenities kept engineers in the office, coding into the night. ‘They work long hours, they tend to work in the dark,’ Orpilla went on. ‘They like to hang out for long periods of time.’
The Internet boom of the nineties, which was led in part by entrepreneurial engineers, played a role in spreading the team-based methodology to other forms of knowledge work. Creating a successful digital product such as Google’s Ad Words […] often involves cross-disciplinary teams of engineers, marketers, and product managers. As software became the engine of growth in the tech industry, and in the economy as a whole, hard-walled barriers between formerly separate divisions of workers continued to melt away.
[…]
Designers addressed complaints about the noise and the distractions by incorporating elements of ‘activity-based working,’ a term coined, in 1994, by the Dutch design consultant Erik Veldhoen. Layouts featured a mixture of open areas for team-based work, ‘living rooms,’ and ‘huddle spaces’ meant to promote casual encounters and focussed work. Activity-based design also helped introduce ‘hot desking’ (unassigned first-come, first-served seating), and ‘hoteling’ (reservable desks).”
In a survey conducted by R/GA, many of the global advertising and marketing agency’s employees expressed the fear that remote workers would lose out on opportunities that in-person workers get by virtue of proximity. “Fifty-seven per cent of respondents thought that the stigma of working remotely would linger after the pandemic. ‘When working from home people felt others saw them as unproductive, difficult to reach, and taking an unofficial day off,’ a summary found. ‘There is a lot of concern that when some return to the office, expectations and processes will shift back to favoring those who are physically present.’
The hybrid office sounds like a logical post-pandemic approach, and many companies are trying it, but mixing in-person and remote workers presents new challenges for managers. Ethan Bernstein, a professor at Harvard Business School who studies the workplace, told me that a hybrid setup is very hard to get right, and that he advises businesses to avoid it: ‘I’d say stay all virtual — hybrid is likely to deliver the worst of both worlds.’ A hybrid company still has substantial real-estate costs, and it also has to contend with the potentially serious threat to company culture posed by resentful remote workers who feel that they’ve been unfairly denied plum assignments and promotions. And what about all the people who return to work to discover that they no longer have a desk, and that the sweaters and photographs and other personal items they left behind have been packed up or, worse, placed on a table of shame? As Bernstein put it, ‘People generally prefer a home to a hotel — in life and at work.’
By the time the pandemic hit, open-plan offices had become even more hated than cube farms. Well-heeled companies might be willing to spend money on activity-based typologies that offer respite from open-plan distractions, but, when times are hard and office budgets are cut, the yurt and the extra huddle space are often the first things to go. […]
Workers have responded to this steady erosion of personal space by building cubicles of sound with headphones. Bound in a sonic nutshell, you can feel like a king of infinite office space, as long as you don’t look up from your screen. Since most office work takes place on virtual desktops anyway, it was easy, pre-pandemic, to perform what was essentially remote work while occupying your employer’s expensive real estate.
In The Truth About Open Offices, an article published in the Harvard Business Review in December, 2019, Ethan Bernstein and Ben Waber, the president of Humanyze, a workplace-analytics firm, used smartphones and sensors to track face-to-face and digital interactions at two Fortune 500 companies before and after the companies moved from cubicles to open offices. The authors wrote, ‘We found that face-to-face interactions dropped by roughly 70% after the firms transitioned to open offices, while electronic interactions increased to compensate.’ The virtual workplace, instead of complementing the physical one, had become a refuge from it,” Seabrook writes.
In their article, Bernstein and Waber explain this 70% drop as follows:
“Why did that happen? The work of the 18th-century French philosopher Denis Diderot suggests an answer. He wrote that performers should ‘imagine a huge wall across the front of the stage, separating you from the audience, and behave exactly as if the curtain had never risen.’ He called this the ‘fourth wall.’ It prevents actors from being distracted by the audience and allows them to divorce themselves from what they cannot control (the audience) and focus only on what they can (the scene), much as a basketball player shoots the ball without really seeing the cheering (or booing) fans behind the hoop. It creates the intimacy of what some call public solitude. The larger the audience, the more important the fourth wall.
People in open offices create a fourth wall, and their colleagues come to respect it. If someone is working intently, people don’t interrupt her. If someone starts a conversation and a colleague shoots him a look of annoyance, he won’t do it again. Especially in open spaces, fourth-wall norms spread quickly.”
At the end of the article, Seabrook describes how, one day in December, he returned to The New Yorker’s office, on the twenty-third floor of One World Trade Center, in lower Manhattan, which the staff had vacated abruptly in March.
“It was a gray, blustery afternoon. The downtown sidewalks, normally lively at lunchtime, were deserted, except for construction workers, who were engaged in adding office and residential space to a market glutted with it. Like a supertanker, the ship that is New York commercial real estate is hard to turn. It keeps plowing ahead, even though it has reached the edge of the known world.
The silent lobby was empty except for masked security. A Christmas tree twinkled at the far end. I was reminded of the riotous office-party scene in The Apartment. Remote work may increase efficiency and productivity, but a virtual office holiday party is a different thing entirely. Sitting at home, watching tipsy colleagues get flirty on a screen could bankrupt one’s social capital.
The opening of King Vidor’s silent film The Crowd, from 1928, shows us the busy New York harbor, followed by the streets and sidewalks of midtown, teeming with people and traffic. Then the camera swoops in through a high window, and glides over a sea of identical desks in a vast, factory-style open office, until it stops at a single desk with a name engraved on a small metal plaque — John Sims, the film’s Everyman hero. In the ninety-second sequence, the crowded city has shrunk in scale, becoming only as big as one man at his desk.
As far as I could tell, I was the only soul in our Gensler-designed office. Post-it reminders from March were curling at the edges. The silence felt oppressive.
Following the new one-way directional signage, I eventually came to my desk. I booted up my virtual desktop, thinking I might take advantage of the rare quiet and privacy to actually do some work in the office. But I couldn’t concentrate. I missed my colleagues. Whether walled, open, or cloud-based, an office is about the people who work there. Without the people, the office is an empty shell.”
Life’s stories
How you arrange the plot points of your life into a narrative can shape who you are — and is a fundamental part of being human, Julie Beck writes in Life’s Stories.
“In the realm of narrative psychology, a person’s life story is not a Wikipedia biography of the facts and events of a life, but rather the way a person integrates those facts and events internally — picks them apart and weaves them back together to make meaning. This narrative becomes a form of identity, in which the things someone chooses to include in the story, and the way she tells it, can both reflect and shape who she is. A life story doesn’t just say what happened, it says why it was important, what it means for who the person is, for who they’ll become, and for what happens next,” Beck writes.
“But life rarely follows the logical progression that most stories — good stories — do, where the clues come together, guns left on mantles go off at the appropriate moments, the climax comes in the third act. So narrative seems like an incongruous framing method for life’s chaos, until you remember where stories came from in the first place. Ultimately, the only material we’ve ever had to make stories out of is our own imagination, and life itself.
Storytelling, then — fictional or nonfictional, realistic or embellished with dragons — is a way of making sense of the world around us.
‘Life is incredibly complex, there are lots of things going on in our environment and in our lives at all times, and in order to hold onto our experience, we need to make meaning out of it,’ [Jonathan Adler] says. ‘The way we do that is by structuring our lives into stories.’”
“It’s a dizzying problem: People use stories to make sense of life, but how much do those stories reflect life’s realities? Even allowing for the fact that people are capable of complex Joyce-ian storytelling, biases, personality differences, or emotions can lead different people to see the same event differently. And considering how susceptible humans are to false memories, who’s to say that the plot points in someone’s life story really happened, or happened the way she thought they did, or really caused the effects she saw from them?
[Monisha Pasupathi] is not convinced that it matters that much whether life stories are perfectly accurate. A lot of false memory research has to do with eyewitness testimony, where it matters a whole lot whether a person is telling a story precisely as it happened. But for narrative-psychology researchers, ‘What really matters isn’t so much whether it’s true in the forensic sense, in the legal sense,’ she says. ‘What really matters is whether people are making something meaningful and coherent out of what happened. Any creation of a narrative is a bit of a lie. And some lies have enough truth.’
[…]
So what to do, then, with all the things that don’t fit tidily? There is evidence that finding some ‘unity’ in your narrative identity is better, psychologically, than not finding it. And it probably is easier to just drop those things as you pull patterns from the chaos, though it may take some readjusting.
But Pasupathi rejects that. ‘I would want to see people do a good job of not trying to leave stuff out because they can’t make it fit,’ she says. ‘We’re not trying to make pieces of your life go away.’
And so even with the dead ends and wrong turns, people can’t stop themselves. ‘We try to predict the future all the time,’ Pasupathi says. She speculates that the reason there’s foreshadowing in fiction in the first place is because of this human tendency. The uncertainty of the future makes people uncomfortable, and stories are a way to deal with that.”
And also this…
“Susan Stebbing’s Thinking to Some Purpose had a big aim: giving everybody tools to think clearly for themselves,” Peter West writes in Pause. Reflect. Think.
“[F]irst published in 1939 in the Penguin ‘Pelican’ books series […], this little book, which could easily be slipped into a pocket and read on the train, in a lunch hour, or at a bus stop, was pitched at the intelligent general reader. In Thinking to Some Purpose, Stebbing took on the task of showing the relevance of logic to ordinary life, and she did so with a sense of urgency, well aware of the gathering storm clouds over Europe.”
According to West, “Thinking to Some Purpose is an important philosophical text for more than just historical reasons. Now, more than ever, philosophers are trying to find ways to promote their skills and ideas in the world outside university departments.”
But in a recent post, the philosopher Timothy Williamson “draws a distinction between popular philosophy and populist philosophy. Williamson argues that while the democratisation of knowledge in general should be encouraged, it should nonetheless be up to professional, academic philosophers to find ways to communicate their research and ideas to a public audience. For Williamson, philosophy is not something we can all do equally well; like any other science, it’s something that one has to be trained to do, since it involves adopting highly sophisticated research methods and familiarising oneself with a considerable amount of both historical and contemporary literature. Good popular philosophy, Williamson argues, is just like good popular science. It occurs when a specialist in the field finds a way to communicate their findings to non-specialists in an engaging and informative manner. If Williamson is right, a good popular philosopher is to philosophy what Bill Nye is to science.”
“This is a very different model of public philosophy to the approach that Stebbing adopts in Thinking to Some Purpose. According to her, one role of philosophy is to help us think clearly. This requires not only having the relevant information in front of us, but also knowing what to do with it. On Stebbing’s model of public philosophy, the aim is to train a public audience to develop practical thinking skills that are applicable in a range of contexts. This requires more than just a transfer of knowledge from an expert who has carried out the prerequisite research.
On this model, public philosophy is a two-way street. It doesn’t resemble a traditional university setting, where a lecturer delivers knowledge to a passive audience. Instead, it requires an audience that’s proactive and eager to acquire knowledge in the right kind of way. Stebbing explains:
‘An educator has two main objects: to impart information and to create those mental habits that will enable his students, or pupils, to seek knowledge and to acquire the ability to form their own independent judgment based upon rational grounds.’
Stebbing thus both advocates and practises what we could label a ‘skills and training’ approach to public philosophy; one that requires its audience (ie, the public) to actively engage rather than passively receive information, and in which the audience is equipped with learning tools (ways of thinking) that are applicable beyond any specific domain of philosophy. Thinking to Some Purpose is primarily focused on public discourse surrounding British politics in the 1930s. Yet the obstacles to clear thinking and instructions on how to overcome it identified throughout the text ought, if Stebbing is right, to be applicable in any domain of public discourse.
A lot of public philosophy today resembles the approach outlined by Williamson where knowledge is transferred from an expert to an inexpert audience. There are plenty of digestible books and accessible podcasts that set out to make the reader aware of what philosophers think (or thought) and why. There are also more recent examples of texts that, like Thinking to Some Purpose, put the emphasis on critical thinking. However, Stebbing’s book is nonetheless somewhat unique in having been written as an in-the-moment response to threats to individual liberty that she saw growing in the world around her. For Stebbing, clear thinking was the solution to a particular problem for a group of people living at a particular time and place (1930s Britain).
Stebbing’s claims will, I suspect, raise a few eyebrows. Is it really the place of trained logicians to step in and adjudicate public and political debates? Can the tools of philosophy really be instilled, successfully, in a short Pelican softback? Does Stebbing actually follow her own advice? After all, she reveals her personal political allegiances (which are loosely anti-aristocracy and against a privately owned press) via the examples she appeals to over the course of the text. If Stebbing herself can’t avoid having her thought restricted by bias and prejudices, what hope is there for the rest of us?
Despite these possible misgivings, it’s still true that, at a time when the arts and humanities, including individual philosophy departments, are under institutional and political pressure to justify their continued existence, the model of public engagement offered by Stebbing’s Thinking to Some Purpose is worth some consideration. I’m not suggesting that a ‘transfer of knowledge’ approach should be replaced by a ‘skills and training’ one. Studying and contributing to philosophy can be an end in itself. But in an era of fake news and 24-hour news cycles, if philosophers are also able to help us pause, reflect and think clearly, regardless of the subject matter at hand, then that’s surely a good thing for them to do.”
“[I]f we can manage to get outside of our usual thinking, if we can rise to a truly mind-bending view of the cosmos, there’s another way to think of existence,” Alan Lightman argues in Is Life Special Just Because It’s Rare?
“In our extraordinarily entitled position of being not only living matter but conscious matter, we are the cosmic ‘observers.’ We are uniquely aware of ourselves and the cosmos around us. We can watch and record. We are the only mechanism by which the universe can comment on itself. All the rest, all those other grains of sand on the desert, are dumb, lifeless matter.
Of course, the universe does not need to comment on itself. A universe with no living matter at all could function without any trouble — mindlessly following the conservation of energy and the principle of cause and effect and the other laws of physics. A universe does not need minds, or any living matter at all. (Indeed, in the recent ‘multiverse’ hypothesis endorsed by many physicists, the vast majority of universes are totally lifeless.) But in this writer’s opinion, a universe without comment is a universe without meaning. What does it mean to say that a waterfall, or a mountain, is beautiful? The concept of beauty, and indeed all concepts of value and meaning, require observers. Without a mind to observe it, a waterfall is only a waterfall, a mountain is only a mountain. It is we conscious matter, the rarest of all forms of matter, that can take stock and record and announce this cosmic panorama of existence before us.”
“I realize that there is a certain amount of circularity in the above comments. For meaning is relevant, perhaps, only in the context of minds and intelligence. If the minds don’t exist, then neither does meaning. However, the fact is that we do exist. And we have minds. We have thoughts. The physicists may contemplate billions of self-consistent universes that do not have planets or stars or living material, but we should not neglect our own modest universe and the fact of our own existence. And even though I have argued that our bodies and brains are nothing more than material atoms and molecules, we have created our own cosmos of meaning. We make societies. We create values. We make cities. We make science and art. And we have done so as far back as recorded history.
In his book The Mysterious Flame (1999), the British philosopher Colin McGinn argues that it is impossible to understand the phenomenon of consciousness because we cannot get outside of our minds to discuss it. We are inescapably trapped within the network of neurons whose mysterious experience we are attempting to analyze. Likewise, I would argue that we are imprisoned within our own cosmos of meaning. We cannot imagine a universe without meaning. We are not talking necessarily about some grand cosmic meaning, or a divine meaning bestowed by God, or even a lasting, eternal meaning. But just the simple, particular meaning of everyday events, fleeting events like the momentary play of light on a lake, or the birth of a child. For better or for worse, meaning is part of the way we exist in the world.
And given our existence, our universe must have meaning, big and small meanings. I have not met any of the life forms living out there in the vast cosmos beyond Earth. But I would be astonished if some of them were not intelligent. And I would be further astonished if those intelligences were not, like us, making science and art and attempting to take stock and record this cosmic panorama of existence. We share with those other beings not the mysterious, transcendent essence of vitalism, but the highly improbable fact of being alive.”
The act of worldmaking or ‘worldbuilding’ — the process of designing and describing fictional, future, or alternate worlds, societies, and cities — “is perhaps most profoundly instrumental as a tool to create collective visions, designs, or strategies for addressing the future of our planet. Diverse teams of creator-participants can assimilate contributions from a broad range of disciplines and genres including architecture and urban planning but also the sciences, information technology and programming, science fiction, gaming, industrial design, critical theory, and more,” Ryan Madson writes in Worldbuilding Forever — Bold Ideas for Our Collective Futures.
“Historical precedents featured in The World as an Architectural Project are primarily the work of individual (male) authors, while a handful of projects are collaborative and include women and non-Western voices. In the book’s afterword, [the architect and the book’s co-author Hashim Sarkis] reflects on the necessity of multiple viewpoints in worldmaking: ‘A main premise of pluralism is that there should no longer be one source from which to seek guidance about how to live and how to organize the world. Multiple incomplete viewpoints are replacing the singularity and comprehensiveness of an ideological position. The main political questions are now located in a variety of areas outside of politics, architects included.’”
“Not everyone is a futurist, designer, inventor, scientist, or science fiction novelist. But everyone can contribute to shaping a vision. Those who possess useful tools can help to empower others, to give contours and form to a shared vision, to connect the dots from future worlds back to our present reality via policies, prototypes, narratives, and representations.
A truly collaborative approach to worldbuilding might yield unexpected results. Contradictions and complexities inherent in co-creation could more accurately reflect the imperfect and lived-in world(s) of the present moment. Through the prism of pluralism, internal inconsistencies and information gaps become assets rather than flaws — contradiction as an opportunity for reflection and reconciliation. A richly conceived social milieu for a future metropolis, for example, should be expected to accommodate vastly different sub-cultures and group identities, political and religious views, aesthetic preferences, and so forth. Collaborative worldbuilding thus encourages difference, tolerance, and dialectical exchange.
Worldbuilding allows participants to speculate about future scenarios and alternative worlds of varying scales and scope — cities, regions, nations, continents, political and economic systems, environmental conditions, outer space, extraterrestrial worlds, and others yet to be imagined. Sarkis’ research into world-making at planetary and territorial scales provides readers with numerous precedents that point to new possibilities. ‘Worldmaking is different today,’ concludes Sarkis. ‘The crucial challenge that stands before us is no longer the incomprehensibility of the scale, but rather the inhumanity of the global and how we need to imagine it otherwise, to question the boundaries that still divide it, and to reduce its pervasive inequalities. […] Our optimism no longer needs to envision futuristic scenarios; it needs to intervene critically upon the futures that are being deployed in the present.’”
Sue Flood’s photograph ‘North Pole underwater’ is this year’s climate change category winner in the Royal Photographic Society science photographer of the year competition.
“A signpost depicting the geographic north pole at 90 degrees north latitude placed on sea ice largely covered with water. Each year the ice cover over the Arctic declines, a direct result of changing global climates.”
“We live in such a measuring society, people tend to put a person in a box they can put on their mental shelf. People think of me as a cellist because they can see my performances and take my measure as a musician. I think of my life as a musician as only the tip of an iceberg. That is only the audible part of my existence. Underneath the water is the life I’m leading, the thoughts I’m thinking and the emotions that well up in me.
We all get into trouble if we think the universe only exists of the matter that we can see and measure, and not the anti-matter that is the counterpart that holds it all together.
Michelangelo famously said, ‘I liberate the statue from the marble.’ Similarly, my music emerges from the life all around me and the world we all share together. One is the condition of the other.” — Yo-Yo Ma, Behind The Cello