Random finds (2018, week 7) — On serendipity beyond ‘dumb luck,’ 21st-century skills without knowledge, and the consumer/citizen contrast
“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne
Random finds is a weekly curation of my tweets and, as such, a reflection of my curiosity.
Serendipity beyond ‘dumb luck’
“If you do not expect the unexpected you will not find it, for it is not to be reached by search or trail.” — Heraclitus (c. 535 — c. 475 BC)
‘Serendipity’ was coined in 1754 by the English art historian, man of letters, antiquarian and politician Horace Walpole, 4th Earl of Orford (1717–1797). In a letter to a friend, he explained an unexpected discovery he had made by reference to a Persian fairy tale about three princes from the Isle of Serendip (Persian and Urdu for Sri Lanka) who possess superpowers of observation. Walpole suggested that this old tale contained a crucial idea about human genius: “As their highnesses travelled, they were always making discoveries, by accident and sagacity, of things which they were not in quest of.” In his letter, he proposed a new word to describe this princely talent for detective work — ‘serendipity’.
But despite Walpole’s original meaning of the word ‘serendipity,’ today we often see it as something like dumb luck — a random stroke of good fortune. “Scientific folklore is full of tales of accidental discovery, from the stray Petri dish that led Alexander Fleming to discover penicillin to Wilhelm Röntgen’s chance detection of X-rays while tinkering with a cathode-ray tube,” says an editorial in Nature. But is there actually any truth in this claim?
The answer to this question has long relied on anecdotal evidence. Studies rarely try to quantify how much scientific progress was truly serendipitous, what it cost, or the circumstances in which it emerged. Research has focused on serendipity in science as a philosophical concept. The European Research Council, however, aims to change that and “has given biochemist-turned-social-scientist Ohid Yaqub a sizeable €1.4-million (US$1.7-million) grant to gather evidence on the role of serendipity in science. Yaqub argues that he has found a way to do so.”
By classifying it into four basic types, Yaqub has defined serendipity in a way that goes beyond happy accidents. “The first type is where research in one domain leads to a discovery in another — such as when 1943 investigations into the cause of a mustard-gas explosion led to the idea of using chemotherapy to treat cancer. Another is a completely open hunt that brings about a discovery, such as with Röntgen’s X-rays. Then there are the discoveries made when a sought-for solution is reached by an unexpected path, as with the accidental discovery of how to vulcanize rubber. And some discoveries find a solution to a problem that only later emerges: shatterproof glass for car windscreens was first observed in a dropped laboratory flask.”
After studying hundreds of historical examples from the archive of Robert K. Merton, an American sociologist and co-author of The Travels and Adventures of Serendipity, Yaqub claims to have pinned down some of the mechanisms by which serendipity comes about, such as astute observation, errors and what he calls ‘controlled sloppiness,’ which lets unexpected events occur while still allowing their source to be traced. The collaborative action of networks of people can also generate serendipitous findings.
He aims to use his classification system as a framework for mining the world’s scientific grants. “By following the publications and patents that emerge from grants, he hopes to find out how often serendipity arises, and to understand its significance and nature. The hunt will start in biomedicine, but could grow to examine other disciplines.”
“If happy accidents are just as likely to occur through goal-oriented research as they are through blue-skies research — witness Roy Plunkett’s accidental discovery of non-stick Teflon while looking for non-toxic refrigerants — that could weaken the suggestion that serendipity and research-targeting are necessarily at odds with each other. […] Giving curious minds free rein to explore nature may well be the best way to generate discoveries, but there is no harm in testing that assumption. As taxpayer demands for scrutiny and accountability grow ever louder, just-so stories about Petri dishes and non-stick frying pans, however compelling, no longer make a convincing case.”
Sanda Erdelez, an information scientist at the University of Missouri, also wonders how we can cultivate the art of finding what we’re not seeking. According to her, serendipity is something people do, says Pagan Kennedy in How to Cultivate the Art of Serendipity (2016).
“In the mid-1990s, [Erdelez] began a study of about 100 people to find out how they created their own serendipity, or failed to do so. Her research data showed that the subjects fell into three distinct groups. First, people who never encounter serendipity; the ones with a tight focus who tend to stick to their to-do list. She aptly calls these ‘non-encounterers.’ A second group, the ‘occasional encounterers,’ stumble into moments of serendipity every now and then. But ‘super-encounterers’ reported that happy surprises popped up wherever they looked,” Kennedy writes. You can become a super-encounterer, in part because you believe that you are one. It helps to assume you possess special powers of observation, like an invisible set of antennas, that will lead you to clues, says Erdelez.
But how many big ideas actually emerge from laboratory spills, crashes, failed experiments and blind stabs?
A 2005 “survey of patent holders […] found that an incredible 50 percent of patents resulted from what could be described as a serendipitous process. Thousands of survey respondents reported that their idea evolved when they were working on an unrelated project — and often when they weren’t even trying to invent anything. This is why we need to know far more about the habits that transform a mistake into a breakthrough.”
According to Greg Lindsay, a Senior Fellow at the Work Futures Institute, we still have no idea how to pursue what former U.S. Defense Secretary Donald Rumsfeld famously described as ‘unknown unknowns.’ “It’s not enough to ask where good ideas come from — we need to rethink how we go about finding them,” Lindsay writes in How To Engineer Serendipity.
Serendipity isn’t magic or about happy accidents. “It’s a state of mind and a property of social networks — which means it can be measured, analyzed, and engineered.” It’s also a “bountiful source of good ideas. Study after study has shown how chance collaborations often trump top-down organizations when it comes to research and innovation. The challenge is first recognizing the circumstances of these encounters, then replicating and enhancing them.”
If there is indeed a hidden order to how we find new ideas and people, instead of ‘dumb luck,’ it would be possible to plan for serendipity. But how?
The first step is having what Louis Pasteur famously called a ‘prepared mind’ — a mind that’s open to the unexpected, to thinking in metaphors, to holding back and not jumping to conclusions, and to resisting walls between domains and disciplines.
The second step is to engineer serendipity into organizations. “For all the talk of failing faster and disruptive innovation,” Lindsay writes, “an overwhelming majority of companies are still structured along predictable lines. Even Google cancelled ‘20 percent time,’ its celebrated policy of granting engineers one day a week for personal projects. To capture serendipity, the company is looking at space instead of time.” In the design of Google’s new campus, everyone is just a short ‘casual collision’ away.
The third and final piece is the network. “Google has made its ambitions clear. As far as [former] chairman Eric Schmidt is concerned, the future of search is a ‘serendipity engine’ answering questions you never thought to ask,” Lindsay explains. “The greatest threats to serendipity are our ingrained biases and cognitive limits — we intrinsically want more known knowns, not unknown unknowns. This is the bias a startup named Ayasdi is striving to eliminate in Big Data. Rather than asking questions, its software renders its analysis as a network map, revealing hidden connections between tumors or terrorist cells, which CEO Gurjeet Singh calls ‘digital serendipity.’ IBM is trying something similar with Watson, tasking its fledgling artificial intelligence software with reading millions of scientific papers in hopes of finding leads no human researcher would ever have time to spot.”
When Lindsay describes this vision, there is always someone who will reply, “But that isn’t serendipity!”
“I’m never quite sure what they mean — because it isn’t random or romantic?,” he writes. “Serendipity is such a strange word; invented on a whim in 1754, it didn’t enter widespread circulation until almost two centuries later and is still notoriously difficult to translate. These days, it means practically whatever you want it to be.” [By 1958, the term ‘serendipity’ had appeared in print just 135 times; between 1958 and 2000 it was used in 57 book titles; in the 1990s it appeared in newspapers 13,000 times; and by 2001 it yielded 636,000 internet hits. Today, when you put ‘serendipity’ into a search engine, you get well over 6 million references.]
“So, I’m staking my own claim: Serendipity is the process through which we discover ‘unknown unknowns.’ Understanding it as an emergent property of social networks, instead of sheer luck, enables us to treat it as a viable strategy for organizing people and sharing ideas, rather than writing it off as magic. And that, in turn, has potentially huge ramifications for everything from how we work to how we learn to where we live by leading to a shift away from efficiency — doing the same thing over and over, only a little bit better — toward novelty and discovery.”
If you are interested in the origins and history of serendipity, I recommend The Travels and Adventures of Serendipity. Written in the 1950s by already-eminent sociologist Robert Merton and Elinor Barber, the book — though occasionally and most tantalizingly cited — was intentionally never published. This is all the more curious because it so remarkably anticipated subsequent battles over research and funding — many of which centred on the role of serendipity in science. Finally, shortly after his ninety-first birthday, following Barber’s death and preceding his own by but a little, Merton agreed to expand and publish this major work.
Also recommended is Lennart Björneborn’s paper Three key affordances for serendipity: Toward a framework connecting environmental and personal factors in serendipitous encounters, published in Journal of Documentation (Vol. 73, Issue 5, pp. 1053–1081, October 2017).
And finally, The Secret of Buckminster Fuller’s World-Changing Ideas Was Serendipity, an interview with the writer and artist Jonathon Keats. Keats is the author of You Belong to the Universe about Richard Buckminster Fuller, whose significance can be summed up in the unwieldy title he gave himself: “comprehensive anticipatory design scientist.”
“We should stop teaching knowledge”
The idea that we can stop teaching knowledge because everything can be Googled is absurd, says cultural sociologist Ruben Jacobs.
In his article for Brainwash, an initiative of Human and The School of Life, he refers to a Meet the Leader session at the World Economic Forum in Davos in January 2018. During this session, Alibaba’s flamboyant founder, Jack Ma, talked about the challenges of our time, including, of course, the future of education. According to Ma, “The knowledge-based approach of 200 years ago would fail our kids, who would never be able to compete with machines. Children should be taught soft skills like independent thinking, values and team-work. We should stop teaching knowledge.”
Apart from the hollow ‘the robots will take over everything’ rhetoric, Jacobs finds Ma’s last remark, that “We should stop teaching knowledge,” utterly bizarre and misguided. “How does Mr Ma think we can teach ‘independent thinking’ and ‘values’ without incorporating our knowledge about the history of thinking and morality?,” Jacobs wonders. “How does Mr Ma want to teach young people to think for themselves if they have no clue whatsoever about what it is they should learn to think independently? There isn’t much to learn in an ‘empty space.’”
Ma’s approach illustrates how many people, including our governments, view the role and purpose of education as nothing more than teaching skills and competencies. The transfer of knowledge is seen as something old-fashioned, out of date, in times when knowledge is abundant and can easily be ‘Googled.’ Instead, education should focus on ‘learning how to learn,’ better known as ‘21st-century skills.’
Jacobs doesn’t object to these 21st-century skills, but he feels it is fundamentally wrong to suppose they can be learned without specific knowledge. As the famous French chemist Louis Pasteur wrote, “Chance favors invention only for minds prepared for discoveries by patient study and persevering efforts.”
In Critical Thinking: Why Is It So Hard to Teach?, the cognitive psychologist Daniel Willingham notes, “After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). Thus, if you remind a student to ‘look at an issue from multiple perspectives’ often enough, he will learn that he ought to do so, but if he doesn’t know much about an issue, he can’t think about it from multiple perspectives. You can teach students maxims about how they ought to think, but without background knowledge and practice, they probably will not be able to implement the advice they memorize. Just as it makes no sense to try to teach factual content without giving students opportunities to practice using it, it also makes no sense to try to teach critical thinking devoid of factual content.”
According to Jacobs, people won’t spontaneously question the assumptions that form the basis of their thinking — they won’t look at a problem from diverse angles, nor will they challenge their knowledge of ‘reality.’ They will only do so if knowledge is transferred. “You won’t be able to understand the significance of the words in a letter written by a Jewish person in hiding if you are unfamiliar with the social context of the Second World War,” Jacobs writes. “In other words,” he adds, “you can’t teach ‘generic’ and ‘knowledge-free’ skills if these aren’t embedded within a context of domain-specific, relevant knowledge.”
So, 21st-century skills, yes, but not without 21st-century knowledge.
The consumer/citizen contrast
“This is how so many cultural shifts happen,” the British philosopher Julian Baggini writes in How we forgot the collective good — and started thinking of ourselves primarily as consumers. “Ways of thinking mutate gradually, helped by changes in vocabulary that we accept without question. So it was that ‘refugees’ became ‘asylum-seekers,’ not primarily framed as people in need but as people wanting something from us. Another such shift was the increased use of the word ‘consumer’ in the second part of the 20th century. Google’s Ngram viewer, which trawls a huge corpus of English-language texts, finds the word two-thirds more prevalent in 1980 than 1960.
For some, this is empowering. Seeing ourselves as consumers makes us more demanding. Rather than being passive recipients of public services, for example, we become active consumers of them, complaining when they are poor and demanding better. ‘Claiming our power in the form of the freedom of choice we have as Consumers has raised standards and accountability across the board,’ say Jon Alexander and Irene Ekkeshis, founders of The New Citizenship Project.”
But they also believe that “for all these gains, the dominance of the consumer mindset […] has encouraged us to focus on our individual, personal decisions rather than on the collective choices civil societies have to make together. It also fools us into thinking that everything can be solved by making better buying decisions.” But being ‘ethical consumers’ — buying fairtrade or organic foods, for example — can’t solve the huge problems of social justice, poor nutrition or unsustainable farming, Baggini argues.
“When we think of ourselves as consumers, says The New Citizenship Project, we obscure another important identity we have: citizens. Whereas consumers are atomised, autonomous decision-makers, citizens are socialised members of society. Citizens do not see their sphere of influence as limited to who they personally trade with. Consumers value independence, citizens recognise our interdependence. Consumers demand and choose, citizens participate and create. Alexander and Ekkeshis point to research that suggests that when we are primed to think of ourselves as consumers we make more selfish choices and become less trusting than when we are primed to think of ourselves as citizens.
The simplifying device of the consumer/citizen contrast is a powerful one. Once you become aware of it, it is impossible not to notice just how often we are treated like customers, and respond like them, even though as citizens we are much more than this. The ‘citizen shift’ may not be a panacea, but it’s well worth making.”
And also this …
In Reason is non-negotiable, an extract from his new book Enlightenment Now in which he extols the relevance of 18th-century thinking, the Harvard psychologist Steven Pinker argues that what the Enlightenment thinkers had in common was an insistence that we energetically apply the standard of reason to understanding our world, and not fall back on generators of delusion like faith, dogma, revelation, authority, charisma, mysticism, divination, visions, gut feelings or the hermeneutic parsing of sacred texts.
Today, many writers confuse the Enlightenment endorsement of reason with the implausible claim that humans are perfectly rational agents. But nothing could be further from historical reality, Pinker argues. “Thinkers such as Kant, Spinoza, Hobbes, Hume and Smith were inquisitive psychologists and all too aware of our irrational passions and foibles. They insisted that it was only by calling out the common sources of folly that we could hope to overcome them. The deliberate application of reason was necessary precisely because our common habits of thought are not particularly reasonable.”
The Enlightenment’s second ideal was science, or the refining of reason to understand the world. “That includes an understanding of ourselves. The Scientific Revolution was revolutionary in a way that is hard to appreciate today, now that its discoveries have become second nature to most of us,” Pinker writes.
“The need for a ‘science of man’ was a theme that tied together Enlightenment thinkers who disagreed about much else […] Their belief that there was such a thing as universal human nature, and that it could be studied scientifically, made them precocious practitioners of sciences that would be named only centuries later. […]
The idea of a universal human nature brings us to a third theme, humanism. The thinkers of the Age of Reason and the Enlightenment saw an urgent need for a secular foundation for morality […]. They laid that foundation in what we now call humanism, which privileges the wellbeing of individual men, women, and children over the glory of the tribe, race, nation or religion. It is individuals, not groups, who are ‘sentient’ — who feel pleasure and pain, fulfillment and anguish. Whether it is framed as the goal of providing the greatest happiness for the greatest number or as a categorical imperative to treat people as ends rather than means, it was the universal capacity of a person to suffer and flourish, they said, that called on our moral concern.”
“The ideal of progress also should not be confused with the 20th-century movement to re-engineer society for the convenience of technocrats and planners, which the political scientist James Scott calls Authoritarian High Modernism. […]
Rather than trying to shape human nature, the Enlightenment hope for progress was concentrated on human institutions. Human-made systems like governments, laws, schools, markets and international bodies are a natural target for the application of reason to human betterment.
In this way of thinking, government is not a divine fiat to reign, a synonym for ‘society,’ or an avatar of the national, religious or racial soul. It is a human invention, tacitly agreed to in a social contract, designed to enhance the welfare of citizens by coordinating their behaviour and discouraging selfish acts that may be tempting to every individual but leave every one worse off.”
How should we think about future progress?
“We must not sit back and wait for problems to solve themselves, nor pace the streets with a sandwich board proclaiming that the end of the world is nigh. The advances of the past are no guarantee that progress will continue; they are a reminder of what we have to lose. Progress is a gift of the ideals of the Enlightenment and will continue to the extent that we rededicate ourselves to those ideals,” Pinker writes in The Enlightenment Is Working.
“Are the ideals of the Enlightenment too tepid to engage our animal spirits? Is the conquest of disease, famine, poverty, violence and ignorance … boring? Do people need to believe in magic, a father in the sky, a strong chief to protect the tribe, myths of heroic ancestors?
I don’t think so. Secular liberal democracies are the happiest and healthiest places on earth, and the favorite destinations of people who vote with their feet. And once you appreciate that the Enlightenment project of applying knowledge and sympathy to enhance human flourishing can succeed, it’s hard to imagine anything more heroic and glorious.”
“You could call it a Very British Modernism, the process by which the fierce inventions of continental artists became gentler and kinder; whereby Mondrian’s implacable right angles became Ben Nicholson’s more negotiable compositions, whereby the artistic inventions of a merciless metropolis like 1900s Paris were soothed by Cornish seaside air, or the dismemberments practiced on the human form by the surrealists and Picasso grew less brutal in the hands of a Henry Moore,” Rowan Moore writes in Kettle’s Yard, Cambridge review — a bubble of humanity.
“Kettle’s Yard in Cambridge, the creation of the ascetic but sociable collector and curator Jim Ede (1895–1990), is one of VBM’s foremost incubators and shrines. It is discreet, internal, domestic, this row of tiny houses that was converted in the 1950s into a house for Ede and his wife, Helen, donated to the University of Cambridge in 1966 and extended in 1970 to form a gallery of modern art.” Now, Jamie Fobert Architects has completed an extension, adding a series of new exhibition and education spaces that complement the intimate and relaxed rooms of the original converted house.
“Continuity has been achieved by sensitivity to the domestic scale and calm aesthetic of the house, and by repetition of the brickwork and simple volumes of rough plaster of the existing galleries,” according to the architects.
“Every winter, a train runs across the island of Hokkaido in Japan. Known as the Okhotsk‑no‑Kaze (Wind of Okhotsk), it inspired the name of a series of snow-covered images by Shanghai-based photographer Ying Yin,” Dominic Holbrook writes in Mysterious snow scenes in Japan — in pictures.
Yin manipulates her images to blend modern photography and classical art. “There is a traditional Chinese painting skill, Liu Bai,” Yin explains. It “means you leave white space to allow others to fill it with their imaginations. I move the unnecessary details away… You can start imagining.”
“The best time [is] when you’re almost done with something, and you know it’s good, and you’re still in the process of making it. It’s not the beginning, when you don’t know. It’s not the end, when it’s over and you don’t know if you’ll ever make anything again. It’s when you’re almost at the end, it’s good, and you feel good about yourself.” — A quote from an anonymous respondent in Agony and Ecstasy in the Gig Economy: Cultivating Holding Environments for Precarious and Personalized Work Identities, by Gianpiero Petriglieri, Susan J. Ashford, and Amy Wrzesniewski