Random finds (2016, week 40) — On design thinking, the job of a milkshake, and our lost faith in reason

Mark Storm
12 min read · Oct 7, 2016


Maison à Bordeaux, by Rem Koolhaas / OMA (1998) (Photograph: Hans Werlemann, courtesy OMA).

Every Friday, I run through my tweets to select a few observations and insights that have kept me thinking.

On design thinking as a solution to everything

“The next generation will need to be more and more comfortable with problems of dizzying complexity.” True, I immediately thought when I read this on FastCoDesign. But then it said, “And design thinking can teach them that.” Can it, really?

In Teaching Kids Design Thinking, So They Can Solve The World’s Biggest Problems, Trung Le writes that “our children must master systems-thinking to envision multiple methods for addressing complex challenges like renewable energy, world hunger, climate change, and ultimately, the design of a better world.”

Le, who is a principal education designer at Cannon Design, is absolutely right when he argues we can only address today’s complex issues — these ‘wicked problems’ — from a holistic, non-linear perspective. And there’s also nothing wrong with his comment that children, and, may I add, adults too, must “possess the compassion to recognize the rising human population and create a world that is inclusive, rather than exclusive.” But to me, seeing design thinking as the solution (to everything) sounds rather naive.

Le tells us about a Prototype Design Camp for students from public and private schools, and a career-technical high school. They had to “collaborate in an intense design challenge to address real world problems.” Mentors worked with mixed groups of students, allowing them to learn through behavior modeling and collaboration rather than information consumption. According to Le, “the results were a creative array of news networks, school designs, and student movements, but the most compelling outcome was the student experience itself. Reflections at the end of the conference from students included tremendous gratitude, a deep interest in the design process, and most importantly, a motivation to thoroughly create change.”

Two of the 30 high school students from 14 different schools in Ohio who “trekked their ways through the snow and ice to participate in the first Prototype Design Camp.”

How can you possibly argue against this? Of course, I haven’t seen the actual results of this Prototype Design Camp. Maybe they did solve one of our many ‘wicked problems.’ But I very much doubt it. And the reason for this is not that I believe design thinking isn’t any good. It’s a great framework for pivoting your way from idea to solution. But more importantly, it’s a way of thinking — of looking at the world from different perspectives. Unfortunately, when people talk about design thinking, they mostly refer to the doing part. That is, at least, my experience with the many companies and organizations I have helped, and sometimes tried to help to no avail, to get better at innovation.

Too often, I have seen people go through the motions of design thinking, only to end up with ‘more of the same.’ Design thinking can only fully work when done by people with an unstoppable curiosity and eagerness to learn. By people who are willing and able to challenge their assumptions and change their beliefs. By people who have “a particular attention to what is neglected,” like the Dutch architect Rem Koolhaas. By people who “think like contrarians,” such as designer, artist and architect Sam Stubblefield. Of course, you could argue that all this, and more, is at the heart of design thinking. In theory, probably yes, but in practice, this is hardly the case. Besides, can we really design think ourselves out of climate change?

Red Bungee Twist, by Samuel Stubblefield.

Interestingly, Tim Brown, the CEO of innovation and design firm IDEO (and probably the one to ‘blame’ single-handedly for the design thinking rage), wondered in Harvard Business Review how design thinking can still be a competitive advantage when so many companies, including the world’s most valuable one, Apple, place design at the center of everything they do.

“Now that design thinking is everywhere,” Brown writes, “it’s tempting to simply declare it dead — to ordain something new in its place. It’s a methodology always in pursuit of unforeseen innovation, so reinventing itself might seem like the smart way forward. But in practice, design thinking is a set of tools that can grow old with us. And I’d argue that in order to create sustained competitive advantage, businesses must be not just practitioners, but masters of the art. […] Getting to that kind of mastery is our challenge for the next decade.”

I’m not tempted to say design thinking is dead, or should be declared so as of today. On the contrary. I’m merely saying we shouldn’t expect miracles from it. At least, not on its own. If schools in the USA and Europe, and society as a whole, continue to sidestep the humanities and liberal arts in favor of STEM education (science, technology, engineering and mathematics), we might lose what is most important. As Steve Jobs once declared: “It’s in Apple’s DNA that technology alone is not enough — that it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our hearts sing.” He didn’t mention design thinking, though.

On milkshakes and jobs to be done

Both The Financial Times and The Wall Street Journal wrote about Clayton Christensen’s latest book, Competing Against Luck, co-authored by Taddy Hall, Karen Dillon and David S. Duncan. In it, they expand on Christensen’s ‘jobs to be done’ theory, which he has been teaching, and selling through consulting firms he founded, for almost as long as disruptive innovation. But the book reveals interesting links between the two ideas, and with the 2014 controversy (see Disrupt no more in an earlier edition of Random finds).

Describing the genesis of ‘jobs to be done,’ Christensen writes of a meeting in the 1990s, when he realised he had been focusing on understanding why great companies fail, without actually giving the reverse problem — how do successful companies know how to grow? — much thought.

“Jobs theory aims to explain why customers ‘hire’ particular products and services,” Andrew Hill writes in Clayton Christensen moves on from the dissing of disruption. “The classic illustration is the milkshake dilemma: a fast-food chain wanted to know how to sell more milkshakes and spent months asking customers how it should improve the product, to no avail. Only when the company started asking why customers bought the drink did they work out that, in the morning, commuters purchased it as the ideal accompaniment for a long and boring drive to work. In the afternoon, though, it did a different job, for parents treating their children. By identifying the different jobs for which customers ‘hired’ the shake — and the different products against which it competed — the chain gained a better idea of how to develop it. Rather than making a series of hit-and-miss bets on innovation, it was able to ‘compete against luck’ and introduce new products that were more likely to hit the target market.”

Clayton Christensen explaining the milkshake’s job.

Despite the familiarity of the milkshake dilemma, Christensen’s ‘jobs to be done’ remains a slightly awkward concept. That may explain why it has not yet gained the high profile of the more easily absorbed disruptive innovation.
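
To make the concept a bit more concrete, here is a minimal sketch in Python of the shift in perspective the theory asks for. The purchase log and its contexts are entirely made up for illustration; none of it comes from Christensen’s research. The point is simply that you group purchases by the circumstance in which the product was ‘hired,’ rather than by its attributes.

```python
from collections import Counter

# Hypothetical purchase log: each record notes when the milkshake was bought
# and the circumstance the customer described, not the product's attributes.
purchases = [
    {"time": "07:40", "context": "long, boring commute"},
    {"time": "08:05", "context": "long, boring commute"},
    {"time": "15:30", "context": "treat for the kids"},
    {"time": "16:10", "context": "treat for the kids"},
    {"time": "07:55", "context": "long, boring commute"},
]

# Product-centric question: "How should we improve the milkshake?"
# Job-centric question: "What job was the milkshake hired to do, and when?"
jobs = Counter((p["time"] < "12:00", p["context"]) for p in purchases)

for (is_morning, context), count in jobs.items():
    daypart = "morning" if is_morning else "afternoon"
    print(f"{count}x hired in the {daypart} for: {context}")
```

Once purchases are grouped like this, the competitive set changes with the job, which is exactly the point Hill describes: in the morning the shake competes with whatever else makes a dull commute bearable; in the afternoon, with other small treats for children.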

Competing Against Luck “quotes Harvard marketing professor Theodore Levitt’s insight that ‘people don’t want to buy a quarter-inch drill. They want a quarter-inch hole.’ It points out, though, that managers quickly lose sight of the context in which customers buy what they produce. ‘The job to be done is the organizing unit in a start-up,’ the book says. But as companies grow, unnecessary bureaucracy builds up and managers define themselves by the processes they oversee. A proliferation of noisy information distracts them, amplified by new analytical tools that ‘embed all kinds of false rigour’ into the process.”

Competing Against Luck includes a section — “When the theory is ‘wrong’” — that invites readers to send Christensen and his team anomalous cases. The chapter may as well be dedicated to Jill Lepore.

According to Christensen, the beauty of this theory is that it makes complex situations far easier to understand and thus to solve. “Easier, but not necessarily easy. One reason disruptive innovation took off was that it sounded simple,” Hill writes. “In The Innovator’s Dilemma, Prof Christensen described ‘the four laws’ of disruptive technology, but some fellow academics point out the theory is only one explanation of change — and far from being an immutable law.”

In an interview with Forbes’ Susan Adams, Clayton Christensen talks about what he got wrong about disruptive innovation. When asked if he had changed his mind about Uber not being a disruptive company, Christensen acknowledges he has changed some ideas as he has learned more.

“Uber came in not at the low end of the market where disruption usually comes from,” Christensen explains, “but with a price that was competitive or even higher than taxis. But it had a business model that was almost impossible for taxis to respond to. Taxis have fixed costs and it’s an asset intensive business. They own the taxis and the medallions. They have to have taxis on the road 24/7 in order to get the return they need to be profitable. Uber comes in with a very different business model. They actually don’t have assets because they don’t own the cars and they don’t need medallions. Taxis can’t adopt the Uber model. Uber helped me realize that it isn’t that being at the bottom of the market is the causal mechanism, but that it’s correlated with a business model that is unattractive to its competitor. So yes, it is disruptive.” This doesn’t guarantee Uber’s success, but it helps us understand why taxis can’t go up against them.
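
A rough, back-of-the-envelope sketch of that cost-structure argument, with all numbers invented purely for illustration (they are not Christensen’s and not real industry figures), shows why utilization matters so much more to the asset-heavy player:

```python
# Toy comparison of an asset-heavy taxi operator and an asset-light platform.
# All figures are illustrative assumptions, not real industry data.

def taxi_margin(rides_per_day: int, fare: float = 15.0) -> float:
    """Owner of car + medallion: large fixed daily cost regardless of demand."""
    fixed_cost_per_day = 300.0   # assumed car, medallion financing, insurance
    variable_cost = 4.0          # assumed fuel and maintenance per ride
    return rides_per_day * (fare - variable_cost) - fixed_cost_per_day

def platform_margin(rides_per_day: int, fare: float = 15.0) -> float:
    """Platform takes a cut; the driver bears the asset and its fixed costs."""
    take_rate = 0.25             # assumed commission per ride
    return rides_per_day * fare * take_rate

for rides in (10, 20, 40):
    print(f"{rides:>2} rides/day  taxi: {taxi_margin(rides):7.2f}  "
          f"platform: {platform_margin(rides):7.2f}")
```

With these assumed figures, the taxi operator loses money at low utilization and only climbs out of the hole by keeping cars on the road around the clock, while the platform’s margin simply scales with volume. That asymmetry, not the price point, is what makes the model so hard for incumbents to copy.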

A bit more …

“We have to be careful to distinguish between the irrational and non-rational,” philosopher and author Julian Baggini says in an interview with The Irish Times, titled Is humanity losing faith in reason? “Many things are non-rational in that they are not ultimately rooted in reason. Love and empathy are two examples. But they are not irrational because they are not contrary to reason. Rationality doesn’t tell us whether it is good to love or not, to empathise or not. Reason doesn’t tell us what to do at all.”

“Everyone is entitled to his own opinion, but not to his own facts.” — Daniel Patrick Moynihan (1927–2003), US Democratic senator and sociologist

In response to the question in what way Baggini thinks reason is under fire, he answers: “Let me count the ways! First and most recently, the widely documented loss of faith in experts and elites assumes that having greater knowledge and experience in thinking about issues counts for nothing and could even get in the way of a superior common sense. The brain is seen as having failed us and so the gut is trusted instead.”

People “have had enough of experts”, Leave campaigner and Conservative MP Michael Gove proclaimed during the EU referendum campaign. (Photograph: Chris Ratcliffe/Bloomberg)

More on reason in Julian Baggini’s latest book The Edge of Reason: A Rational Skeptic in an Irrational World.

“Blending lucidity and passion, Baggini shows how much richer and more varied reason is than often supposed. Ultimately, he reminds us, the outcome of our reasonings has to depend not on objective truth but on what ‘we feel compelled to accept as objective.’ Reason enforces not what but that we judge; with the heavy implication that our judgement holds not just for ourselves, but everyone.” (Jane O’Grady, the co-founder of the London School of Philosophy, in a review in The Financial Times.)

“Contradiction and hypocrisy have always hovered over the utopian project, shadowing its promise of a better world with the sordid realities of human nature,” Akash Kapur, the author of India Becoming, writes in The Return of The Utopians. “Plato, in the Republic, perhaps the earliest utopian text, outlined a form of eugenics that would have been right at home in the Third Reich — which was itself a form of utopia, as were the Gulag of Soviet Communism, the killing fields of Pol Pot’s Cambodia, and, more recently, the blood-and-sand caliphate of ISIS. ‘There is a tyranny in the womb of every utopia,’ the French economist and futurist Bertrand de Jouvenel wrote.”

Not long ago, utopianism was a mark of naïveté or extremism; now pragmatists are denigrated for complacent cynicism. (Illustration: Golden Cosmos)

“The zealous conviction of utopians that the present must be erased, rather than built upon, fuels their denunciations of pragmatic incrementalism. It leads them to belittle the energies of reformism, and to obscure the truth that change and reform do occur, even if in a halting and often unfathomable manner. Few, if any, major improvements in recent decades — the spread of democracy, say, or the halving of extreme poverty, or the expansion of women’s and L.G.B.T. rights — can be attributed to utopianism. (In fact, the first of these was helped along by the collapse of the twentieth century’s most prominent utopian project.) Aiming not at perfection but at improvement, accepting the vagaries of human nature as a premise that policy must accommodate, rather than wish away, meliorism forces a longer, more calibrated approach. It is not a path for the impatient, but it has the verdict of history on its side. The utopian has a better story to tell; the meliorist leaves us with a better world.”

“The infinite amount of knowledge and connections that technology today brings to our disposal should allow us to set up a basis for an enlightened Renaissance, and instead we are diving deep into the darkest aspects of a new Middle Ages, in which data holds more value than work,” says Thierry de Baillon in The New Middle Ages.

“Where the preindustrial company and the industrial firm could boast about a symmetrical relationship between them and their employees (share of production outcome against means to produce in the former case, production against wages in the latter), a relationship whose terms and mutual obligations were formalized in a contract, the company-as-a-platform doesn’t provide any reciprocal commitment, access to market having no guarantee value. This relationship, based upon divergent interests, is in fact quite close to the one that existed during the Middle Ages between feudal lords and serfs, when the lord rented his land in return for a part of the harvest, while the peasant had no guarantee that the land would give him at least the means to survive.”

“Promoting science and technology education to the exclusion of the humanities may seem like a good idea, but it is deeply misguided. Scientific American has always been an ardent supporter of teaching STEM: science, technology, engineering and mathematics. But studying the interaction of genes or engaging in a graduate-level project to develop software for self-driving cars should not edge out majoring in the classics or art history,” the editors of Scientific American write in STEM Education Is Vital — but Not at the Expense of the Humanities.

“The need to teach both music theory and string theory is a necessity for the U.S. economy to continue as the preeminent leader in technological innovation. The unparalleled dynamism of Silicon Valley and Hollywood requires intimate ties that unite what scientist and novelist C. P. Snow called the ‘two cultures’ of the arts and sciences.”

Illustration: Nicolas Ogonosky.

“Steve Jobs, who reigned for decades as a tech hero, was neither a coder nor a hardware engineer. He stood out among the tech elite because he brought an artistic sensibility to the redesign of clunky mobile phones and desktop computers. Jobs once declared: ‘It’s in Apple’s DNA that technology alone is not enough — that it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our hearts sing.’”

If engineers can figure out what makes a human brain run so well, and on so little energy relative to its processing power, they might be able to build a computer that does the same. But now, according to Adrienne LaFrance in The Human Remembering Machine, “a new mathematical model of memory could accelerate the quest to build super-powered, brain-inspired hardware systems.”

Stefano Fusi, a theoretical neuroscientist at Columbia University’s Zuckerman Institute, and his colleague, Marcus Benna, an associate research scientist at the institute, have “created a mathematical model that illustrates how the human brain processes and stores new and old memories, given the biological constraints of the human brain. Their findings, published today in a paper in the journal Nature Neuroscience, demonstrate how synapses in the human brain simultaneously form new memories while protecting old ones — and how older memories can help slow the decay of newer ones. Their model shows that over time, as a person stores enough long-term memories and accumulates enough knowledge, human memory storage becomes more stable. At the same time, the plasticity of the brain diminishes. This change helps explain why babies and children are able to learn so much so quickly: Their brains are highly plastic but not yet very stable.”

“The model allows for a ‘much more efficient way in terms of energy,’ Fusi says, ‘so if you want to integrate this [artificial] brain technology — into your mobile phone, so your mobile phone can drive your car for you, you’re probably going to need this kind of computer.’”
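
For readers who want a feel for the plasticity-versus-stability trade-off, here is a toy simulation in Python. It is emphatically not the Benna-Fusi model itself, just a sketch of the underlying intuition: a memory stored only in a fast, highly plastic variable fades quickly, while one whose strength is spread across components with many different timescales fades far more gently.

```python
import numpy as np

# Toy illustration of the plasticity/stability trade-off described above.
# This is NOT the published model, only a sketch of the intuition: a memory
# trace held in a single fast variable decays quickly, while a trace spread
# over components with many different decay timescales decays far more slowly.

timescales = np.array([1.0, 10.0, 100.0, 1000.0])   # assumed decay constants
weights = np.ones_like(timescales) / len(timescales)  # equal share per component

def trace(t, taus, w):
    """Memory strength t steps after storage: a mixture of exponential decays."""
    return float(np.sum(w * np.exp(-t / taus)))

for t in (1, 10, 100, 1000):
    fast_only = trace(t, timescales[:1], np.array([1.0]))
    multi = trace(t, timescales, weights)
    print(f"t={t:>4}  fast-only: {fast_only:.4f}  multi-timescale: {multi:.4f}")
```

A mixture of exponentials across many timescales is a standard way to approximate the slow, graceful forgetting the paper describes; the actual model goes further by coupling the variables so that the slow ones also protect the fast ones, which this sketch deliberately leaves out.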

Illustration: Stuart McMillen.

“We are called to be architects of the future, not its victims.” — Richard Buckminster Fuller


Written by Mark Storm

Helping people in leadership positions flourish — with wisdom and clarity of thought