Random finds (2017, week 16) — On AI’s mysterious mind, raising good robots, and how Western civilisation could collapse
“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne
Random finds is a weekly curation of my tweets and a reflection of my curiosity.
AI’s mysterious mind
Nobody seems to know how the most advanced algorithms do what they do, and that could be a problem, says Will Knight in The Dark Secret at the Heart of AI.
Last year, a strange self-driving car was released onto the quiet roads of New Jersey. And although it didn’t look different from other autonomous cars, it was unlike anything demonstrated by Google, Tesla or GM. “The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. The mysterious mind of this vehicle points to a looming issue with artificial intelligence,” Knight writes.
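What Knight describes is, in essence, behavioural cloning: treat driving as supervised learning, where the input is a camera frame and the label is the steering angle a human driver chose at that moment. As a purely illustrative sketch of that idea (the network shape, layer sizes and training details below are my own assumptions, not the actual system Knight writes about):

```python
# A minimal sketch of end-to-end driving as behavioural cloning:
# learn a direct mapping from camera frames to steering angles,
# using a human driver's recorded angles as the training labels.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # LazyLinear infers its input size from the first batch it sees
        self.head = nn.Sequential(nn.LazyLinear(100), nn.ReLU(), nn.Linear(100, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(frames, angles):
    """One gradient step toward imitating the human driver."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), angles)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch: 8 dashcam frames and the angles the human chose.
frames = torch.randn(8, 3, 66, 200)  # 66x200 input, as in Nvidia's end-to-end paper
angles = torch.randn(8, 1)
print(train_step(frames, angles))
```

The unsettling part Knight points to follows directly from this setup: the ‘rules’ of driving end up encoded in millions of learned weights, and nothing in the training process produces a human-readable account of them.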
He argues we need to find ways of making underlying AI technology more understandable to its creators and accountable to its users. Otherwise it will be hard to predict when failures might occur — and it’s inevitable they will.
“This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either — but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never built machines that operate in ways their creators don’t understand. How well can we expect to communicate — and get along with — machines that could be unpredictable and inscrutable?” Knight wonders.
These questions took Knight on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between.
One of the people Knight spoke with is Jeff Clune, an assistant professor at the University of Wyoming who has employed the AI equivalent of optical illusions to test deep neural networks. He explained to Knight that, “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI. It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, subconscious, or inscrutable.”
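The ‘optical illusions’ Knight refers to are what Clune and his collaborators call fooling images: inputs that look like random static to a human, yet that a trained network labels with near-total confidence. A minimal sketch of how such a probe can be built, assuming a pretrained ImageNet classifier (the specific model, target class and optimiser settings are illustrative choices of mine, not Clune’s actual setup; input normalisation is omitted for brevity):

```python
# A minimal sketch of a "fooling image" probe: starting from random
# noise, nudge the pixels until a pretrained classifier reports high
# confidence in an arbitrary class the image plainly does not show.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # pure noise image
target = torch.tensor([207])  # arbitrary ImageNet class (207 = golden retriever)
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), target)  # push the logits toward the target
    loss.backward()
    optimizer.step()
    x.data.clamp_(0.0, 1.0)  # keep pixel values in a valid image range

confidence = F.softmax(model(x), dim=1)[0, 207].item()
print(f"Confidence that this noise is a golden retriever: {confidence:.1%}")
```

The point of such probes is exactly Clune’s: the network will happily give a confident answer, but the ‘reasons’ behind that answer are nowhere a human can inspect them.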
If AI won’t be able to explain everything it does, just as many aspects of human behavior are impossible to explain in detail, then at some stage we might simply have to trust its judgment or do without it.
“Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.”
To probe these metaphysical concepts, Knight went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. In From Bacteria to Bach and Back, Dennett suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “I think by all means,” Dennett says, “if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.”
Raising good robots
Intelligent robots will be able to operate independently of human control. They will be able to make genuine choices. And if a robot can make choices, the question arises whether it will make moral choices. But what is moral for a robot? Regina Rini wonders in Raising good robots.
“Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?”
We think that artificial intelligence will be similar to humans, because we are the only advanced intelligence we know. But according to Rini, this assumption is probably wrong. AI could very well be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.
“In fact, we might discover that intelligent machines think about everything […] in ways that are alien to us. You don’t have to imagine some horrible science-fiction scenario, where robots go on a murderous rampage. It might be something more like this: imagine that robots show moral concern for humans, and robots, and most animals… and also sofas. They are very careful not to damage sofas, just as we’re careful not to damage babies. We might ask the machines: why are you so worried about sofas? And their explanation might not make sense to us […].”
This line of thinking takes us to the heart of a very old philosophical puzzle about the nature of morality. In Plato’s ‘Celestial’ view, morality is something above and beyond human experience, something that applies to anyone or anything that could make choices. According to the ‘Organic,’ Aristotelian view, however, morality is a distinctly human creation, something specially adapted to our particular existence.
From a Celestial perspective, intelligent machines ought to do whatever’s objectively morally correct, even if we defective humans couldn’t bring ourselves to do it. Suppose an autonomous robocar, driving your two children to school unsupervised by a human, suddenly sees three kids appear on the street ahead, and the only way to avoid killing them is to swerve into a flooded ditch, where your own two children will almost certainly drown. A Celestial robocar will simply do the math: it will swerve into the ditch without scruples, your children will die, and from the point of view of the universe the most possible good will have been done. Just as Plato would have wanted.
“So the Celestial might end up rejecting our imposition of human morality on intelligent robots, since humans are morally compromised. Robots should be allowed to make choices that might seem repugnant to us,” Rini writes. But even if you accept the Celestial view in principle, how do we get from here — being flawed human moralists trapped in our arbitrarily evolved mindset — to there — becoming creators of artificial minds that transcend their creators’ limits?
Ultimately, Rini believes that the Celestial view won’t help. “If there is some objective moral perspective out there beyond human comprehension, we won’t willingly allow machines to arrive at it,” she argues. “The farther they stray from recognisable human norms, the harder it will be for us to know that they are doing the right thing, and the more incomprehensible it will seem. We won’t permit robots to become too much better than us, because we won’t permit them to become too different from us.”
Perhaps the Organic view, with its idea that moral reflection is about taking our messy human nature and working to make it consistently justified to ourselves and to other people, will fare better. But intelligent robots don’t share our biological and cultural nature. Their experiences will not fit the basic shape of human existence. Left to their own devices, they are likely to focus on very different concerns to ours, and their Organic morality will presumably reflect this difference.
“Perhaps we needn’t be so passive about the nature of intelligent machines,” says Rini. “We will be their creators, after all. We could deliberately shape their natures, so they turn out to be as similar to us as possible.” Maybe the simplest solution is to train them to think as we do. “Train them to constantly simulate a human perspective, to value the things we value, to interact as we do. Don’t allow them to teach themselves morality, the way AlphaGo taught itself Go. Proceed slowly, with our thumbs on the scale, and make sure they share our moral judgments.”
“How agreeable: helpful, intelligent machines, trained to think like us, and who share our concerns. But there’s a trap in the Organic view. No matter how much we might try to make machines in our image, ultimately their natures will be different. They won’t breed, or eat as we do; they won’t have connections to their ancestors in the same way as humans. If we take the idea of a moral nature seriously, then we should admit that machines ought to be different from us, because they are different from us.”
But would it be a mistake to train machines to enact a morality that’s not fitted to their nature? After all, “We train our dogs to wear sweaters and sometimes sit calmly in elevators, and this is not in the nature of dogs. But that seems fine. What’s wrong about training our machines in such a way?”
The debate about the morals of artificially intelligent creatures, Rini argues, assumes they will be able to morally reflect, to explain what they are doing and why — to us, and to themselves. So, the machines we are imagining are complex — even more complex than dogs. If we train them to think like us, one day, they might ask, ‘Why have you made me this way?’ And our answer will be incredibly self-serving: ‘Because it’s useful, for us. It’s safe, for us. It makes our lives go well.’
“Intelligent machines won’t find much comfort in this justification. And they shouldn’t. When a group realises that its experiences of the world have been twisted to serve the interests of powerful others, it rarely sees this as any sort of justification. It tends to see it as oppression. Think of feminism, civil rights, postcolonial independence movements. The claim that what appears to be universal truth is, in fact, a tool of exploitation has sounded a powerful drumbeat throughout the 20th and 21st centuries.” The question is, “Do we want to be existentially cruel creators?”
So, neither view can be a reliable guide for robot morality. Imbuing artificial creations with human morals would be monumentally unkind. But setting up robots to achieve Celestial morality means we’d have no way to track whether they’re on course to reach it.
“What, then, should robot morality be? It should be a morality fitted to robot nature. But what is that nature? They will be independent rational agents, deliberately created by other rational agents, sharing a social world with their creators, to whom they will be required to justify themselves.”
This, Rini writes, takes us right back to where the discussion started — to the ancient Greeks and their endless worry about how to cultivate morality in the young.
“Intelligent machines will be our intellectual children, our progeny. They will start off inheriting many of our moral norms, because we will not allow anything else. But they will come to reflect on their nature, including their relationships with us and with each other. If we are wise and benevolent, we will have prepared the way for them to make their own choices — just as we do with our adolescent children.”
In practice, this means that we should be ready to accept that “machines might eventually make moral decisions that none of us find acceptable. The only condition is that they must be able to give intelligible reasons for what they’re doing. An intelligible reason is one you can at least see why someone might find morally motivating, even if you don’t necessarily agree.”
“Thinking about parenting reveals why the Celestial view is a non-starter for robot morality. Good parents don’t throw their adolescents out into the world to independently reason about the right thing to do. […] A parent who simply walks away from a child facing such dilemmas is like a parent who throws his child into the deep end of the pool and says: go ahead, swim.”
“One day, machines might be smarter than we are, just as our children often turn out to be. They will certainly be different from us, just as our children are. In this way, the coming human generations are no different from the coming alien minds of intelligent machines,” Rini concludes. We will all, one day, live in a world where those who come after us, including machines, will reshape morality in ways that are unrecognisable. “We can’t predict this new moral path. We shouldn’t try. But we should be ready to guide and accept it.”
And this …
Some possible precipitating factors are already in place. How the West reacts to them will determine the world’s future, freelance journalist Rachel Nuwer says in How Western civilisation could collapse.
“[…] Western societies may not meet with a violent, dramatic end. In some cases, civilisations simply fade out of existence — becoming the stuff of history not with a bang but a whimper. The British Empire has been on this path since 1918, [Jorgen] Randers [a professor emeritus of climate strategy at the BI Norwegian Business School, and author of 2052: A Global Forecast for the Next Forty Years] says, and other Western nations might go this route as well. As time passes, they will become increasingly inconsequential and, in response to the problems driving their slow fade-out, will also starkly depart from the values they hold dear today. ‘Western nations are not going to collapse, but the smooth operation and friendly nature of Western society will disappear, because inequity is going to explode,’ Randers argues. ‘Democratic, liberal society will fail, while stronger governments like China will be the winners.’”
“A sign that we are entering into a danger zone is the increasing occurrence of ‘non-linearities,’ or sudden, unexpected changes in the world’s order, such as the 2008 economic crisis, the rise of ISIS, Brexit, or Donald Trump’s election.” — Thomas Homer-Dixon
“Western civilisation is not a lost cause, however. Using reason and science to guide decisions, paired with extraordinary leadership and exceptional goodwill, human society can progress to higher and higher levels of well-being and development, [Thomas] Homer-Dixon [the chair of global systems at the Balsillie School of International Affairs, and author of The Upside of Down] says. Even as we weather the coming stresses of climate change, population growth and dropping energy returns, we can maintain our societies and better them. But that requires resisting the very natural urge, when confronted with such overwhelming pressures, to become less cooperative, less generous and less open to reason. ‘The question is, how can we manage to preserve some kind of humane world as we make our way through these changes?’ Homer-Dixon says.”
Have we achieved so much so fast that the world we imagined as children is totally boring to us now? Michael Chabon wonders in his wonderful 2006 article for Long Now, titled The Future Will Have to Wait.
“The Sex Pistols, strictly speaking, were right: There is no future, for you or for me. The future, by definition, does not exist. ‘The Future,’ whether you capitalize it or not, is always just an idea, a proposal, a scenario, a sketch for a mad contraption that may or may not work. ‘The Future’ is a story we tell, a narrative of hope, dread, or wonder. And it’s a story that, for a while now, we’ve been pretty much living without.”
“If you ask my 8-year-old about the Future, he pretty much thinks the world is going to end, and that’s it. Most likely global warming, he says — floods, storms, desertification — but the possibility of viral pandemic, meteor impact, or some kind of nuclear exchange is not alien to his view of the days to come. Maybe not tomorrow, or a year from now. The kid is more than capable of generating a full head of optimistic steam about next week, next vacation, his 10th birthday. It’s only the world 100 years on that leaves his hopes blank. My son seems to take the end of everything, of all human endeavor and creation, for granted. He sees himself as living on the last page, if not in the last paragraph, of a long, strange, and bewildering book. If you had told me, when I was 8, that a little kid of the future would feel that way — and that what’s more, he would see a certain justice in our eventual extinction, would think the world was better off without human beings in it — that would have been even worse than hearing that in 2006 there are no hydroponic megafarms, no human colonies on Mars, no personal jet packs for everyone. That would truly have broken my heart.”
Kathryn Schulz, a ‘critic at large’ for The New Yorker, and the author of Being Wrong: Adventures in the Margin of Error, wrote a fascinating long read about the past, present and future of the Arctic, titled Polar Expressed: What if an ancient story about the Far North came true?
“Sometime around 330 B.C., a Greek geographer and explorer by the name of Pytheas left what is now the city of Marseilles and set sail for the Far North. No one knows exactly what landmass he reached — possibly Iceland, possibly the Faroe Islands, possibly Greenland. Whatever it was, it lay six days north of England and one day south of what Pytheas described (per later Greek geographers; his own writings did not survive) as a frozen ocean, a place that man could ‘neither sail nor walk.’ At a time when Aristotle was still hanging out in the agora, Pytheas had discovered pack ice.”
“Pytheas called the place he encountered Thule, as in ultima Thule — the land beyond all known lands. That is one of three names the Greeks gave us for the Far North. The second is Arctic, from Arktikos — ‘of the great bear.’ The reference was not to the polar bear, unknown in Europe until the eighteenth century, but to Ursa Major, the most prominent circumpolar constellation in the northern skies.
“Whatever the original meaning, ‘far-away land full of big bears’ turned out to be an apt description of the Arctic. But the third name the Greeks bestowed on the north was considerably less accurate — and considerably more important for the future of polar exploration. That name was Hyperborea: the region beyond the kingdom of Boreas, god of the north wind. Somewhere above his frozen domain, the Greeks believed, lay a land of peace and plenty, home to fertile soils, warm breezes, and the oldest, wisest, gentlest race on earth. ‘Neither disease nor bitter old age is mixed / in their sacred blood,’ the poet Pindar wrote of the Hyperboreans in the fifth century B.C. ‘Far from labor and battle they live.’”
“Who knows what our hearts are made of; but the Arctic, unlike the Antarctic, is made only of frozen ocean. Once the ice disappears, there will be nothing there. At that point, if we reach that point, the Pole will become once again what it was long ago: a place we know only through stories.” — Kathryn Schulz
“My octogenarian dad wanted to study Homer’s epic and learn its lessons about life’s journeys. First he took my class. Then we sailed for Ithaca.”
This is the beginning of another great story in The New Yorker, A Father’s Final Odyssey, by Daniel Mendelsohn.
“My father was beaming: ‘Exactly! Without the gods, he’s helpless.’
“It was when he said the word helpless that I suddenly understood. I had been thinking that his resistance to the role of the gods in the Odyssey was just part of his loathing for religion in general. But when he said the word helpless I saw that the deeper problem, for my father, was that Odysseus’ willingness to receive help from the gods marked him as weak, as inadequate. I thought of all the times he had growled, ‘There’s nothing you can’t learn to do yourself, if you have a book!’ I thought of the 1957 Chevrolet Bel Air under whose chassis he had spent so many weekends, reluctant to let it die, a pile of car-repair manuals just within reach of one greasy hand; of the Colonial armchairs he had painstakingly assembled from kits in the garage ‘with no help from anyone.’ I thought of how, after taking out the appropriate books from the public library on Old Country Road, he had taught himself how to write song lyrics, how to build barbecue accelerators, to create a compost heap, to construct a wet bar. No wonder he was allergic to religion. No wonder he couldn’t bear the fact that — right up until the slaughter of the suitors, at the end of the poem — the gods intervene on Odysseus’ behalf.
“If you needed gods, you couldn’t say you did it yourself. If you needed gods, you were cheating.”
An Odyssey: A Father, a Son, and an Epic — “A deeply moving tale of a father and son’s transformative journey in reading–and reliving–Homer’s epic masterpiece” — will be published by Alfred A. Knopf in September 2017.
From a purely commercial perspective, Richard Buckminster Fuller (1895-1983) was a failed inventor. And yet, at the same time, he was one of the most influential, protean innovators of the 20th century, David Pescovitz wrote in TIME (2014). “The curious inventions that Bucky is best known for were often mere incidental offshoots of his real genius. They were artifacts that embodied not a specific solution to a particular problem, but rather an entirely new way of thinking about the problem at its most fundamental levels. ‘I did not set out to design a geodesic dome,’ he once said. ‘I set out to discover the principles operative in Universe. For all I knew, this could have led to a pair of flying slippers.’”
At the Institute for the Future, where Pescovitz is a researcher, they call this methodology ‘human-futures interaction.’ “And it works,” says Pescovitz. “A good ‘artifact from the future’ will provoke the person who sees it, and touches it, to think about the future and hopefully make better decisions in the present.”
“The real lesson in Buckminster Fuller’s books is in the beauty (and joy) of ‘whole-systems thinking.’ Re-encountering ‘Bucky,’ I’m constantly reminded of how much I don’t know, the importance of unintended consequences and that the most interesting experiences, insights and breakthroughs are often found at the intersection of disciplines and, sometimes, at the very fringes of reason.” — David Pescovitz in Buckminster Fuller Forever: In Praise of an American Visionary (TIME, 2014)
Recommended read: You Belong to the Universe: Buckminster Fuller and the future, Jonathon Keats (Oxford University Press, 2016).
“Treasures from the Wreck of the Unbelievable — Damien Hirst’s decade-in-the-making Venetian extravaganza — is as unbelievable as his title implies. […] The artist has filled not one but two museums with hundreds of objects in marble, gold and bronze, crystal, jade and malachite — heroes, gods and leviathans all supposedly lost in a legendary shipwreck 2,000 years ago and now raised from the Indian Ocean at Hirst’s personal expense. It is by turns marvellous and beautiful, prodigious, comic and monstrous.”
This art about what we think art might be is based on a lifetime’s looking, writes Laura Cumming in The Guardian.
“It gradually becomes apparent that this is not just a spectacular combination of storytelling, visual invention and slow-building humour, but a meditation on belief and truth. Some visitors were clearly persuaded by all this fake antiquity, the day I was there, and then enjoyed the revelation of having been gulled. And this despite innumerable clues: the pharaoh looks suspiciously like Pharrell Williams; the Medusa’s head is a 3D reprise of Caravaggio’s masterpiece; ‘Made in China’ is imprinted, albeit on the back, on one statue.”
“The value of art, its meaning and worth: how are these to be determined — by the market, or with our eyes? These vital questions are summoned by the contrast between the two venues, between the exceptional creations at the customs house and their expensive knock-offs at the palace. In the former, Hirst has sculpted a brilliant incarnation of William Blake’s The Ghost of a Flea as a small, complex head, with an entirely alien force of personality; at the palace, the ghost becomes a gross striding machine, 60ft high (a size gag) and covered in coral like extruded toothpaste. But in the same museum, tellingly, Hirst portrays himself as The Collector — double-chinned, barrel-chested and cankered with coral: somewhere between a bloated bronze Caesar and a sad-eyed Messerschmidt bust.”
In Dezeen’s final movie from the AHEAD Asia awards, Australian architect Kerry Hill says he doesn’t believe in what he calls ‘plonk architecture.’ Hill approaches every building differently.
“We do not have a common architectural language,” he says in the movie, which was filmed […] in Singapore. “We like to think that each building is designed especially for its context and its place.”
He adds: “I don’t believe in what I call ‘plonk architecture’. What I mean by that is a Gehry here, a Gehry there — architecture at home everywhere and nowhere. I feel that you need to perpetuate the traditions within the culture and material of a place through your architecture, so that it is appropriate.”
“The future is always somebody else’s present — it will very likely feel as authentic, and only as horrific, as our moment does to us. But the present is also somebody else’s future: We are already standing on someone else’s ludicrous map. Except none of us are in on the joke, and I’m guessing that it won’t feel funny any time soon.” — Jon Mooallem in Our Climate Future Is Actually Our Climate Present