Random finds (2017, week 4) — On the future of AI, Trump’s assault on the Enlightenment, and More ‘vs.’ Malthus
On Fridays, I run through last week’s tweets to select the ones that have kept me pondering.
On the future of AI
Progress in artificial intelligence causes some people to worry that software will take jobs away from humans. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs — the task of designing machine-learning software.
In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. But in recent months others have also reported progress on getting learning software to make learning software. The list includes researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT and Google’s other artificial intelligence research group, DeepMind.
“If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply,” Tom Simonite writes in AI Software Learns to Make AI Software.
According to Simonite, the idea of creating software that learns to learn has been around for a while. Until recently, however, these experiments never produced results that rivaled what humans could come up with. Yoshua Bengio, a professor at the University of Montreal, was one of the early explorers. According to Bengio, today’s much more potent computing power and the advent of a technique called deep learning are what make the approach work. Having said that, he believes it still requires such extreme computing power that it’s not yet practical to think about lightening the load of, or partially replacing, machine-learning experts.
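The simplest form of the idea — software searching for a good machine-learning setup on its own — can be sketched in a few lines. This is a toy illustration, not the Google Brain method: a "controller" randomly proposes training configurations for a tiny perceptron and keeps whichever scores best on held-out data. All names and parameters here are illustrative assumptions.

```python
import random

def train_perceptron(data, lr, epochs):
    """Train a 2-input perceptron; returns weights [w1, w2, bias]."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def accuracy(w, data):
    correct = sum(
        1 for (x1, x2), label in data
        if (1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0) == label
    )
    return correct / len(data)

# A linearly separable toy task: label is 1 when x1 + x2 > 1.
random.seed(0)
points = [(random.random(), random.random()) for _ in range(200)]
data = [((x1, x2), 1 if x1 + x2 > 1 else 0) for x1, x2 in points]
train, test = data[:150], data[150:]

# The "controller": sample candidate configurations at random and keep
# the best performer -- the crudest form of automated ML design.
best_score, best_cfg = -1.0, None
for _ in range(20):
    cfg = {"lr": random.choice([0.01, 0.1, 0.5]),
           "epochs": random.choice([5, 20, 50])}
    w = train_perceptron(train, cfg["lr"], cfg["epochs"])
    score = accuracy(w, test)
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg, round(best_score, 2))
```

Real systems like the ones Simonite describes replace random sampling with learned controllers (often reinforcement learning) and replace the perceptron with deep networks — which is exactly why Bengio notes the computing cost is still extreme.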
Tom Galeon also writes about an AI capable of learning things humans didn’t teach it. “Most AI systems learn to do things using a set of labelled data provided by their human programmers. Parham Aarabi and Wenzhi Guo, engineers from the University of Toronto, Canada, have taken machine learning to a different level, developing an algorithm that can learn things on its own, going beyond its training.”
According to Aarabi and Guo, they have created an algorithm that identifies people’s hair in photographs. Instead of the usual method of training neural networks (exposure to existing sets of examples), their algorithm learned directly from human instructions, a model called ‘heuristic training.’ It correctly classified difficult, borderline cases — distinguishing the texture of hair versus the texture of the background. “What we saw,” Aarabi says, “was like a teacher instructing a child, and the child learning beyond what the teacher taught her initially.”
“Continuous development of this [heuristic] approach can lead to AI that judges situations it wasn’t previously exposed to, such as identifying previously unseen cancerous tissues during medical analysis or the objects surrounding an autonomous vehicle as it moves. ‘We’re keen to apply our method to other fields and a range of applications, from medicine to transportation,’ Guo explained. This could be the future of AI.”
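To make the ‘heuristic training’ idea concrete, here is a minimal sketch of the general pattern: a human-written rule labels only the easy cases, and a simple model fitted to those labels generalizes to borderline ones. The features, thresholds, and nearest-centroid model are all illustrative assumptions on my part — this is not the actual Aarabi/Guo algorithm.

```python
import random
import math

def heuristic(brightness):
    """Human rule: very dark pixels are hair, very bright are background.
    Returns 1 (hair), 0 (background), or None (abstain on borderline)."""
    if brightness < 0.3:
        return 1
    if brightness > 0.7:
        return 0
    return None

random.seed(1)
# Synthetic "pixels": hair is dark and highly textured,
# background is bright and smooth. Each point is (brightness, texture).
hair = [(random.uniform(0.0, 0.5), random.uniform(0.6, 1.0)) for _ in range(300)]
background = [(random.uniform(0.5, 1.0), random.uniform(0.0, 0.4)) for _ in range(300)]
unlabeled = hair + background

# Step 1: let the heuristic label only the cases it is sure about.
labelled = [(p, heuristic(p[0])) for p in unlabeled
            if heuristic(p[0]) is not None]

# Step 2: fit a nearest-centroid classifier on BOTH features, so it can
# fall back on texture where brightness alone is ambiguous.
def centroid(points):
    return (sum(x for x, _ in points) / len(points),
            sum(y for _, y in points) / len(points))

hair_centroid = centroid([p for p, lab in labelled if lab == 1])
bg_centroid = centroid([p for p, lab in labelled if lab == 0])

def classify(brightness, texture):
    d_hair = math.dist((brightness, texture), hair_centroid)
    d_bg = math.dist((brightness, texture), bg_centroid)
    return 1 if d_hair < d_bg else 0

# Borderline brightness (0.5), where the heuristic itself abstains:
# the learned model decides using texture instead.
print(classify(0.5, 0.9), classify(0.5, 0.1))
```

The point of the sketch is the “child learning beyond what the teacher taught” effect: the rule never labels mid-brightness pixels, yet the model it bootstraps classifies them correctly by exploiting a second feature the rule never mentioned.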
In an article for WIRED, AI will soon ‘evolve’ like humans — and civilisation as we know it will change forever, deep-learning pioneer Jürgen Schmidhuber explains why human-level AI is near.
“Kids and some animals are still smarter than our best self-learning robots,” Schmidhuber writes. “But I think that within a few years we’ll be able to build an NN-based AI (an NNAI) that incrementally learns to become at least as smart as a little animal, curiously and creatively learning to plan, reason and decompose a wide variety of problems into quickly solvable sub-problems. Once animal-level AI has been achieved, the move towards human-level AI may be small: it took billions of years to evolve smart animals, but only a few million years on top of that to evolve humans. Technological evolution is much faster than biological evolution, because dead ends are weeded out much more quickly. Once we have animal-level AI, a few years or decades later we may have human-level AI, with truly limitless applications. Every business will change and all of civilisation will change.”
On Trump’s assault on the Enlightenment
“Last week, reports surfaced that President Donald Trump will propose a federal budget that would defund entirely the National Endowment for the Arts (NEA) and the National Endowment for the Humanities (NEH),” says Suzanne Nossel, the executive director of the Pen American Center and former deputy assistant secretary of state for international organizations at the U.S. State Department, in Donald Trump’s Assault on the Enlightenment. It’s not cost savings; it’s an attack on reason itself.
Donald Trump’s campaign and philosophy of governing aim to associate art and intellectualism with out-of-touch elites who have broken the trust of rural and less educated populations.
“Trump’s declaration of war on the arts and humanities must be seen in the context of his repudiation of the American ideals — grounded in the Enlightenment — of self-expression, knowledge, dissent, criticism, and truth. These proposals are an early effort to entrench within the machinery of the U.S. government his elemental disdain for intellectuals, analysts, and experts. Seen this way, they deserve to be rejected even by conservatives who have gleefully targeted these agencies in the past. If Donald Trump makes our venerable federal arts and humanities agencies disappear, it will represent a victory for his illiberal agenda, one conservatives and liberals must unite to defeat.”
Trump and his team have shown contempt for Enlightenment values shared by liberals and conservatives alike, Nossel writes. “Concepts like the search for truth, the open exchange of ideas, and the esteem for culture may read like empty platitudes etched in the walls of ivy-covered universities. But they are principles that undergird not just a liberal arts education but also the Common Core curriculum taught in hundreds of thousands of U.S. public schools. Unlike principled politicians on both sides of the aisle, Trump does not consider evidence that contradicts his views, concern himself with the lessons of history, or bring intellectual curiosity to the task of governing. He has not read any biographies of past presidents nor read much at all because, as he said last summer, ‘I’m always busy doing a lot.’ The process of exploration, evidence gathering, and reasoning that forms the basis of the quest for truth in any academic discipline seems to be alien to Trump. For him, being called out, rebutted, and even ridiculed for purveying falsehoods is cause not for remorse or retraction but rather reinforcement of the lies and reproof of those who dare challenge them.”
The study of the humanities is an antidote to the bleak, reductionist, and insular worldview proffered by Trump in his inaugural speech.
“Trump’s assault on truth, though novel and shocking to many Americans, is a tactic that has been tested and proved effective in repressive countries around the world, as many Russian thinkers and other experts on authoritarianism have recently pointed out. ‘Autocratic power requires the degradation of moral authority — not the capture of moral high ground, not the assertion of the right to judge good and evil, but the defeat of moral principles as such,’ wrote Russian-American journalist Masha Gessen in the New York Review of Books. The principles that Trump aims to defeat include the bedrock tenets of the Enlightenment and of American democracy — that rational thought, informed debate, and measured discourse form the basis of good government.”
A bit more …
“Utopianism and Malthusianism are not just synonyms for optimism and pessimism. They stand for divergent views of whether the obstacle to increasing human happiness is human nature, or rather the rest of nature.”
In an essay for Aeon Magazine, Is greatness finite?, Joyce E Chaplin, the James Duncan Phillips Professor of Early American History at Harvard University and author of Round About the Earth: Circumnavigation from Magellan to Orbit (2012), argues that Thomas More thought humans were the problem, while Thomas Robert Malthus acknowledged the problem of humans but concluded that material nature itself presented a greater impediment.
“Today, we live in an odd moment. Both More’s pessimism about human nature and Malthus’s pessimism about nature itself reign, but few want to admit it,” Chaplin writes. The world feels caught between infinite possibility and limited resources.
“The UK and US voting publics have responded to promises to restore some past greatness — utopias for some, consequent dystopias for others. In fact, most people in either nation (or beyond) prefer to outsource their small measure of utopianism not to politicians but to technical experts, the folks who make matter, non-human nature, serve humans. These technocrats include the computer wizards in Silicon Valley who run Google, which can record our chatter about utopias, but not our suppressed Malthusian worries about human life on the planet. We want to live in futuristic betterment but at the same time dread being shot back into premodern bare want. For every promise that some place can be great, there is a parallel claim that if some people are not excluded from it, the greatness will falter and fail. Somehow, the greatness is finite.”
Also interesting: How Silicon Valley Utopianism Brought You the Dystopian Trump Presidency in WIRED.
How did the minority party of Hitler and Goebbels take over and break the will of the German people so thoroughly that they would allow and participate in mass murder? Hannah Arendt asked that question over and over, for several decades afterward.
Hannah Arendt looked closely at the regimes of Hitler and Stalin and their functionaries, at the ideology of scientific racism, and at the mechanism of propaganda in fostering “a curiously varying mixture of gullibility and cynicism with which each member… is expected to react to the changing lying statements of the leaders.” So she wrote in her 1951 Origins of Totalitarianism, going on to elaborate that this “mixture of gullibility and cynicism […] is prevalent in all ranks of totalitarian movements”:
“In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and nothing was true… The totalitarian mass leaders based their propaganda on the correct psychological assumption that, under such conditions, one could make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism; instead of deserting the leaders who had lied to them, they would protest that they had known all along that the statement was a lie and would admire the leaders for their superior tactical cleverness.” — Hannah Arendt in Origins of Totalitarianism (1951).
(Source: Hannah Arendt Explains How Propaganda Uses Lies to Erode All Truth & Morality: Insights from The Origins of Totalitarianism)
“It’s always been somewhat confusing,” Benjamin G Martin, the director of the Euroculture MA programme at Uppsala University in Sweden, and the author of The Nazi-Fascist New Order for European Culture (2016), writes in ‘European culture’ is an invented tradition.
“If Europe has a culture, is there a European nation? Are the cultures of Finland or Poland as European as the cultures of, say, France or Germany? Who gets to decide which works of art are representative of European culture? Must the continent be homogenised to foster a unified European culture? Wasn’t this fine arts and high culture vision of Europe socially elitist and politically conservative?
Some of these questions are again relevant today, amid the newly intense conflict between Europeanist and nationalist visions of what Europe and its culture (or cultures) really are. To answer them, it helps to remember that ancient verities are few. The old world is defined by relatively new ideas.”
According to Catarina Dutilh Novaes, a professor of philosophy and the author of The Cambridge Companion to Medieval Logic (2016), “the history of logic should be of interest to anyone with aspirations to thinking that is correct, or at least reasonable.”
In her essay What is logic?, Dutilh Novaes illustrates the different approaches to intellectual enquiry and human cognition more generally. “Reflecting on the history of logic forces us to reflect on what it means to be a reasonable cognitive agent, to think properly. Is it to engage in discussions with others? Is it to think for ourselves? Is it to perform calculations?” What follows is an interesting exploration into the rise and fall, and rise, of logic.
“The history of logic,” Dutilh Novaes writes, “leads us to question the overly individualistic conception of knowledge and of our cognitive lives that we inherited from Descartes and others, and perhaps to move towards a greater appreciation for the essentially social nature of human cognition.”
“We don’t need to teach our kids to code, we need to teach them how to dream,” says Tom Goodwin in We need to teach our children how to dream.
“If we accept that the role of education is to furnish our children with the best understanding, skills and values for a prosperous and happy life, then how do we arm them for a future that we can’t imagine?” Goodwin wonders.
“If we foster creativity, fuel curiosity and help people relate via relationships and empathy, then we empower kids to be totally self-reliant. They will be agile: adaptable to change in a world that we can’t yet foresee.
The reality of the modern age is that I learned more in one year of a well-curated Twitter feed than in my entire master’s degree. I have better relationships from LinkedIn than from university.
We don’t need to change everything now, but we do need to start forgetting the assumptions that we have made. The future is more uncertain than ever, but we need to make our kids as balanced, agile, and as self-reliant as ever in order to thrive in it.”
In last week’s Think Again podcast (#82), The Mirror of Our Better Selves, French philosopher Bernard-Henri Lévy, “BHL” for short, addresses torture, the question of evil, and the tipping point at which democracy becomes something else.
BHL is among the last of a quintessentially French breed, the 20th-century intellectuel engagé. As a ‘nouveau philosophe’ disenchanted with Marxism, communism and the excesses of 1968, when civil unrest roiled France, Lévy has enjoyed a long and theatrical career since the 1970s, embracing journalism, philosophy, film and outspoken advocacy for human rights.
In Against Empathy, Paul Bloom, psychologist and Yale professor, argues that empathy is a bad thing — that it makes the world worse. While we’ve been taught that putting yourself in another’s shoes cultivates compassion, Bloom believes it actually blinds you to the long-term consequences of your actions.
In an animated interview from The Atlantic, he explains his case for why the world needs to ditch empathy.
“The full humanization of man requires the breakthrough from the possession-centered to the activity-centered orientation, from selfishness and egotism to solidarity and altruism.” — German social psychologist and philosopher Erich Fromm (The Art of Living: The Great Humanistic Philosopher Erich Fromm on Having vs. Being and How to Set Ourselves Free from the Chains of Our Culture)
“I believe there is a very powerful case to be made for optimism — I think that we’re actually on the threshold of a rare era of technological innovation in cities that has the potential to fundamentally alter quality of life across almost every dimension.” — Daniel Doctoroff, the CEO of Sidewalk Labs, a subsidiary of Alphabet, in The urban optimist.
“There are upwards of a million buildings in New York City, so you’d be forgiven for not even noticing the hundreds of the earliest skyscrapers that still dot the city today. After all, a ‘skyscraper’ in 1890 might have meant 10 stories. Today, it means 100,” Co.Design’s deputy editor, Kelsey Campbell-Dollaghan, writes in Rediscovering the early skyscrapers of New York.
“While some of these buildings became iconic, many of these early wonders have been largely forgotten — overlooked as the fabric of the city wove itself around them over the past century.”
Between 1874 and 1900, 253 buildings topped 10 stories in New York; about half of them are still standing today, while 120 have since been demolished, largely lost to memory — until now. The Skyscraper Museum has published a set of exploration tools that surface these structures as an interactive map, a timeline and, remarkably, a grid that includes a photo of each building — even the long-demolished ones.
“Knowledge consists in the search for truth. It is not the search for certainty.” — Karl Popper (In Search of a Better World: Karl Popper on Truth vs. Certainty and the Dangers of Relativism)