“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne
Random finds is a weekly curation of my tweets and a reflection of my curiosity.
In an excellent long read, David Weinberger writes that our machines now have knowledge we will never be able to understand.
In 2008, Chris Anderson wrote, “The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”
At the time, Anderson kicked up a little storm. For example, in an article in a journal of molecular biology, philosophy professor Massimo Pigliucci asked, “[…] if we stop looking for models and hypotheses, are we still really doing science?” Apparently, the answer to this question was supposed to be: “No,” Weinberger writes. “But today — not even a decade since Anderson’s article — the controversy sounds quaint. Advances in computer software, enabled by our newly capacious, networked hardware, are enabling computers not only to start without models — rule sets that express how the elements of a system affect one another — but to generate their own, albeit ones that may not look much like what humans would create. It’s even becoming a standard method, as any self-respecting tech company has now adopted a ‘machine-learning first’ ethic.”
Plato taught us that true beliefs ought to be justifiable. However, “[o]ur new reliance on inscrutable models as the source of the justification of our beliefs puts us in an odd position. If knowledge includes the justification of our beliefs, then knowledge cannot be a class of mental content, because the justification now consists of models that exist in machines — of models that human mentality cannot comprehend,” Weinberger argues. “But the promise of machine learning is that there are times when the machine’s inscrutable models will be far more predictive than the manually constructed, human-intelligible ones. In those cases, our knowledge, if we choose to use it, will depend on justifications that we simply cannot understand.
But, for all the success of machine learning models, we are now learning to be skeptical as well. The paradigmatic failures seem to be ones in which the machine justification has not escaped its human origins enough.”
Weinberger gives the example of a system that was trained to evaluate the risks posed by individuals up for bail, which let hardened white criminals out while keeping African Americans with lesser criminal records in jail. The system’s algorithm was learning from the biases of the humans whose decisions were part of the data. Mike Williams, a research engineer at Fast Forward Labs, told Weinberger that we need to be especially vigilant about the prejudices that often, and perhaps always, make their way into which data sets are considered important and how those data are gathered.
Cathy O’Neil, author of the recent book Weapons of Math Destruction, also points to implicit biases in the values that determine which data sets we use to train a computer. She gave Weinberger “an example of someone looking for the best person to fill a job, with one desired trait being, ‘someone who stays for years and gets promotions.’ Using machine learning algorithms for this, you might end up always hiring men, since women tend to stay at jobs for shorter intervals.” The same can be said for identifying bad teachers in public school systems. But what constitutes a ‘bad teacher’? The average class score on standardized tests? How many students go on to graduate? Attend college? Make money? Live happy and fulfilled lives? Humans might work this out, but machine learning algorithms may well reinstitute biases implicit in the data we have chosen to equip them with, Weinberger writes.
“So, we are likely to go down both tracks simultaneously. On the one hand, we will continue our tradition of forbidding some types of justification in order to avoid undesirable social consequences. Simultaneously, we are likely to continue to rely ever more heavily on justifications that we simply cannot fathom.
And the issue is not simply that we cannot fathom them, the way a lay person can’t fathom a string theorist’s ideas. Rather, it’s that the nature of computer-based justification is not at all like human justification. It is alien.
But ‘alien’ doesn’t mean ‘wrong.’ When it comes to understanding how things are, the machines may be closer to the truth than we humans ever could be.”
David Weinberger ends his article by saying:
“As long as our computer models instantiated our own ideas, we could preserve the illusion that the world works the way our knowledge — and our models — do. Once computers started to make their own models, and those models surpassed our mental capacity, we lost that comforting assumption. Our machines have made obvious our epistemological limitations, and by providing a corrective, have revealed a truth about the universe.
The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.
It has taken a network of machines that we ourselves created to let us see that we are the aliens.”
Recommended reading (long reads)
The Myth of a Superhuman AI, by Kevin Kelly (Backchannel)
The Dark Secret at the Heart of AI, by Will Knight (MIT Technology Review)
Raising good robots, by Regina Rini (Aeon)
For short versions of the latter two, see Random finds (2017, week 16) — On AI’s mysterious mind, raising good robots, and how Western civilisation could collapse
What makes a genius
“The genius,” wrote German philosopher Arthur Schopenhauer, “lights on his age like a comet into the paths of the planets.” Why they soar above the rest of us, we don’t exactly know, but science offers us clues, says Claudia Kalb in What Makes a Genius?
“Philosophers have long been pondering the origins of genius. Early Greek thinkers believed an overabundance of black bile — one of the four bodily humors proposed by Hippocrates — endowed poets, philosophers, and other eminent souls with ‘exalted powers,’ says historian Darrin McMahon, author of Divine Fury: A History of Genius. Phrenologists attempted to find genius in bumps on the head; craniometrists collected skulls — including philosopher Immanuel Kant’s — which they probed, measured, and weighed.”
Yet, none of them discovered a single source of genius, and such a thing is unlikely to be found. “Genius is too elusive, too subjective, too wedded to the verdict of history to be easily identified. And it requires the ultimate expression of too many traits to be simplified into the highest point on one human scale. Instead we can try to understand it by unraveling the complex and tangled qualities — intelligence, creativity, perseverance, and simple good fortune, to name a few — that entwine to create a person capable of changing the world.”
“Genius is seemingly everywhere today, hailed in our newspapers and glossy magazines, extolled in our television profiles and Internet chatter. Replete with publicists, hashtags, and ‘buzz,’ genius is now consumed by a celebrity culture that draws few distinctions between a genius for fashion, a genius for business, and a genius for anything else. If the ‘problem of genius’ of yesteryear was how to know and how to find it, ‘our genius problem’ today is that it is impossible to avoid. Genius remains a relationship, but our relationship to it has changed. All might have their fifteen minutes of genius. All might be geniuses now. … [But] a world in which all might aspire to genius is a world in which the genius as a sacred exception can no longer exist. Einstein, the ‘genius of geniuses,’ was the last of the titans. The age of the genius is gone. Should citizens of democracies mourn this passing or rejoice? Probably a bit of both. The genius is dead: long live the genius of humanity.” — Darrin McMahon in Divine Fury: A History of Genius
Intelligence has often been considered the default yardstick of genius — a measurable quality generating tremendous accomplishment. But as Lewis Terman, the Stanford University psychologist who helped pioneer the IQ test, and his collaborators would discover, monumental intelligence alone doesn’t guarantee monumental achievement.
“Charles Darwin recalled being considered ‘a very ordinary boy, rather below the common standard in intellect.’ As an adult he solved the mystery of how the splendid diversity of life came into being,” Kalb writes. “Scientific breakthroughs like Darwin’s theory of evolution by natural selection would be impossible without creativity, a strand of genius that Terman couldn’t measure.”
According to Scott Barry Kaufman, the scientific director of the Imagination Institute, “Great ideas don’t tend to come when you’re narrowly focusing on them.” Unexpected flashes of insight — so-called ‘aha moments’ — often emerge after a period of contemplation. This creative process relies on the dynamic interplay of neural networks operating in concert and drawing from different parts of the brain at once. One of these networks cultivates internal thought processes, including daydreaming and imagining. Richer communication between various areas of the brain may also help to ‘connect the dots’ — to make connections between seemingly disparate concepts.
But while neuroscientists are trying to understand how the brain fosters the development of paradigm-shifting thought processes, other researchers are wrestling with the question of when and from what this capacity develops, says Kalb.
“Over the past several decades, scientists have been searching for genes that contribute to intelligence, behavior, and even unique qualities like perfect pitch. In the case of intelligence, this research triggers ethical concerns about how it might be used; it is also exceedingly complex, as thousands of genes may be involved — each one with a very small effect. […]
Genetic potential alone does not predict actual accomplishment. It also takes nurture to grow a genius. Social and cultural influences can provide that nourishment, creating clusters of genius at moments and places in history: Baghdad during Islam’s Golden Age, Kolkata during the Bengal Renaissance, Silicon Valley today.”
However, natural gifts and a nurturing environment can still fall short of producing a genius without motivation and tenacity to propel one forward. These personality traits inspire the work of psychologist Angela Duckworth, who believes that a combination of passion and perseverance — ‘grit’ — drives people to achieve. No matter how brilliant a person may be, fortitude and discipline are critical to success. “When you really look at somebody who accomplishes something great, it is not effortless,” Duckworth says.
“If there were ever an individual who personified the concept of genius in every aspect, from its ingredients to its far-reaching impact, it would be Leonardo da Vinci. [His] intellect and artistry soared like Schopenhauer’s comet. The breadth of his abilities — his artistic insights, his expertise in human anatomy, his prescient engineering — is unparalleled.”
[See Why it’s so hard to recognize the geniuses around you by Anne Quito for two recently published books about Leonardo: Leonardo da Vinci by Walter Isaacson, and Becoming Leonardo by Mike Lankford.]
Now, “an international group of scholars and scientists is tracing Leonardo’s genealogy and hunting down his DNA to learn more about his ancestry and physical characteristics, to verify paintings that have been attributed to him — and, most remarkably, to search for clues to his extraordinary talent.”
One of their early goals is to explore the possibility that Leonardo’s genius stemmed not only from his intellect, creativity, and cultured environment but also from his exemplary powers of perception. “In the same way that Mozart may have had extraordinary hearing, Leonardo appears to have had extraordinary visual acuity,” says Jesse Ausubel, an environmental scientist who is coordinating the Leonardo Project.
“The Leonardo Project team doesn’t yet know where to look for answers to other questions, such as how to explain Leonardo’s remarkable ability to visualize birds in flight. ‘It’s as if he was creating stroboscopic photographs of stop-action,’ says [Thomas] Sakmar [a specialist in sensory neuroscience]. ‘It’s not far-fetched that there would be genes related to that ability.’ He and his colleagues view their work as the beginning of an expedition that will lead them down new pathways as DNA gives up its secrets.
The quest to unravel the origins of genius may never reach an end point. Like the universe, its mysteries will continue to challenge us, even as we reach for the stars. For some, that is as it should be. ‘I don’t want to figure it out at all,’ says [jazz pianist] Keith Jarrett when I ask if he is comfortable not knowing how his music takes hold. ‘If someone offered me the answer, I’d say, Take it away.’ In the end it may be that the journey is illuminating enough and that the insights it reveals along the way — about the brain, about our genes, about the way we think — will nurture glimmers of genius in not just the rare individual but in us all.”
“I’m bypassing the brain completely. I am being pulled by a force that I can only be thankful for. … It’s a vast space in which I trust there will be music.” — Keith Jarrett
And this …
On the occasion of Earth Day, philosopher Koert van Mensvoort wrote his Letter to Humanity. Van Mensvoort urges humans not to be slaves or victims of their own technology, but instead to use technology to enhance humanity. His hope is to encourage a new perspective on the role of man on Earth.
“Technology has become so omnipresent on our planet that it has ushered in a new environment, a new setting, that is transforming all life on earth. A technosphere — an ecology of interacting technologies that evolved after your arrival — has developed on top of the existing biosphere. Its impact on life on earth can hardly be underestimated and is comparable to, and perhaps even greater than, that of the emergence of animals 500 million years ago.”
“We cannot imagine the future of humanity without thinking about the future of technology. You must move forward — even though you only just got here. You’re a teenager, but it’s time to grow up. Technology is humanity’s self-portrait. It’s the materialisation of human ingenuity in the physical world. Let’s make it an artwork we can be proud of. Let’s use technology to build a more natural world and map out a path to the future that works not only for humanity but for all the other species, the planet and ultimately the universe as a whole.”
The yearning to see communications technology as a remedy for social ills remains strong, despite numerous psychological and sociological studies showing otherwise, writes Nicholas Carr in How tech created a global village — and put us at each other’s throats.
“Despite Facebook’s well-publicized recent struggle to control hate speech, propaganda, and fake news, [Mark] Zuckerberg seems more confident than ever that a ‘global community’ can be constructed out of software. The centerpiece of his new project is a computerized ‘social infrastructure’ that will use artificial-intelligence routines to manage information flows in a way that makes everyone happy. The system will promote universal self-expression while at the same time shielding individuals from ‘objectionable content.’”
“The problem with such geeky grandiosity goes beyond its denial of human nature. It reinforces the idea, long prevalent in American culture, that technological progress is sufficient to ensure social progress. If we get the engineering right, our better angels will triumph. It’s a pleasant thought, but it’s a fantasy. Progress toward a more amicable world will require not technological magic but concrete, painstaking, and altogether human measures: negotiation and compromise, a renewed emphasis on civics and reasoned debate, a citizenry able to appreciate contrary perspectives. At a personal level, we may need less self-expression and more self-examination.
Technology is an amplifier. It magnifies our best traits, and it magnifies our worst.
What it doesn’t do is make us better people. That’s a job we can’t offload on machines.”
Online food-delivery companies such as Deliveroo and Uber Eats have made having specially prepared food brought to your desk seem like the height of app-based luxury. Similar start-ups are gaining popularity in India too. But here, dabbawalas have been doing it for 125 years, and the newcomers have much to learn, writes Edd Gent in The unsurpassed 125-year-old network that feeds Mumbai.
Even so, as the convenience of app-based delivery services catches on, will the dabbawalas keep up? Food delivery is firmly in the sights of India’s tech entrepreneurs, says Pankaj Jain, a partner at US start-up accelerator 500 Startups. But he thinks any threat is some way off, and part of the problem is the assumption that Silicon Valley’s business models can simply be transplanted to India. Many of these start-ups burned through investor cash, making fancy apps and offering discounts in pursuit of market share rather than building reliable supply chains and a solid business plan. He thinks food-tech businesses could learn from the “strong fundamentals” of the dabbawalas. “I think food delivery 2.0 in India is going to be dabbawalas on tech,” Jain says.
The dabbawalas’ low-tech approach could be a strength. “New companies give their customers good offers but they’re just interested in capturing the market. The dabbawalas have deeper reasons for doing it. Serving their customers is like serving their god,” says Subodh Sangle, who coordinates the Mumbai dabbawalas.
Having seen the Mumbai dabbawalas at work, I can only hope Sangle is right, and that India’s tech entrepreneurs will think twice before replacing this humane and human-scale community with something like this …
Beyond Belief (Artforum, May 2015) is an article by David Huber about the architecture of Lacaton and Vassal. They may be architects, he writes, but their real métier is doubt. As Anne Lacaton explained in a 2003 interview, “The work of an architect is not only to build. The first [thing] to do is to think, and only after that are you able to say whether you should build or not.”
Lacaton and Vassal have relentlessly questioned architecture’s orthodoxies, disrupting force-fed assumptions about the economies and practices that drive the design, construction, and inhabitation of space. Sometimes, defying a global culture that seems to value iconic architecture at any cost, they even deem building itself altogether unnecessary.
But although skeptical about the presumed role of architecture, Lacaton and Vassal don’t reject building altogether. In 2009, a design competition was launched to convert a postwar shipbuilding workshop in Dunkirk, which had been anointed ‘the cathedral,’ into the FRAC Nord-Pas de Calais, an affiliate of France’s Regional Contemporary Art Fund. On a visit, however, Lacaton and Vassal immediately sensed dissonance in the competition mandate: the preservationist instinct that had spared the building’s concrete shell, was, by installing a museum inside, about to destroy the space. Filling the cathedral wasn’t just senseless, it was unnecessary, the architects argued. Instead, they proposed to create a new structure of the same volume and shape next to the original building, which would, as a result, be freed up to do what it did best: to be empty. In other words, to accommodate not only a flexible range of art installations but also other kinds of activities.
“‘Architecture is not so important in life,’ remarked Lacaton in 2003. ‘We can have a life without architecture.’ This may, to some, sound a forlorn note. Yet the weird, marvelous work of Lacaton and Vassal points to something else: a great expansion of what architecture and architects can be and do. In place of a knee-jerk will to form, their hesitations urge a poetics of appraisal. Imagine a posture in which, no longer ensnared by dubious orthodoxies and a priori obligations to design, architecture’s impulse to accommodate — its reflexive Yes — assumes the dexterity of Perhaps, I prefer not to and Why?”
“New York still has so much pizzazz, because people make it new every day. Like all cities, it’s self-organizing. People looking for a date on Third Avenue make it into a place full of hope and expectation, and this has nothing to do with architecture. Those are the emotions that draw us to cities, and they depend on things being a bit messy. The most perfectly designed place can’t compete. Everything is provided, which is the worst thing we can provide. There’s a joke that the father of an old friend used to tell, about a preacher who warns children, ‘In Hell there will be wailing and weeping and gnashing of teeth.’ ‘What if you don’t have teeth?’ one of the children asks. ‘Then teeth will be provided,’ he says sternly. That’s it — the spirit of the designed city: Teeth Will Be Provided for You.” — Jane Jacobs in City and Songs by Adam Gopnik (The New Yorker, 2004)