Random finds (2016, week 48) — On onlyness, trusting technology, and our fear of being irrelevant
On Fridays, I run through my tweets to select a few observations and insights that have kept me thinking during the week.
Last November, Nilofer Merchant was one of the guests at the fifth annual 3% Conference. During her opening keynote, she shared a new construct for how to triumph over the status quo. Liz Fairchild wrote a short session recap.
“Most of us want to make a difference in the world, to right the wrong, to fix what we know needs fixing. But most of us don’t have the power to do so. The more unusual our idea, the less likely we are to have the power. And the ways typically used to create change — for example, climbing to the top of the organization — have often meant we adapt ourselves to fit in, rather than be the change-maker to stand out,” Fairchild writes.
“As a brown woman,” Merchant told her audience, “your chances to be seen and heard in the world are next to nothing.” But in her 20 years in the tech industry, it wasn’t just being a brown woman that created roadblocks; it was her very way of thinking. She was told, “You don’t fit into the shape of how I expect ideas to come.” And she was told she was “too shrill to be heard,” “too young to have the experience necessary,” and “too old to be relevant.”
In some ways, how we feel about power, title and rank is external to us. It’s often predicated not on our own feelings of self-worth, but on the way that others choose to codify us. And so, we hide who we truly are to protect ourselves. Almost everyone does. Fitting in often means disguise. And this isn’t just for women. According to a recent study from Deloitte, 69% of people were covering themselves to fit in. To punctuate this finding, Merchant asked how many people in the audience were actually covering some of who they are. Almost everyone raised their hand. Yet, when we fight to hold back who we are, the power of our ideas dwindles.
“We can never be non-conformists if we can’t take the risk to be our true selves.” — Nilofer Merchant
Conforming is costing us our ideas. But, says Merchant, our ‘onlyness’ can help. It’s “that thing that only you have, coming from that spot in the world in which you stand, a function of your history and experience, visions and hopes. It is everything that you have coming from your past, that only you can see.”
Onlyness is “from that place that all new ideas come, grow, and become real. The fact that others don’t see it doesn’t make it any less valuable.”
In an interview with Kyle O’Brien for The Drum, Merchant said she “ended up coining a term in order to get to this nugget, which was, ‘If value creation used to be about the means of production or capital, now it’s about ideas. But where do ideas come from? They come from that spot in the world only you stand in, which is a function of your history and experience, visions and hopes.’ My argument was each of us has something that only each of us can offer, that when connected together using networks — not hierarchical organization constructs — that when connected together can now scale in a way that we’ve never had before. Now ideas born of an ‘only’ can scale and make a dent in the world. ‘Onlyness’ is that thesis.”
Earlier this year, Lisa Gansky had a conversation on onlyness, openness and originality with Nilofer Merchant, who was, at the time, living in Paris, where she had taken herself out of circulation in order to sink deeply into the idea of onlyness.
More on onlyness in Onlyness: A Trillion Dollar Opportunity. In this research paper for the Martin Prosperity Institute, Nilofer Merchant and research associate Darren Karn ask and answer the question, “What is the untapped capacity of talent in our modern economy?” They provide both a framework for measuring the potential capacity of workers and a global economic sizing, two important dimensions that will advance our understanding of economic prosperity.
On trusting technology
“The early days of artificial intelligence have been met with some very public hand wringing. Well-respected technologists and business leaders have voiced their concerns over the (responsible) development of AI. And Hollywood’s appetite for dystopian AI narratives appears to be bottomless. This is not unusual, nor is it unreasonable. Change, technological or otherwise, always excites the imagination. And it often makes us a little uncomfortable,” Guru Banavar writes in What It Will Take for Us to Trust AI.
According to Banavar, who is IBM’s chief science officer of cognitive computing, “we have never known a technology with more potential to benefit society than artificial intelligence. […] However, if we are ever to reap the full spectrum of societal and industrial benefits from artificial intelligence, we will first need to trust it.” He believes it’s our obligation to develop AI in a way that engenders trust and safeguards humanity. In other words, building trust is essential to the adoption of artificial intelligence.
“We are building the engines, so the question is not will AI rise up and kill us, but will we give it the tools to do so?” — Genevieve Bell in ‘Humanity’s greatest fear is about being irrelevant’
For us to trust AI, Banavar believes it’s important to recognize and minimize bias. “Bias could be introduced into an AI system through the training data or the algorithms,” he says. “The curated data that is used to train the system could have inherent biases, e.g., towards a specific demographic, either because the data itself is skewed, or because the human curators displayed bias in their choices. The algorithms that process that information could also have biases in the code, introduced by a developer, intentionally or not. The developer community is just starting to grapple with this topic in earnest.”
Managing bias is an element of the larger issue of algorithmic accountability. “AI systems must be able to explain how and why they arrived at a particular conclusion so that a human can evaluate the system’s rationale. Many professions, such as medicine, finance, and law, already require evidence-based auditability as a normal practice for providing transparency of decision-making and managing liability. In many cases, AI systems may need to explain rationale through a conversational interaction (rather than a report), so that a person can dig into as much detail as necessary.”
“The technology we thought we were using to make life more efficient started using us some time ago. It is now attempting to reshape our social behaviour into patterns reminiscent of the total surveillance culture of the medieval village, East Germany under the Stasi, or the white supremacist state of South Africa in which I grew up.” — Rachel Holmes in We let technology into our lives. And now it’s starting to control us.
A bit more …
According to Genevieve Bell, “humanity’s greatest fear is about being irrelevant.” In an interview with Ian Tucker for The Guardian, Bell, an Australian anthropologist who has worked at tech company Intel for 18 years and is currently its head of sensing and insights, explains why being scared about AI has more to do with our fear of each other than with killer robots.
“Western culture has some anxieties about what happens when humans try to bring something to life, whether it’s the Judeo-Christian stories of the golem or James Cameron’s The Terminator.”
“So what is the anxiety about? My suspicion is that it’s not about the life-making, it’s about how we feel about being human. What we are seeing now isn’t an anxiety about artificial intelligence per se, it’s about what it says about us. That if you can make something like us, where does it leave us? And that concern isn’t universal, as other cultures have very different responses to AI, to big data. The most obvious one to me would be the Japanese robotic tradition, where people are willing to imagine the role of robots as far more expansive than you find in the west. For example, the Japanese roboticist Masahiro Mori published a book called The Buddha in the Robot, where he suggests that robots would be better Buddhists than humans because they are capable of infinite invocations. So are you suggesting that robots could have religion? It’s an extraordinary provocation.”
As the business world has hurried to get up to speed on storytelling, its advantages over other forms of communication and persuasion have been widely touted. But like any powerful tool, Jonathan Gottschall argues in Theranos and the Dark Side of Storytelling, humans can wield stories for good or ill.
“Establishing a culture of honest storytelling is not only a moral imperative for companies and workers, it is better business in a long-term, bottom-line sense. No matter the genre or format, the ancient prime directive of storytelling is simple: tell the truth. This applies even to the fantasy worlds of fiction. ‘Fiction,’ as Albert Camus put it, ‘is the lie through which we tell the truth.’ The world’s greatest storytellers don’t eschew falsehood and inauthenticity because they are morally superior to the rest of us (anyone who’s read around in their biographies knows this is not the case). They do so in recognition that truth-telling is better business for them as well.”
Economists believe in full employment. Americans think that work builds character. But what if jobs aren’t working anymore?
“But, wait, isn’t our present dilemma just a passing phase of the business cycle? What about the job market of the future? Haven’t the doomsayers, those damn Malthusians, always been proved wrong by rising productivity, new fields of enterprise, new economic opportunities?” says James Livingston, a professor of history at Rutgers University in New Jersey and the author of No More Work: Why Full Employment is a Bad Idea (2016), in his essay Fuck work for Aeon.
“Well, yeah — until now, these times. The measurable trends of the past half-century, and the plausible projections for the next half-century, are just too empirically grounded to dismiss as dismal science or ideological hokum. They look like the data on climate change — you can deny them if you like, but you’ll sound like a moron when you do.
“When work disappears, the genders produced by the labour market are blurred. When socially necessary labour declines, what we once called women’s work — education, healthcare, service — becomes our basic industry, not a ‘tertiary’ dimension of the measurable economy. The labour of love, caring for one another and learning how to be our brother’s keeper — socially beneficial labour — becomes not merely possible but eminently necessary, and not just within families, where affection is routinely available. No, I mean out there, in the wide, wide world.”
I know that building my character through work is stupid because crime pays. I might as well become a gangster.
“And yet, and yet. Though work has often entailed subjugation, obedience and hierarchy (see above), it’s also where many of us, probably most of us, have consistently expressed our deepest human desire, to be free of externally imposed authority or obligation, to be self-sufficient. We have defined ourselves for centuries by what we do, by what we produce.
“But by now we must know that this definition of ourselves entails the principle of productivity — from each according to his abilities, to each according to his creation of real value through work — and commits us to the inane idea that we’re worth only as much as the labour market can register, as a price. By now we must also know that this principle plots a certain course to endless growth and its faithful attendant, environmental degradation.”
“Our thinking about simplicity and luxury, frugality and extravagance, is fundamentally inconsistent,” writes Emrys Westacott, a professor of philosophy at Alfred University in New York, in Why the simple life is not just beautiful, it’s necessary. “We condemn extravagance that is wasteful or tasteless and yet we tout monuments of past extravagance, such as the Forbidden City in Beijing or the palace at Versailles, as highly admirable. The truth is that much of what we call ‘culture’ is fuelled by forms of extravagance.”
If our current methods of making, getting, spending and discarding prove unsustainable, then there could come a time — and it might come quite soon — when we are forced towards simplicity. In which case, a venerable tradition will turn out to contain the philosophy of the future.
“Somewhat paradoxically, then, the case for living simply was most persuasive when most people had little choice but to live that way. The traditional arguments for simple living in effect rationalise a necessity. But the same arguments have less purchase when the life of frugal simplicity is a choice, one way of living among many. Then the philosophy of frugality becomes a hard sell. That might be about to change, under the influence of two factors: economics and environmentalism. When recession strikes, as it has done recently (revealing inherent instabilities in an economic system committed to unending growth) millions of people suddenly find themselves in circumstances where frugality once again becomes a necessity, and the value of its associated virtues is rediscovered.”
“Non-coincidentally, in line with this shift from print to digital there’s been an increase in the number of scientific studies of narrative forms and our cognitive responses to them,” Will Self writes in Are humans evolving beyond the need to tell stories?
“There’s a nice symmetry here: just as the technology arrives to convert the actual into the virtual, so other technologies arise, making it possible for us to look inside the brain and see its actual response to the virtual worlds we fabulate and confabulate. In truth, I find much of this research — which marries arty anxiety with techno-assuredness — to be self-serving, reflecting an ability to win the grants available for modish interdisciplinary studies, rather than some new physical paradigm with which to explain highly complex mental phenomena. Really, neuroscience has taken on the sexy mantle once draped round the shoulders of genetics. A few years ago, each day seemed to bring forth a new gene for this or that. Such ‘discoveries’ rested on a very simplistic view of how the DNA of the human genotype is expressed in us poor, individual phenotypes — and I suspect many of the current discoveries, which link alterations in our highly plastic brains to cognitive functions we can observe using sophisticated equipment, will prove to be equally ill-founded.”
Le Corbusier started developing colour systems in the 1920s, creating a ‘colour keyboard’ to find and match colours for use in interiors. A rare first edition of Le Corbusier’s 1931 interactive design guide Polychromie Architecturale: Die Farbenklaviaturen is up for auction in New York.
“Le Corbusier believed that specific shades could affect rooms and their inhabitants, creating a welcoming atmosphere or altering their perception of space. He identified functions that could be applied to different shades — including psychological effects, weight, depth, perception and unity — and then chose groups of colours to attribute to each of these. Paler colours with natural pigments could be used to create warmth and light, while deeper shades could enhance or camouflage elements in a room. He believed synthetic pigments had a more invigorating effect, and would have a dramatic influence on architecture.” (Source: Rare Le Corbusier pop-up book showcases his colour theories, by Emma Tucker in dezeen)
“While short in length, Are We Human? The Design of the Species 2 seconds, 2 days, 2 years, 200 years, 200,000 years, was big on vision. By just asking ‘are we human?’ it opened up a dialogue that could be as short as ‘yes’ or considerably protracted. In either case, it put forward that the key to any discussion of this topic is a relationship to, and the act of, design. In effect, it raised the discourse of design above mere products and objects while grounding it in the very fabric of humanity,” writes Matthew Messner in his article on the 3rd Istanbul Design Biennial for The Architect’s Newspaper.
To provoke responses to this instigation, co-curators Beatriz Colomina and Mark Wigley set out eight interlinked propositions to which the 250 participating designers, architects, scholars, and scientists reacted, such as ‘Design is even the design of neglect’ and ‘Good Design is an anesthetic.’
“These propositions set up a standing provocation: What defines a human is the act of design. The resulting show investigated this claim, presenting evidence in support of, and questioning of, these eight statements. The array of work ranged from very physical infrastructures of resources, power, and movement around the world, to the ephemeral space of social media. The show specifically rejected the construct of looking at the immediate past and future, usually two years before and after a biennial, and instead looked back to the beginning of humanity and the path to its current state.”
“Thinking about genres in that way — about working along the borders of genres — was very helpful to me in trying to reintegrate my original motivations for wanting to write, and the kinds of things I thought I would be writing, with the writer I am now, and have become.” — Michael Chabon in “It’d be a lot better if you let me lie.” — Michael Chabon on Tricksters & Sleuths