Random finds (2017, week 33) — On Secrets of Silicon Valley (part 2), Eliminating the Human, and cognitive diversity
“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne
Random finds is a weekly curation of my tweets, and a reflection of my curiosity. With this week …
In The Persuasion Machine, the second part of Secrets of Silicon Valley, Jamie Bartlett tells the story of how Silicon Valley’s mission to connect all of us is plunging us into a world of political turbulence that no-one can control.
Like Y Combinator’s President Sam Altman in last week’s episode, Alexander Nix, CEO of Cambridge Analytica, has an unshakeable faith in the inevitability of the future: “It’s going to be a revolution, and that is the way the world is moving. And, you know, I think, whether you like it or not, it is an inevitable fact.” In an ‘Etonesque’ way he repeats what Altman said last week: “I think if you continue this thrust of, shouldn’t we stop progress, no-one’s going to take you seriously […].”
From Secrets of Silicon Valley to David Byrne’s Eliminating the Human is just a small step. “The consumer technology I am talking about doesn’t claim or acknowledge that eliminating the need to deal with humans directly is its primary goal, but it is the outcome in a surprising number of cases,” Byrne writes. But it doesn’t have to be. “There are other possible roads we could be going down, and the one we’re on is not inevitable or the only one; it has been (possibly unconsciously) chosen.”
And also … Silicon Valley’s lack of viewpoint diversity, a few moral moves by Facebook, why regulators won’t be able to keep up with technology, and, to finish off with, beautiful architecture in Melbourne.
Secrets of Silicon Valley (part 2) — The Persuasion Machine
“The tech gods believe the election of Donald Trump threatens their vision of a globalised world. But in a cruel twist, is it possible their mission to connect the world actually helped Trump to power?” Jamie Bartlett wonders in part two of Secrets of Silicon Valley, The Persuasion Machine.
To answer that question, you need to understand how Silicon Valley’s tech industry rose to power. And for that, you have to go back 20 years to a time when the online world was still in its infancy. A time when people feared the new internet was like the Wild West, anarchic and potentially harmful.
The Telecommunications Act of 1996 was designed to civilise the internet, including protecting children from pornography. But hidden within the act was a secret whose impact no-one foresaw, Bartlett tells us. Section 230, as it is known, says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It holds the key to the internet’s freedom and has been an enabler of Silicon Valley’s accelerated growth.
Jeremy Malcolm, an analyst at the Electronic Frontier Foundation, a civil liberties group for the digital age, believes that without Section 230, “We probably wouldn’t have the same kind of social media companies that we have today. They wouldn’t be willing to take on the risk of having so much unfettered discussion.”
Section 230 “allowed a new kind of business to spring up — online platforms that became the internet giants of today,” Bartlett tells us. “Facebook, Google, YouTube […] encouraged users to upload content, often things about their lives or moments that mattered to them, onto their sites for free. And in exchange, they got to hoard all of that data but without a real responsibility for the effects of the content that people were posting. […] At first, the tech firms couldn’t figure out how to turn that data into big money.”
But that changed when a secret within that data was unlocked. Allowing Facebook users to be targeted using data about what they do on the rest of the internet opened up vast profits and has propelled Silicon Valley to the pinnacle of the global economy. “The secret of targeting us with adverts is keeping us online for as long as possible. […] Our time is the Holy Grail of Silicon Valley,” Bartlett tells us. So what is it that is keeping us hooked to Silicon Valley’s global network?
Bartlett is off to Seattle to meet Nathan Myhrvold, who saw first-hand how the tech industry embraced new psychological insights into how we all make decisions. A decade ago, Myhrvold “brought together Daniel Kahneman, the pioneer of the new science of behavioural economics [and author of Thinking, Fast and Slow], and Silicon Valley’s leaders for a series of meetings.”
“A lot of advertising is about trying to hook people in these type-one things to get interested one way or the other,” Myhrvold explains. “You’re putting a set of triggers out there that make me want to click on it.” He adds, “Tech companies both try to understand our behaviour by having smart humans think about it and increasingly also having machines think about it.”
Of course, trying to grab the consumer’s attention is nothing new. It is essentially what advertising is all about. But insights into how we make decisions have helped Silicon Valley shape the online world. And little wonder: their success depends on keeping us engaged.
As Silicon Valley became more influential, it also started to attract powerful friends in politics, starting with Barack Obama, who was regarded by people in Silicon Valley as a kindred spirit. Just like them, Obama believed that “we can solve problems if we would work together and take advantage of these new capabilities that are coming online,” says Aneesh Chopra, Obama’s first Chief Technology Officer. And by the time he won his second term, Obama was feted for his mastery of social media’s persuasive power.
“Facebook’s mission to connect the world went hand-in-hand with Obama’s policies promoting globalisation and free markets. And Facebook was seen to be improving the political process itself,” according to Bartlett. “But across the political spectrum, the race to find new ways to gain a digital edge was on. The world was about to change for Facebook.”
“That data-driven decisionmaking played a huge role in creating a second term for the 44th President and will be one of the more closely studied elements of the 2012 cycle. It’s another sign that the role of the campaign pros in Washington who make decisions on hunches and experience is rapidly dwindling, being replaced by the work of quants and computer coders who can crack massive data sets for insight. As one official put it, the time of ‘guys sitting in a back room smoking cigars, saying We always buy 60 Minutes’ is over. In politics, the era of big data has arrived.” — Michael Scherer in How Obama’s data crunchers helped him win
Bartlett subsequently takes us to the heart of Silicon Valley, Stanford University, where he meets Michal Kosinski, a psychologist specialised in psychometrics (the science of predicting psychological traits, such as personality) who is investigating just how revealing Facebook’s hoard of information about each of us could really be.
Kosinski explains how you can measure psychological traits using the digital footprints we leave behind on the internet. “An algorithm that can look at millions of people and […] hundreds of thousands […] of your likes can extract and utilise even those little pieces of information and combine it to a very accurate profile,” he tells Bartlett. “It can also use your digital footprint and turn it into a very accurate prediction of your intimate traits, like religiosity, political views, personality, intelligence, sexual orientation and a bunch of other psychological traits.”
This algorithm can also predict people’s political persuasions. People who score high on ‘openness to experience’ tend to be liberal; those who score low, more conservative. If you then use another algorithm to adjust the messages those people receive, this “obviously gives you a lot of power,” according to Kosinski.
It’s a powerful way of understanding people, but Bartlett “can’t help fearing that there is that potential, whoever has that power, whoever can control that model will have sort of unprecedented possibilities of manipulating what people think, how they behave, what they see, whether that’s selling things to people or how people vote, and that’s pretty scary too.”
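Kosinski’s idea can be pictured with a toy model. Everything below is invented for illustration: the page names, weights, and threshold are my assumptions, not his actual model, which is fitted on millions of real profiles rather than hand-coded.

```python
# Toy sketch: predicting one trait, "openness", from binary page likes.
# A real system would learn these weights from data; here they are made up.
PAGE_WEIGHTS = {
    "modern_art": 0.8,
    "travel_blog": 0.5,
    "philosophy": 0.7,
    "country_music": -0.4,
    "nascar": -0.6,
}

def openness_score(likes):
    """Average weight of the pages a user liked (0.0 if none are known)."""
    weights = [PAGE_WEIGHTS[p] for p in likes if p in PAGE_WEIGHTS]
    return sum(weights) / len(weights) if weights else 0.0

def predicted_leaning(likes, threshold=0.0):
    """High openness tends liberal, low tends conservative (per Kosinski)."""
    return "liberal" if openness_score(likes) > threshold else "conservative"

print(predicted_leaning(["modern_art", "philosophy"]))  # liberal
print(predicted_leaning(["nascar", "country_music"]))   # conservative
```

The point of the sketch is only that individually weak signals (single likes) aggregate into a usable prediction; with hundreds of thousands of likes per user, even small weights add up.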
Next, Bartlett tries to uncover how the expertise of Cambridge Analytica in personality prediction played a part in Donald Trump’s presidential win, and how his campaign exploited Silicon Valley’s social networks. In San Antonio, Texas, he meets Theresa Hong, Trump’s former Digital Content Director, to get an understanding of what they actually did — “who they were working with, who was helping them, what techniques they used.”
“Cambridge Analytica were using data on around 220 million Americans to target potential donors and voters. Armed with Cambridge Analytica’s revolutionary insights, the next step in the battle to win over millions of Americans was to shape the online messages they would see. Adverts were tailored to particular audiences, defined by data. Now the voters Cambridge Analytica had targeted were bombarded with adverts” delivered through Silicon Valley’s vast social networks, Bartlett tells us.
People from Facebook, YouTube and Google, who were working alongside Donald Trump’s digital campaign team, were “basically our kind of hands-on partners as far as being able to utilise the platform as effectively as possible,” Hong tells Bartlett. “When you’re pumping in millions and millions of dollars in these social platforms [The Trump campaign spent the lion’s share of its advertising budget, around $85 million, on Facebook], you’re going to get white-glove treatment, so they would send people […] to ensure that all our needs were being met.” She adds, “Without Facebook, we wouldn’t have won. I mean, Facebook really and truly put us over the edge. Facebook was the medium that proved most successful for this campaign.”
Trump’s digital strategy was built on Facebook’s effectiveness as an advertising medium. “It’s become a powerful political tool that’s largely unregulated.” Facebook didn’t want to meet him but “made it clear that, like all advertisers on Facebook, also political campaigns must ensure their ads comply with all applicable laws and regulations,” Bartlett tells us. “The company also said that no personally identifiable information can be shared with advertising, measurement or analytics partners unless people give permission.”
Off to London, where Bartlett meets Alexander Nix, Cambridge Analytica’s CEO, to find out how the company used psychographics to target voters for the Trump campaign.
When he asks Nix if he can understand why some people might find using big data and psychographics “a little bit creepy,” Nix replies, “No, I can’t. Quite possibly the opposite. I think the move away from blanket advertising towards ever more personalised communication, is a natural progression. I think it is only going to increase.” People should “understand the reciprocity that is going on here — you get points [in the case of a supermarket loyalty card], and in return, they gather your data on your consumer behaviour.”
But Bartlett wonders whether shopping and politics are really the same thing.
“The technology is the same,” according to Nix. “In the next ten years, the sheer volumes of data that are going to be available, that are going to be driving all sorts of things, including marketing and communications, is going to be a paradigm shift from where we are now. It’s going to be a revolution, and that is the way the world is moving. And, you know, I think, whether you like it or not, it is an inevitable fact.”
“Cambridge Analytica’s rise has rattled some of President Trump’s critics and privacy advocates, who warn of a blizzard of high-tech, Facebook-optimized propaganda aimed at the American public, controlled by the people behind the alt-right hub Breitbart News. Cambridge is principally owned by the billionaire Robert Mercer, a Trump backer and investor in Breitbart. Stephen K. Bannon, the former Breitbart chairman who is Mr. Trump’s senior White House counselor, served until last summer as vice president of Cambridge’s board. But a dozen Republican consultants and former Trump campaign aides, along with current and former Cambridge employees, say the company’s ability to exploit personality profiles — ‘our secret sauce,’ Mr. Nix once called it — is exaggerated.” — Nicholas Confessore and Danny Hakim in Data Firm Says ‘Secret Sauce’ Aided Trump; Many Scoff
“The election of Donald Trump was greeted with barely concealed fury in Silicon Valley. But Facebook and other tech companies had made millions of dollars by helping to make it happen. Their power as advertising platforms had been exploited by a politician with a very different view of the world. But Facebook’s problems were only just beginning. Another phenomenon of the election was plunging the tech titan into crisis,” says Bartlett.
“Fake news had provoked a storm of criticism over Facebook’s impact on democracy. [Mark Zuckerberg] claimed it was extremely unlikely fake news had changed the election’s outcome. But he didn’t address why it had spread like wildfire across the platform. Meet Jeff Hancock, a psychologist who has investigated a hidden aspect of Facebook that helps explain how the platform became weaponised in this way. It turns out the power of Facebook to affect our emotions is key, something that had been uncovered in an experiment the company itself had run in 2012. The news feeds of nearly 700,000 users were secretly manipulated so they would see fewer positive or negative posts.”
Hancock, who helped interpret these results, found that people who saw fewer negative emotion words in their feeds would write with less negative and more positive emotion in their own posts, and vice versa. “This is consistent with the emotional contagion theory,” he adds. “Basically, we were showing that people were writing in a way that was matching the emotion that they were seeing in the Facebook news feed.” Furthermore, “The more intense the emotion in content, the more likely it is to spread, to go viral. And it doesn’t matter whether it is sad or happy, like negative or positive, the more important thing is how intense the emotion is.”
“The process of emotional contagion helps explain why fake news has spread so far across social media,” Bartlett tells us. The problem with social networks, however, is that all information is treated equally. “[Y]ou have good, honest, accurate information sitting alongside and treated equally to lies and propaganda. And the difficulty for citizens is that it can be very hard to tell the difference between the two,” as Barack Obama also pointed out during a press conference with Germany’s Chancellor Angela Merkel.
“In an age where there is so much active misinformation, and it’s packaged very well, and it looks the same when you see it on a Facebook page or you turn on your television, if everything seems to be the same and no distinctions are made, then we won’t know what to protect. We won’t know what to fight for.” — Barack Obama
But data scientist Simon Hegelich has discovered an even darker side to the way Facebook is being manipulated. Hegelich has found evidence the debate about refugees on Facebook is being skewed by anonymous political forces. “One statistic among many used by Facebook to rank stories in your news feed is the number of likes they get.” In the example Hegelich gives, only a handful of people, 25 to be precise, each liked more than 30,000 comments over six months. These hyperactive accounts could be run by real people, or software.
“This is evidence that the number of likes on Facebook can be easily gamed as part of an effort to try to influence the prominence of anti-refugee content on the site,” says Bartlett.
When asked if this worries him, Hegelich answers, “It’s definitely changing [the] structure of public opinion. Democracy is built on public opinion, so such a change definitely has to change the way democracy works.”
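The kind of analysis Hegelich describes can be sketched in a few lines: count likes per account over the period and flag the outliers. The account names, event data, and threshold below are made up for illustration; his actual methodology is not detailed in the programme.

```python
# Hypothetical sketch: flag accounts whose like activity is far beyond
# what a human plausibly produces in the observed period.
from collections import Counter

def hyperactive_accounts(like_events, threshold=30_000):
    """like_events: iterable of account ids, one entry per like.
    Returns the accounts whose like count exceeds the threshold."""
    counts = Counter(like_events)
    return {acct: n for acct, n in counts.items() if n > threshold}

# Illustrative data: two accounts dominating, a thousand ordinary users.
events = (["bot_a"] * 35_000 + ["bot_b"] * 31_000
          + ["user_%d" % i for i in range(1000)])
flagged = hyperactive_accounts(events)
print(flagged)  # {'bot_a': 35000, 'bot_b': 31000}
```

In this toy data, two accounts supply some 66,000 of the 67,000 likes, which is exactly the skew Hegelich warns about: a ranking signal that treats every like equally is trivial to dominate.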
According to Facebook, “they are working to disrupt the economic incentives behind false news, removing tens of thousands of fake accounts, and building new products to identify and limit the spread of false news.” But it is still trying to hold the line, based on Section 230, that it isn’t a publisher.
“Facebook now connects more than two billion people around the world, including more and more voters in the West,” Bartlett tells us. “In less than a decade, it has become a platform that has dramatic implications for how our democracy works.”
“Old structures of power are falling away. Social media is giving ordinary people access to huge audiences. And politics is changing as a result” across the entire spectrum as shown by The Canary, an online political news outlet that supported Labour candidate Jeremy Corbyn during the 2017 elections. “During the campaign, their stories got more than 25 million hits on a tiny budget.” About 80 percent of its readership comes through Facebook, says Kerry-Anne Mendoza, The Canary’s Editor in Chief.
Its presentation of pro-Corbyn news is tailored to social media and trades on emotion. “We’re trying to have a conversation with a lot of people, so it is on us to be compelling,” Mendoza says. “Human beings work on facts, but they also work on gut-instincts. They work on emotions, feelings and fidelity and community. All of these issues.” When Bartlett points out that The Canary’s headlines are very “clickbait-y,” she says, “Of course [the headlines] are there to get clicks. We don’t want to have a conversation with ten people. You can’t change the world talking to ten people.”
Bartlett’s finishing words to an intriguing series …
“The tech gods are giving all of us the power to influence the world. Social media’s unparalleled power to persuade, first developed for advertisers, is now being exploited by political forces of all kinds. Grassroots movements are regaining their power, challenging political elites. Extremists are discovering new ways to stoke hatred and spread lies. And wealthy political parties are developing the ability to manipulate our thoughts and feelings using powerful psychological tools, which is leading to a world of unexpected political opportunity and turbulence.
I think the people that connected the world really believed that somehow, just by us being connected, our politics would be better. But the world is changing in ways that they never imagined and they are probably not happy about anymore. But in truth, they are no more in charge of this technology than any of us are now.
Silicon Valley’s philosophy is called disruption. Breaking down the way we do things and using technology to improve the world. In this series, I have seen how sharing platforms like Uber and Airbnb are transforming our cities. And how automation and artificial intelligence threaten to destroy millions of jobs. Now, the technology to connect the world unleashed by a few billionaire entrepreneurs is having a dramatic influence on our politics.
The people who are responsible for building this technology, for unleashing this disruption onto all of us, don’t ever feel like they are responsible for the consequences of any of that. They retain this absolute religious faith that technology and connectivity is always going to make things turn out for the best. And it doesn’t matter what happens, it doesn’t matter how much that’s proven not to be the case, they still believe.”
Eliminating the Human
“I have a theory that much recent tech development and innovation over the last decade or so has an unspoken overarching agenda. It has been about creating the possibility of a world with less human interaction. […]
The consumer technology I am talking about doesn’t claim or acknowledge that eliminating the need to deal with humans directly is its primary goal, but it is the outcome in a surprising number of cases. I’m sort of thinking maybe it is the primary goal, even if it was not aimed at consciously. Judging by the evidence, that conclusion seems inescapable.
[…] I am simply noticing a pattern and wondering if, in recognizing that pattern, we might realize that it is only one trajectory of many. There are other possible roads we could be going down, and the one we’re on is not inevitable or the only one; it has been (possibly unconsciously) chosen.”
From an engineer’s mind-set, Byrne writes, “Human interaction is often perceived […] as complicated, inefficient, noisy, and slow. Part of making something ‘frictionless’ is getting the human part out of the way. The point is not that making a world to accommodate this mind-set is bad, but that when one has as much power over the rest of the world as the tech sector does over folks who might not share that worldview, there is the risk of a strange imbalance. The tech world is predominantly male — very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible for the sake of ‘simplicity and efficiency’ — do the math, and there’s the future.”
“We have evolved as social creatures, and our ability to cooperate is one of the big factors in our success. I would argue that social interaction and cooperation, the kind that makes us who we are, is something our tools can augment but not replace.”
“For us as a society, less contact and interaction — real interaction — would seem to lead to less tolerance and understanding of difference, as well as more envy and antagonism. As has been in evidence recently, social media actually increases divisions by amplifying echo effects and allowing us to live in cognitive bubbles. We are fed what we already like or what our similarly inclined friends like (or, more likely now, what someone has paid for us to see in an ad that mimics content). In this way, we actually become less connected — except to those in our group.
I’d argue there is a danger to democracy as well. Less interaction, even casual interaction, means one can live in a tribal bubble — and we know where that leads.”
“With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents, and unexpected connections and intuitions. Interaction, cooperation, and collaboration with others multiplies those opportunities.”
“We’re a social species — we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot alone. In his book Sapiens, Yuval Harari claims this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by an ability to believe in ‘fictions’ such as nations, money, religions, and legal institutions. Machines don’t believe in fictions — or not yet, anyway. That’s not to say they won’t surpass us, but if machines are designed to be mainly self-interested, they may hit a roadblock. And in the meantime, if less human interaction enables us to forget how to cooperate, then we lose our advantage.
Our random accidents and odd behaviors are fun — they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation, and we are less complete as people and as a society.
‘We’ do not exist as isolated individuals. We, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive.”
And this …
According to Christopher Haley in Silicon Valley’s other diversity problem, diversity can strengthen innovation. But the evidence about what types of diversity promote innovation, and in what circumstances, turns out to be rather complicated. And not all of it is positive: some research finds that diverse organisations may suffer from poorer communication and weaker trust, with the net effect of hindering innovation.
“Where innovation is enhanced, however, some types of diversity seem to matter more than others. For instance, at least one study found that age diversity was irrelevant to innovation. The same study also concluded that, while gender diversity had a positive effect on innovation, it was less significant than differences in industry background or country of origin. Furthermore, the benefits seem to be more apparent in some types of company than others: factors such as organisational size, complexity of operations, and seniority of staff seem to influence whether or not there are innovation gains from gender diversity.”
“If cognitive diversity is what we need to succeed in dealing with new, uncertain, and complex situations, we need to encourage people to reveal and deploy their different modes of thinking. We need to make it safe to try things multiple ways. This means leaders will have to get much better at building their team’s sense of psychological safety.” — From: Teams Solve Problems Faster When They’re More Cognitively Diverse by Alison Reynolds and David Lewis
“Importantly, researchers such as Scott E Page suggest that the visible elements of diversity — identity characteristics such as race, gender, sexual orientation, age or religion — are really proxies for (and often determinants of) the core thing that actually matters: cognitive diversity or viewpoint diversity. According to researchers like Page and others, what is important in many circumstances is that members of a group think differently, leading them to bring different perspectives, try a wider variety of approaches, test a wider range of potential solutions and, in some cases, cancel each other’s biases more effectively.”
“Even as its own employees decried [Facebook’s] role in Donald Trump’s election — a role about which government investigators have also raised questions — Mark Zuckerberg contended, however reluctantly, that the company was still, ultimately, a neutral tech platform,” writes Cale Guthrie Weissman in The Myth Of The Neutral Silicon Valley Platform Is Crumbling.
“Now, in the year of our Lord 2017, things have changed. But it wasn’t until this week that we glimpsed a deeper shift afoot in Silicon Valley. In the wake of the deadly Charlottesville rallies, where white supremacists were able to come together primarily online, technology platforms are finally cracking down.”
Does this mean that Silicon Valley companies have an ideology? “[P]robably not,” says Guthrie Weissman. “These are necessary, moral moves — and they are surely driven by a marked uptick in public outcry. But the platforms’ recent decisions do show that the techno-utopian idea of a neutral platform can create a dystopia for others. More importantly, it does away with the pesky neutrality science fiction, and forces these companies to take more responsibility over how their carefully designed platforms work, and how they’re actually used.”
In the wake of last year’s US presidential election, “Facebook dismantled a popular anonymous discussion board for employees that had become a forum for conservative political debate that sometimes degenerated into racist or sexist comments, people familiar with the matter said, a rare move to censor speech internally,” writes Deepa Seetharaman in Facebook Shut Down Employee Chat Room Over Harassing Messages.
In a statement, Facebook’s head of people, Lori Goler, said, “A cornerstone of our culture is being open. The FB Anon internal Facebook group violated our terms of service, which require people who use Facebook (including our employees) to use an authentic identity on our platform.”
“The disabling of the board illustrates Facebook’s struggle to cultivate open, freewheeling debate,” Seetharaman argues, “while still following company rules of decency to not alienate employees with racist and sexist views. The internal challenges mirror the social-media company’s difficulties in policing speech and extremist views on its broader platform, used by more than two billion people a month.
Some employees disagreed with Facebook’s move, even if they found some views expressed on FB Anon offensive. There was ‘lots of information that you would not have had otherwise,’ one of the people said.
The clampdown on the anonymous forum echoes the recent controversy at […] Google after an engineer was fired for suggesting in a lengthy memo that men are better suited for tech jobs than women. The engineer, James Damore, has said he felt Google suppressed discussion of his views.”
“Humanity has a method for trying to prevent new technologies from getting out of hand: explore the possible negative consequences, involving all parties affected, and come to some agreement on ways to mitigate them,” writes Mark Buchanan in How Technology Might Get Out of Control. “New research, though, suggests that the accelerating pace of change could soon render this approach ineffective.”
“People use laws, social norms and international agreements to reap the benefits of technology while minimizing undesirable [effects]. But what if technology becomes so complex and starts evolving so rapidly that humans can’t imagine the consequences of some new action? This is the question that [Dimitri Kusnezov and Wendell Jones] explore in a recent paper. Their unsettling conclusion: The concept of strategic equilibrium [in game theory this is called, after mathematician John Nash, the ‘Nash equilibrium’ — a set of strategies that, once discovered by a set of players, provides a stable fixed point at which no one has an incentive to depart from their current strategy] as an organizing principle may be nearly obsolete.
[…] Below a certain level of complexity, the Nash equilibrium is useful in describing the likely outcomes. Beyond that lies a chaotic zone where players never manage to find stable and reliable strategies, but cope only by perpetually shifting their behaviors in a highly irregular way. What happens is essentially random and unpredictable.”
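The instability the paper points to can be illustrated with the simplest possible case: in rock-paper-scissors, naive best-response play never settles, because each player keeps switching to beat the other’s last move. (This toy simulation is my illustration, not the paper’s model; the paper’s ‘chaotic zone’ concerns far more complex games.)

```python
# Rock-paper-scissors has no stable pure-strategy point: its only Nash
# equilibrium is the mixed strategy of playing each move 1/3 of the time.
# Simultaneous best-response dynamics therefore cycle instead of converging.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def best_response(opponent_move):
    """The move that beats the opponent's last move."""
    return BEATS[opponent_move]

def play(rounds, a="rock", b="scissors"):
    """Both players simultaneously best-respond to the previous round."""
    history = []
    for _ in range(rounds):
        a, b = best_response(b), best_response(a)
        history.append((a, b))
    return history

hist = play(12)
assert hist[0] == hist[6] and hist[1] == hist[7]    # a six-step cycle
assert all(h != g for h, g in zip(hist, hist[1:]))  # play never settles
print(hist[:6])
```

Here the cycle is at least periodic and predictable; the claim in the paper is that beyond a certain complexity even that regularity disappears, leaving effectively random shifting behaviour.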
In their paper, the authors argue that “emerging technologies — especially computing, software and biotechnology such as gene editing — are much more likely to fall into the unstable category. In these areas, disruptions are becoming bigger and more frequent as costs fall and sharing platforms enable open innovation. Hence, such technologies will evolve faster than regulatory frameworks — at least as traditionally conceived — can respond.”
“One clear implication is that it’s probably a mistake to copy techniques used for the more slowly evolving and less widely available technologies of the past. This is often the default approach, as illustrated by proposals to regulate gene editing techniques. Such efforts are probably doomed in a world where technologies develop thanks to the parallel efforts of a global population with diverse aims and interests. Perhaps future regulation will itself have to rely on emerging technologies, as some are already exploring for finance.
We may be approaching a profound moment in history, when the guiding idea of strategic equilibrium on which we’ve relied for 75 years will run up against its limits. If so, regulation will become an entirely different game.”
Circular windows dominate the staggered brick facades of an apartment block in Melbourne designed by BKK Architects. The six porthole windows provide the focal point for the Cirqua Apartments, which are made up of staggered grey brick boxes that break up the bulk of the development.
“The building facades are highly articulated to reduce the overall building’s mass and present a smaller scale,” explained the architects. “The design draws on the materiality and expression of local, historical housing types that are reinterpreted in a contemporary manner.”
Also located in the neighbourhood is a timber-framed extension by Austin Maynard Architects, while in an adjacent suburb, the same studio used scalloped, brick-shaped and diamond shingles to cover a multigenerational residence. (Source: Dezeen)
“The people who are responsible for building this technology, for unleashing this disruption onto all of us, don’t ever feel like they are responsible for the consequences of any of that. They retain this absolute religious faith that technology and connectivity is always going to make things turn out for the best. And it doesn’t matter what happens, it doesn’t matter how much that’s proven not to be the case, they still believe.” — Jamie Bartlett in Secrets of Silicon Valley, The Persuasion Machine
“Devotees of freedom and liberalism do not dwell as much on ‘community.’ Except to urge that everybody be included, and treated fairly. But beliefs about ‘community’ have always been vital to human societies. In many ways, the last 200 years have been battles about how local communities try to adapt or fight back against growing global pressures — especially economic and cultural, but often political and even military. So much of the divide between anti-liberals or liberals is cultural. Little has to do with ‘policy’ preferences. Mass politics are defined around magnetic poles of cultural attraction.” — Philip Zelikow in Is the World Slouching Toward a Grave Systemic Crisis?