Random finds (2018, week 52) — On the world as Everything Store, Google and the price of being connected, and busyness

Mark Storm
Dec 29, 2018 · 24 min read
Franco-Brazilian architect Elizabeth de Portzamparc has completed a museum of Roman history in the French city of Nîmes featuring glazed facades intended to recall the draped fabric of a toga. (Photograph by Wade Zimmerman via Dezeen)

“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne

Random finds is a weekly curation of my tweets and, as such, a reflection of my fluid and boundless curiosity.

I am also building a collection of little pieces of wisdom, art, music, books and other things that have made me stop and think. #TheInfiniteDaily

This week: How Amazon turns us into constant consumers; how Google is helping the state spy on us; how a historical perspective may cure us of busyness; whether humans are truly blind to the obvious; questioning truth, reality and the role of science; Sidewalk Labs and the Jane Jacobs of the smart city; Frank Lloyd Wright; and, finally, Earthrise.

The world as Everything Store

“The mall was, as Ian Bogost noted in When Malls Saved the Suburbs From Despair, where ‘consumerism roared and swelled but, inevitably, remained contained.’ Freeing consumerism from that containment was one of the internet’s earliest applications, streamlining the process of shopping at home, and later, on phones,” Drew Austin writes in The Constant Consumer. “Recent technologies have enabled the role of customer to be fused with the newer role of user, who inhabits an entire system rather than a specific transaction.”

In Design as Participation, Kevin Slavin describes how the experience of app-based food delivery narrows one’s perspective. “For users,” Slavin writes, “this is what it means to be at the center: to be unaware of anything outside it.” According to Austin, the minimal interfaces of these apps, which require little more than the push of a button to order food, conceal the labor and logistical sophistication that make it possible. Understanding the messy complexity that supports this “simplified solipsism” wouldn’t help people to order more food, “so the user experience excludes it,” Austin argues.

“Amazon, as much as any single company, is transforming the environments in which we live and embedding itself within the fabric of daily existence. Beyond individual experience, those changes also manifest themselves in the physical environment. Many physical retail stores have been rendered obsolete as Amazon and other online retailers started undercutting them on price and offering a wider selection. (Bookstores experienced this first but it eventually spread to almost every form of retail.) Sidewalks and building lobbies have become staging areas for packages, with delivery vehicles exacerbating traffic and obstructing bike lanes as piles of brown Amazon boxes increasingly take up space. As Amazon and food delivery apps eliminate some of the most common reasons to leave one’s house, one wonders what sort of neighborhood life will be sustainable in affluent urban areas.

In light of Amazon’s all-encompassing ambitions, the strategy behind several of the company’s most important product initiatives — Alexa, Amazon Prime, physical retail stores (including Amazon Go and Whole Foods), and Amazon Key — becomes clearer. These products seek to redefine what being a customer means by immersing us more completely within the Amazon universe. Formerly, being a customer was a role one assumed upon physically entering a store or ordering something from a company. Amazon promises to create a newer type of environment, a hybrid of the digital and the physical, that lets us permanently inhabit that role: the world as Everything Store, which we are always inside.”

“If the customer is always right, then you are never wrong when you are consuming. No contemporary company has offered that Faustian bargain more broadly and aggressively than Amazon,” Drew Austin writes in The Constant Consumer. The cashier-less Amazon Go store next to the company’s headquarters in Seattle. (Photograph by Jason Del Rey)

Amazon’s “‘customer obsession’ is a happier narrative for this dominance than one of aggressive market capture, anti-competitive tactics, and ruthless labor exploitation. Like ‘support the troops,’ or ‘what about the children,’ caring about the customer seems like an impregnable position to take. It’s a more specific iteration of Google’s ‘Don’t Be Evil’: How could a consumer-focused company be evil, when we are all consumers? What could be wrong with the company being focused on our needs?” Austin wonders.

Amazon’s constant praise of the customer implies that we are all already customers and nothing more — that we should understand ‘consumer’ as our core identity. And this, Austin argues, is the fundamental problem. It is part of the company’s “intent to disarm us, to invite us to enter its universe of deals and recommendations and to internalize the status of permanent customer — and specifically, Amazon’s customer. […] Amazon’s most important product is how it creates and refines a world in which the Everything Store converges with just plain everything and, being ubiquitous, becomes invisible.”

But very few people dream of being customers. Most of us want to be creators, friends, neighbors or citizens. Our role as customer used to be temporary and specific — buying something from a seller — and not an aspirational identity. So what happened?

“If being a customer feels so great, as the past century has trained us, what happens when the consumer experience encompasses us so completely that we forget we are customers at all?” — Selfridges Building in Birmingham, by Future Systems, “is a grand gesture of a building and a testament to the vision and courage of client and design team alike.”

According to Austin, industrialization and mass production in the 19th century yielded an unprecedented flood of goods. “Commerce was suddenly no longer constrained by supply but demand. Stimulating consumption became crucial; making customer a primary and perpetual identity was a key solution. To achieve this, retailers worked to make feelings of agency and significance available to people, but only on condition of being a customer. This approach is articulated by a slogan often attributed to department store magnate Harry Gordon Selfridge in 1909: ‘The customer is always right.’”

“Rather than sell products on their basic utility, advertising began to orient itself toward identity, selling the idea that individuals could reveal their unique selves through purchases. […] wanting more things corresponded to greater personal depth. Being a customer gave one access to not only a cornucopia of goods but also the rich recesses of one’s psyche,” Austin writes.

In A Brief History of Neoliberalism, David Harvey “diagnoses a late 20th-century shift toward ‘a market-based populist culture of differentiated consumerism and individual libertarianism.’ In this culture, at the structural level, businesses must cater to the customer identity to survive. In lieu of tolerating moderate inconvenience, higher prices, and some potentially awkward human interactions in order to support local businesses, the logic of efficient and limitless customer service offers fast food chains and big-box stores, which offer cheaper goods and routinized retail interactions. These, in turn, are now in the process of being supplanted by Amazon.”

“The transition toward consumerism across so many domains exemplifies a phenomenon that writer Sarah Perry calls a tiling structure, a system that ‘tiles the world with copies of itself.’ Tiling structures flourish because they solve certain problems well enough that they become more or less mandatory, and block alternate solutions.” They have also “introduced customer-service logic to cultural spaces that were once sheltered from markets. Communities based on common interests, shared identity, or physical proximity, from neighborhoods to political groups to religious institutions, must now respond to their constituents’ increased mobility and access to information by treating them like the empowered customers […] — customers who will leave if they find something better elsewhere. Individualized, personal-identity-based appeals replace collective orientations. As a tiling structure, this shift occurs because it works for the group implementing it, not because it is best for everyone,” Austin notes.

“As globalized platform consumerism erases more of what preceded it, replacing intricate social arrangements with individual links to large impersonal systems, it’s harder to remember what we have lost. Less and less equipped to imagine ourselves as anything but customers or users within those systems, we adopt the desires that companies like Amazon can best satisfy: convenience, choice, and frictionless consumption. These developments may be replacing another consumer system that wasn’t necessarily worth preserving itself, but beyond those visible changes, we face a new risk: becoming users offline, in the physical world. The more Amazon can control our experience of that environment, the less we will care what is outside the system it creates.”

According to Austin, “Amazon’s true objective, it seems, is a full infiltration of the world rather than ongoing refinement of a walled garden confined to the internet. Instead of scaring its customers with its totalizing ambition, the company has successfully marketed this arrangement as desirable. To permanent customers, further gains in convenience, choice, price, and delivery speed are pure benefits. If life is meant to be a series of consumer experiences, they might as well happen as seamlessly as possible.”

In realizing this, “Amazon faces an obstacle: If being a customer feels so great, as the past century has trained us, what happens when the consumer experience encompasses us so completely that we forget we’re customers at all? The minor friction of 1-click ordering pleasantly reminds us how easy it is to be one of Amazon’s empowered customers, the object of the company’s obsession. Will we remember that feeling if ‘smart’ devices can effectively read our minds and our desires subtly manifest themselves in our homes?

This quandary returns us to the definition of user. A user isn’t just an evolved customer but a qualitative transformation of that role: one who occupies a system and creates value for the system’s owner by merely being there, just as Google and Facebook’s users generate valuable data by partaking of their services. Those platforms, for all their seeming omnipresence, haven’t figured out how to expand beyond their digital containers. This is Amazon’s ambitious vision: The world is its platform, and instead of being customers, we will just become users whether we are looking at screens or not.”

Google and the price of being connected

“We knew that being connected had a price — our data. But we didn’t care. Then it turned out that Google’s main clients included the military and intelligence agencies,” writes Yasha Levine in Google’s Earth: how the tech giant is helping the state spy on us, an extract from Surveillance Valley: The Secret Military History of the Internet (Icon Books, 2019).

“Google has pioneered a whole new type of business transaction. Instead of paying for its services with money, people pay with their data. And the services it offers to consumers are just the lures, used to grab people’s data and dominate their attention — attention that is contracted out to advertisers. Google has used data to grow its empire. […] Meanwhile, other internet companies depend on Google for survival. Snapchat, Twitter, Facebook, Lyft and Uber have all built multi-billion-dollar businesses on top of Google’s ubiquitous mobile operating system. As the gatekeeper, Google benefits from their success as well. The more people who use their mobile devices, the more data it gets on them,” Levine writes.

“One of the things that eventually happens … is that we don’t need you to type at all,” Eric Schmidt, Google’s former CEO, said in 2010. “Because we know where you are. We know where you’ve been. We can more or less guess what you’re thinking about.”

A scary thought, says Levine, especially “considering that Google is no longer a cute startup but a powerful global corporation with its own political agenda and a mission to maximise profits for shareholders.”

As “Google grew to dominate the consumer internet, a second side of the company emerged, one that rarely got much notice: Google the government contractor.” It turned out that the same platforms and services it deploys to monitor people’s lives and grab their data could be put to use running huge swaths of the US government, including the military and spy agencies.

The key to this transformation, Levine writes, was the acquisition of Keyhole. This San Francisco-based startup had created a programme “that stitched satellite images and aerial photographs into seamless 3D computer models of the Earth that could be explored as if they were in a virtual reality game world. Keyhole gave intelligence analysts, field commanders, air force pilots and others the kind of capabilities we take for granted today when we use digital mapping services on our computers and smartphones to look up restaurants, cafes, museums, traffic or subway routes.”

In 2004, Google bought Keyhole, buying out its investors, among them the CIA, which had poured an unknown amount of money into the company. Keyhole was absorbed into Google’s growing internet applications platform and reborn as Google Earth.

“The purchase of Keyhole was a milestone for Google, marking the moment the company stopped being a purely consumer-facing internet company and began integrating with the US government.”

Google’s entry into the government market makes sense, says Levine. “It’s a huge market [by 2017, the federal government was spending $90bn a year on information technology] — one in which Google seeks to maintain a strong presence. And its success has been all but guaranteed. Its products are the best in the business.”

“Even as it expanded into a transnational multi-billion-dollar corporation, Google had managed to retain its geekily innocent ‘Don’t Be Evil’ image. So while Google’s PR team did its best to keep the company wrapped in a false aura of altruism, company executives pursued an aggressive strategy to become the Lockheed Martin of the internet age.” (Composite by Alamy/Guardian Design)

“Google didn’t just work with intelligence and military agencies, but also sought to penetrate every level of society, including civilian federal agencies, cities, states, local police departments, emergency responders, hospitals, public schools and all sorts of companies and nonprofits.” For example, “in 2016, New York City tapped Google to install and run free wifi stations across the city. California, Nevada and Iowa, meanwhile, depend on Google for cloud computing platforms that predict and catch welfare fraud. Meanwhile, Google mediates the education of more than half of America’s public school students,” writes Levine.

“This mixing of military, police, government, public education, business and consumer-facing systems — all funnelled through Google — continues to raise alarms. Lawyers fret over whether Gmail violates attorney-client privilege. Parents wonder what Google does with the information it collects on their kids at school. What does Google do with the data that flows through its systems? Is all of it fed into Google’s big corporate surveillance pot? What are Google’s limits and restrictions? Are there any? In response to these questions, Google offers only vague and conflicting answers.

Of course, this concern isn’t restricted to Google. Under the hood of most other internet companies we use every day are vast systems of private surveillance that, in one way or another, work with and empower the state. On a higher level, there is no real difference between Google’s relationship with the US government and that of these other companies. It is just a matter of degree. The sheer breadth and scope of Google’s technology make it a perfect stand-in for the rest of the commercial internet ecosystem.”

According to Levine, “Google’s size and ambition make it more than a simple contractor. It is frequently an equal partner that works side by side with government agencies, using its resources and commercial dominance to bring companies with heavy military funding to market.” One of those companies is Jigsaw, which uses technology to tackle thorny foreign geopolitical problems, ranging from terrorism to censorship and cyber warfare. In an article for The Guardian, Julia Powles gave two reasons for its existence. “At its best,” she writes, “Jigsaw would […] make available information — particularly aspects of its own, internal, zealously-guarded data stores — to researchers and public interest organisations. But there’s another reading of this development — one that is borne out by the history of Google’s international policy interventions and its uncomfortably close ties with Washington. And it’s one that, jarringly, harks back to the origins of jigsaws in the late eighteenth century, as dissected maps of the British empire — cultural objects of imperial ideology.”

“What is the missing piece in the coverage of Jigsaw? Only the most important thing: that human development and flourishing is too important, too complex, and too culturally diverse to be left to profit-driven companies, acting of their own initiative,” Julia Powles writes in Google’s Jigsaw project has new ideas, but an old imperial mindset. (Photograph by SPIN)

“Jigsaw seemed to blur the line between public and corporate diplomacy, and at least one former state department official accused it of fomenting regime change in the Middle East. ‘Google is getting [White House] and state department support and air cover. In reality, they are doing things the CIA cannot do,’ wrote Fred Burton, an executive at global intelligence platform Stratfor and a former intelligence agent at the security branch of the state department. But Google rejected the claims of its critics. ‘We’re not engaged in regime change,’ Eric Schmidt [Google’s former CEO] told Wired. ‘We don’t do that stuff. But if it turns out that empowering citizens with smartphones and information causes changes in their country … you know, that’s probably a good thing, don’t you think?’” Levine writes.

Jigsaw’s work with the state department has raised eyebrows, but its function is a mere taste of the future if Google gets its way. In a rare interview with the Financial Times, Google co-founder Larry Page looked a hundred years into the future and saw Google at the centre of progress. “The societal goal is our primary goal. We’ve always tried to say that with Google. Some of the most fundamental questions people are not thinking about … how do we organise people, how do we motivate people? It’s a really interesting problem — how do we organise our democracies?” he said. “We could probably solve a lot of the issues we have as humans.”

Our obsession with busyness

Obsessed with being busy? “A historical perspective may help you out,” says Clare Holdsworth, Professor of Social Geography at Keele University.

“Being busy has become so ubiquitous it has come to mean everything and nothing. As more people identify with the problem of busyness, some of us seek advice from time management experts about how to manage our busy lives. But data suggests that we are not as busy as we think we are. Social scientists who specialise in researching everyday time-use can compare trends in how we spend time from the 1960s onwards. The UK expert on time use, Jonathan Gershuny, claims that actual time spent in work has not increased since the 1960s — but what we mean by busyness has changed over time. In his view, busyness has become a badge of honour.”

But according to Holdsworth, we should not assume that tensions about how to spend time are only relevant in the modern era. “The diverse ways in which societies of the past negotiated their use of time is something I am discovering as part of my research on the social life of ‘busyness,’” she writes.

“The busy person sows and harvests and rests upon these gains. But what is the purpose of this rest? Only to begin once more…nothing is gained by the cycle other than rest from the labor it requires.” — Søren Kierkegaard

“Deliberations about how time should be spent are fundamental in philosophy and religion. The division between work and leisure is a key theme of both ancient Greek and early Judeo-Christian philosophy. Aristotle, for example, argued that virtue was obtainable through contemplation, and not through endless activity,” Holdsworth writes.

But the tension between activity and reflection is not one that can be easily resolved. It takes on different forms at different historical times. “One of the most celebrated accounts of this is the German sociologist Max Weber’s explanation of the diligence of the Protestant work ethic and its importance in the development of capitalism. Weber describes how Calvinists regarded work not as a punishment for sin, but as an expression of virtue and closeness to God.”

“Nobody works out the value of time: men use it lavishly as if it cost nothing. We have to be more careful in preserving what will cease at an unknown point.” — Seneca in On the Shortness of Life

“Returning to contemporary times, it is clear that our obsession with busyness is different from the Protestant work ethic: we find it difficult to balance hard work with our domestic chores and taxing social calendars. No longer do we counterpoise intensive work activity with time for repose — now ‘rest’ is often just as hard work. […]

The difference in how time is managed in the 21st century compared with the 19th is that, for many people, discipline around time is not imposed through organisational structures. The flexibility of contemporary work, including when and where we work, is unlike the imposition of clock-time discipline during the industrial revolution. Instead, we are expected to take control of our own lives, and managing this responsibility and our relationships with others is busy work.”

According to Holdsworth, the solution to our busyness may not necessarily be about how we manage our time, but rather how we manage our relationships with others: “The biblical commandment to mark time well is a collective act, not an individual one.” This, she concludes, is an important lesson about time we can take from the past.

And also this …

In his book Thinking, Fast and Slow (2011), Daniel Kahneman argues that the ‘Gorillas in Our Midst’ experiment reveals something fundamental about the human mind, namely, that humans are “blind to the obvious, and that we also are blind to our blindness.” The notion of blindness captures much of the current zeitgeist in the cognitive sciences, but it also “fuels excitement about artificial intelligence, especially its capacity to replace flawed and error-prone human judgment,” says Teppo Felin in The fallacy of obviousness.

But are humans truly blind to the obvious?

No, says recent research. It suggests that this claim — so important to much of the cognitive sciences, behavioural economics, and now AI — is wrong.

But how could such an influential claim get it so wrong?

“From the perspective of psychophysics, obviousness — or as it is called in the literature, ‘salience’ — derives from the inherent nature or characteristics of the environmental stimuli themselves: such as their size, contrast, movement, colour or surprisingness. In his Nobel Prize lecture in 2002, [Daniel] Kahneman calls these ‘natural assessments.’ And from this perspective, yes, the gorilla indeed should be obvious to anyone watching the clip. But from that perspective, any number of other things in the clip […] should then also be obvious.” — Teppo Felin in The fallacy of obviousness (Photograph by Richard Saker)

Felin says it is hard to argue with the findings of the gorilla experiment itself. Most people who watch the clip miss the gorilla, but this doesn’t necessarily mean that humans are ‘blind to the obvious.’ For viewers preoccupied with counting how often a basketball is passed between people, missing the gorilla is hardly surprising. In retrospect, the gorilla is prominent and obvious. “But the very notion of visual prominence or obviousness is extremely tricky to define scientifically, as one needs to consider relevance or, to put it differently, obviousness to whom and for what purpose?” Felin writes.

But if the gorilla experiment doesn’t illustrate that humans are blind to the obvious, what exactly does it illustrate? What is an alternative interpretation, and what does it tell us about perception, cognition and the human mind?

“The alternative interpretation says that what people are looking for — rather than what people are merely looking at — determines what is obvious. Obviousness is not self-evident. Or as Sherlock Holmes said: ‘There is nothing more deceptive than an obvious fact.’ This isn’t an argument against facts or for ‘alternative facts,’ or anything of the sort. It’s an argument about what qualifies as obvious, why and how. See, obviousness depends on what is deemed to be relevant for a particular question or task at hand. Rather than passively accounting for or recording everything directly in front of us, humans — and other organisms for that matter — instead actively look for things. The implication (contrary to psychophysics) is that mind-to-world processes drive perception rather than world-to-mind processes. The gorilla experiment itself can be reinterpreted to support this view of perception, showing that what we see depends on our expectations and questions — what we are looking for, what question we are trying to answer.

At first glance that might seem like a rather mundane interpretation, particularly when compared with the startling claim that humans are ‘blind to the obvious.’ But it’s more radical than it might seem. This interpretation of the gorilla experiment puts humans centre-stage in perception, rather than relegating them to passively recording their surroundings and environments. It says that what we see is not so much a function of what is directly in front of us (Kahneman’s natural assessments), or what one is in camera-like fashion recording or passively looking at, but rather determined by what we have in our minds, for example, by the questions we have in mind. People miss the gorilla not because they are blind, but because they were prompted — in this case, by the scientists themselves — to pay attention to something else. The question — ‘How many basketball passes’ (just like any question: ‘Where are my keys?’) — primes us to see certain aspects of a visual scene, at the expense of any number of other things.”

“[A]s Albert Einstein put it in 1926: ‘Whether you can observe a thing or not depends on the theory which you use. It is the theory which decides what can be observed.’ The same applies whether we are talking about chest-thumping gorillas or efforts to probe the very nature of reality.” — Teppo Felin in The fallacy of obviousness (Photograph by Yousuf Karsh, 1948)

Felin’s “central concern is that the current obsession with human blindness and bias — endemic to behavioural economics and much of the cognitive, psychological and computational sciences — has caused scientists themselves to be blind to the more generative and creative aspects of human nature and the mind. Yes, humans do indeed miss many ‘obvious’ things, appearing to be blind, as Kahneman and others argue. But not everything that is obvious is relevant and meaningful. Thus human blindness could be seen as a feature, not a bug.”

Adding, “Humans do a remarkable job of generating questions, expectations, hypotheses and theories that direct their awareness and attention toward what is relevant, useful and novel. And it is these — generative and creative — qualities of the human mind that deserve further attention. After all, these and related aspects of mind are surely responsible for the significant creativity, technological advances, innovation and large-scale flourishing that we readily observe around us. Of course, humans continue to make mistakes, and any number of small- and large-scale problems and pathologies persist throughout the world. But understanding the more generative and creative capacities of the human mind deserves careful attention, as insights from this work can in turn help to solve additional problems, and lead to further technological advances and progress.”

Quanta Magazine published an extensive interview with Michela Massimi, the recent recipient of the Wilkins-Bernal-Medawar Medal. In her prize speech, she defended both science and the philosophy of science from accusations of irrelevance. Massimi argues that neither enterprise should be judged in purely utilitarian terms, and asserts that they should be allies in making the case for the social and intellectual value of the open-ended exploration of the physical world.

When asked by Philip Ball whether science has motivated new philosophical questions, Massimi says:

“I think that again we should resist the temptation of assessing progress in philosophy in the same terms as progress in science. To start with, there are different views about how to assess progress in science. Is it defined by science getting closer and closer to the final true theory? Or in terms of increased problem-solving? Or of technological advance? These are themselves philosophical unsolved questions.

The received view up to the 1960s was that scientific progress was to be understood in terms of producing theories that were more and more likely to be true, in the sense of being better and better approximations to an ideal limit of scientific inquiry — for example, to some kind of theory of everything, if one exists. With the historical work of Thomas Kuhn in the 1960s, this view was in part replaced by an alternative that sees our ability to solve more and more problems and puzzles as the measure of our scientific success, regardless of whether or not there is an ideal limit of scientific inquiry to which we are all converging.”

Michela Massimi argues that the philosophy of science doesn’t have to be useful to scientists for it to be useful to humanity. (Video by Kieran Dodds for Quanta Magazine)

Continuing…

“Philosophy of science has contributed to these debates about the nature of scientific success and progress, and as a result we have a more nuanced and historically sensitive view today.

But also the reverse is true: Science has offered to philosophers of science new questions to ponder. Take, for example, scientific models. The exponential proliferation of different modeling practices across the biomedical sciences, engineering, earth sciences and physics over the last century has prompted philosophers to ask new questions about the role and nature of scientific models and how they relate to theories and experimental evidence. Similarly, the ubiquitous use of Bayesian statistics in scientific areas has enticed philosophers to go back to Bayes’ theorem and to unpack its problems and prospects. And advances in neuroscience have invited philosophers to find new accounts of how the human mind works.

Thus, progress accrues via a symbiotic relation through which philosophy and the sciences mutually develop, evolve and feed into each other.”

“All eyes are on Sidewalk Labs’ futuristic plans for a data-driven neighborhood in Toronto. But no one’s watching more closely than Bianca Wylie,” says Laura Bliss in Meet the Jane Jacobs of the Smart Cities Age.

“Wylie is among the most prominent voices of opposition to Sidewalk Labs’ vision for Toronto. And because this project is poised to be North America’s most ambitious test of how data-gathering technology might be fused into urban developments, she has also gained a following as a critic of ‘smart cities’ writ large. The 39-year-old Torontonian and mother of two has authored dozens of newspaper articles and blog posts, spoken with the Toronto city council and Canada’s House of Commons, and piped up at nearly every open event Sidewalk Labs has hosted over the past year. She is often described as a privacy advocate, since she talks a lot about how companies and governments use citizen data. But ‘civic tech reformer’ might be a more appropriate label, for the drum she is beating is bigger than privacy. It’s about the risks of governments ceding power to private companies,” Bliss writes.

“It’s also bigger than Toronto. What happens in this city is a test case that any tech company curious about building a neighborhood will be watching. And observers are seeing that Wylie’s camp is having an impact.”

“It’s about our neighborhoods, our cities, how we want them to work, what problems should be solved, and which options should be looked at… I reject the technocratic vision of problem solving,” Wylie told Bliss.

Anthony Townsend, the urban futurist and technology consultant, who has also worked with Sidewalk Labs, told Laura Bliss he thinks of Bianca Wylie as “the Jane Jacobs of the smart city.” (Photograph by Calvin Thomas)

“‘A city is not a business,’ [Wylie] said. Sidewalk Labs and Waterfront Toronto also took the unusual step of forming a joint entity called Sidewalk Toronto; it is this organization that has largely led public consultation on the development, rather than Waterfront Toronto or government itself. Wylie believes the result is a planning process that has had more to do with generating PR than garnering opinion, and argues that there has been little opportunity for citizens to learn about alternatives. It didn’t help that the terms of the agreement signed by Sidewalk Labs and Waterfront Toronto were not made public until after months of agitation by her and others. ‘I was skeptical a year ago that we could pull off a really democratically informed process,’ Wylie said. ‘I have found the process to be thoroughly anti-democratic,’” Bliss writes.

“Regular citizens and, frequently, elected officials lack clear language to talk about what it means to integrate technology into normal democratic governance, Wylie believes. Self-driving vehicles, pavement tiles that can sense traffic and absorb rainwater, micro-dwellings, and common spaces monitored by ‘smart’ energy systems — the sort of elements that Sidewalk Labs has mapped out for the land — sound great, but the problem as Wylie sees it is that they’ve been framed as the only option for developing public land. Sensors and software may well belong in the public realm, but Wylie thinks citizens should direct how to use them, not the private sector.”

Eight buildings designed by Frank Lloyd Wright have been nominated to the UNESCO world heritage list, the first modern architecture nomination from the United States. Spanning 50 years of the American architect’s career, the projects include instantly recognizable structures such as Fallingwater and New York’s Guggenheim museum, as well as some lesser-known, but equally significant, schemes. See each of the eight projects below, and read more about the nomination process on the website of the Frank Lloyd Wright Foundation.

Via designboom

Built in 1935, Fallingwater has been dubbed Wright’s “crowning achievement in organic architecture.” The project was completed for Edgar and Liliane Kaufmann, a prominent Pittsburgh couple who were known for their distinctive sense of style and taste. According to the FLW foundation, Wright was determined to build over the stream that punctuated the property. The architect remarked that rather than simply look out at it, he wanted the Kaufmanns to “live with the waterfall… as an integral part of [their] lives.”

In 1943, museum director Hilla von Rebay sought a “temple for the spirit” in which to house Solomon R. Guggenheim’s growing collection of modern art. She commissioned Frank Lloyd Wright, despite the fact that he had never received a significant commission in New York. Sixteen years later — following over 700 sketches, countless meetings, and the deaths of both Guggenheim and Wright — the architect’s self-described ‘archeseum’ was finally unveiled. The building has been widely credited with launching the age of museum architecture, with the design positing that a collection’s physical home could be as crucial a part of the museum experience as the work itself.

Taliesin (Photograph courtesy of Frank Lloyd Wright Foundation)

Taliesin is the home, studio, school and 800-acre agricultural estate of Frank Lloyd Wright. The architect built the structure on his favorite boyhood hill in the Wisconsin river valley, homesteaded by his Welsh grandparents, and named it Taliesin in honor of the Welsh bard whose name means ‘Shining Brow.’ “I meant to live if I could an unconventional life,” Wright explained. “I turned to this hill in the valley as my grandfather before me had turned to America — as a hope and haven.”

“The publication of the photograph of the earth as viewed from space, called Earthrise, or Spaceship Earth, has been described as a crucial moment in the relationship between this planet and the humans who live on it. Some people say that humans began to care for their environment when they saw a photograph of the earth from above. This is because the photograph showed people that space on earth is limited and the atmosphere is only a thin cover. Valentina Tereshkova, the first woman in space, described her experience of viewing the planet for the first time: ‘The beauty of the earth was overwhelming… I realized how small earth is, and how fragile, so that it can be destroyed very quickly.’

There are other people who believe that the Earthrise images had quite a different effect. While Tereshkova said that seeing earth from space made her respect it more, there are others who believe that the images are a symptom of human disrespect. That is, seeing the earth from space has given humans a false sense of independence from this planet.” — Daisy Hildyard, The Second Body (pages 30–31)

“And as I surveyed them from this point, all the other heavenly bodies appeared to be glorious and wonderful — now the stars were such as we have never seen from this earth; and such was the magnitude of them all as we have never dreamed; and the least of them all was that planet, which farthest from the heavenly sphere and nearest to our earth, was shining with borrowed light, but the spheres of the stars easily surpassed the earth in magnitude — already the earth itself appeared to me so small, that it grieved me to think of our empire, with which we cover but a point, as it were, of its surface.” — Cicero, Somnium Scipionis *

* Somnium Scipionis or The Dream of Scipio is the sixth book of Cicero’s De re publica, and describes a fictional dream vision of Scipio Aemilianus, set two years before he oversaw the destruction of Carthage in 146 BC. The Roman general is taken up into the sphere of distant stars to gaze back towards the Earth from the furthest reaches of the cosmos.


Mark Storm

Helping people in leadership positions flourish — with wisdom and clarity of thought