Random finds (2018, week 50) — On the ‘extended self,’ being alive to the world, and the obsession with frictionless tech
“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne
Random finds is a weekly curation of my tweets and, as such, a reflection of my fluid and boundless curiosity.
I am also building an ∞ of little pieces of wisdom, art, music, books and other things that have made me stop and think. #TheInfiniteDaily
This week: Where does your ‘self’ end?; attention-as-resource vs. attention-as-experience; is tech too easy to use?; Anand Giridharadas, whose job it is “to slay the bullshit”; why we cannot deal with crisis; reimagining the Universal Declaration of Human Rights; living in a Gerrit Rietveld design; warped concrete in Miami; and, finally, Japanese aesthetics through the eyes of Jun’ichirō Tanizaki.
The ‘extended self’
“Here’s a thought experiment: where do you end? Not your body, but you, the nebulous identity you think of as your ‘self.’ Does it end at the limits of your physical form? Or does it include your voice, which can now be heard as far as outer space; your personal and behavioral data, which is spread out across the impossibly broad plane known as digital space; or your active online personas, which probably encompass dozens of different social media networks, text message conversations, and email exchanges?” Kevin Lincoln asks in Where is the boundary between your phone and your mind?
This is a question with no clear answer, and, as the smartphone grows ever more essential to our daily lives, that border’s only getting blurrier.
According to Michael Patrick Lynch, a writer and professor of philosophy at the University of Connecticut, the notion of an ‘extended self’ was coined in 1998 by the philosophers Andy Clark and David Chalmers. They argued that the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks. This can include items such as a piece of paper and a pen, but also a shopping list, which becomes part of our memory: the mind spills out beyond the confines of our skull to encompass anything that helps it think.
“‘Now if that thought is right, it’s pretty clear that our minds have become even more radically extended than ever before,’ Lynch says. ‘The idea that our self is expanding through our phones is plausible, and that’s because our phones, and our digital devices generally — our smartwatches, our iPads — all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they’re not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind.’
This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: we not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn’t suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is,” Lincoln writes.
“The debate over what it means for us to be so connected all the time is still in its infancy, and there are wildly differing perspectives on what it could mean for us as a species. One result of these collapsing borders, however, is less ambiguous, and it’s becoming a common subject of activism and advocacy among the technologically minded. While many of us think of the smartphone as a portal for accessing the outside world, the reciprocity of the device, as well as the larger pattern of our behavior online, means the portal goes the other way as well: it’s a means for others to access us.
Most obviously, this can take the form of the omnipresent harassment that many people experience online, as well as more specific tactics, like revenge porn and the leaking of nude pictures; doxxing, or the revealing of someone’s personal details; and swatting, the practice of calling a SWAT team to an individual’s home under false pretenses.
Less clear to most people, however, is the extent to which the companies that make the technology, apps, and browsers that we use are not just tracking but shaping our behavior.”
“Tim Hwang, [a writer and researcher who used to work as the global public policy lead for artificial intelligence and machine learning at Google], has thought extensively about how these devices foster the functioning of the collective in addition to the individual. About a decade ago, he explains, the rhetoric around the Internet was that the crowd would prevent the spread of misinformation, filtering it out like a great big hive mind; it would also help to prevent the spread of things like hate speech. Obviously, this has not been the case,” Lincoln writes.
“[Hwang] says that the pessimism resulting from this realization has led us to give power to the platforms so that they can regulate themselves — allowing Facebook to tell us what’s true and what’s not — but that there is another approach to the way we actually exist in these spaces.
‘Are there tools, are there designs we can put in place to allow communities to do a better job at self-governance?’ Hwang asks. ‘Do we want to give more moderation to particular users? Does the platform want to grant users more power to control issues of harassment and hate speech, knowing that, in some cases, it might be over-applied?’”
Moira Weigel, a writer and junior fellow at Harvard University, “sees two potential opportunities for limiting the amount of influence the big five has on consumers. The first would be legal; she cites a growing body of work exploring possible antitrust suits designed to break up these companies. […] The other option is for workers, many having entered tech with idealistic rather than financial motives, to help regulate and restrict their own employers. Many have already begun to express regret over the effectiveness of their innovations, a phenomenon perhaps best exemplified by the Center for Humane Technology, led by the former Google design ethicist Tristan Harris,” Lincoln writes.
“But Weigel views these efforts with suspicion due to the idea that they often follow the same playbook as the paternalistic, top-down design infrastructure that created these problems in the first place.” She believes “a more fitting example of positive change took place when Google employees successfully campaigned for the company to stop its work with the Pentagon on Project Maven, a program that improved the effectiveness of military drones.”
“Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.” — Ben Tarnoff and Moira Weigel in Why Silicon Valley can’t fix itself
“Lynch […] also believes that one of our best hopes comes from the bottom up, in the form of actually educating people about the products that they spend so much time using. We should know and be aware of how these companies work, how they track our behavior, and how they make recommendations to us based on our behavior and that of others.”
Essentially, Lynch argues, “we need to understand the fundamental difference between our behavior IRL and in the digital sphere — a difference that, despite the erosion of boundaries, still stands.” He believes that “we should especially recognize this when it seems least clear: in those situations online that most closely seek to emulate the structures and dynamics of real life. Like, for example, your relationships. It isn’t enough that the apps in our phone flatten all of the different categories of relationships we have into one broad group: friends, followers, connections. They go one step further than that.
‘You’re being told who you are all the time by Facebook and social media because which posts are coming up from your friends are due to an algorithm that is trying to get you to pay more attention to Facebook,’ [he] says. ‘That’s affecting our identity, because it affects who you think your friends are, because they’re the ones who are popping up higher on your feed.’”
Being alive to the world
“‘We are drowning in information, while starving for wisdom.’ Those were the words of the American biologist E. O. Wilson at the turn of the century. Fast-forward to the smartphone era, and it’s easy to believe that our mental lives are now more fragmentary and scattered than ever. The ‘attention economy’ is a phrase that’s often used to make sense of what’s going on: it puts our attention as a limited resource at the centre of the informational ecosystem, with our various alerts and notifications locked in a constant battle to capture it,” Dan Nixon writes in Attention is not a resource but a way of being alive to the world.
According to him, this narrative is helpful in a world of information overload in which our devices and apps are intentionally designed to get us hooked. It assumes, however, a certain kind of attention. “An economy, after all, deals with how to allocate resources efficiently in the service of specific objectives (such as maximising profit). Talk of the ‘attention economy’ relies on the notion of attention-as-resource: our attention is to be applied in the service of some goal, which social media and other ills are bent on diverting us from. Our attention, when we fail to put it to use for our own objectives, becomes a tool to be used and exploited by others.
However, conceiving of attention as a resource misses the fact that attention is not just useful. It’s more fundamental than that: attention is what joins us with the outside world. ‘Instrumentally’ attending is important, sure. But we also have the capacity to attend in a more ‘exploratory’ way: to be truly open to whatever we find before us, without any particular agenda.”
Treating attention as a resource […] tells us only half of the overall story. To be specific, the left half. “According to the British psychiatrist and philosopher Iain McGilchrist, the brain’s left and right hemispheres ‘deliver’ the world to us in two fundamentally different ways. An instrumental mode of attention, McGilchrist contends, is the mainstay of the brain’s left hemisphere, which tends to divide up whatever it’s presented with into component parts: to analyse and categorise things so that it can utilise them towards some ends.
By contrast, the brain’s right hemisphere naturally adopts an exploratory mode of attending: a more embodied awareness, one that is open to whatever makes itself present before us, in all its fullness. This mode of attending comes into play, for instance, when we pay attention to other people, to the natural world and to works of art. None of those fare too well if we attend to them as a means to an end. And it is this mode of paying attention, McGilchrist argues, that offers us the broadest possible experience of the world,” Nixon writes.
“So, as well as attention-as-resource, it’s important that we retain a clear sense of attention-as-experience. I believe that’s what the American philosopher William James had in mind in 1890 when he wrote that ‘what we attend to is reality’: the simple but profound idea that what we pay attention to, and how we pay attention, shapes our reality, moment to moment, day to day, and so on. It is also the exploratory mode of attention that can connect us to our deepest sense of purpose.”
“If I am right, that the story of the Western world is one of increasing left hemisphere domination, we would not expect insight to be the key note. Instead, we would expect a sort of insouciant optimism, the sleepwalker whistling a happy tune as he ambles towards the abyss.” — Iain McGilchrist, The Master and his Emissary
According to Nixon, “the deluge of stimuli competing to grab our attention almost certainly inclines us towards instant gratification. This crowds out space for the exploratory mode of attention. When I get to the bus stop now, I automatically reach for my phone, rather than stare into space; my fellow commuters (when I do raise my head) seem to be doing the same thing. Second, on top of this, an attention-economy narrative, for all its usefulness, reinforces a conception of attention-as-a-resource, rather than attention-as-experience.
At one extreme, we can imagine a scenario in which we gradually lose touch with attention-as-experience altogether. Attention becomes solely a thing to utilise, a means of getting things done, something from which value can be extracted. […]
While such an outcome is extreme, there are hints that modern psyches are moving in this direction. One study found, for instance, that most men chose to receive an electric shock rather than be left to their own devices: when, in other words, they had no entertainment on which to fix their attention. Or take the emergence of the ‘quantified self’ movement, in which ‘life loggers’ use smart devices to track thousands of daily movements and behaviours in order to (supposedly) amass self-knowledge. If one adopts such a mindset, data is the only valid input. One’s direct, felt experience of the world simply does not compute.”
No society has reached this dystopia, at least, not yet. “But faced with a stream of claims on our attention, and narratives that invite us to treat it as a resource to mine, we need to work to keep our instrumental and exploratory modes of attention in balance. How might we do this?
To begin with, when we talk about attention, we need to defend framing it as an experience, not a mere means or implement to some other end.
Next, we can reflect on how we spend our time. Besides expert advice on ‘digital hygiene’ (turning off notifications, keeping our phones out of the bedroom, and so on), we can be proactive in making a good amount of time each week for activities that nourish us in an open, receptive, undirected way: taking a stroll, visiting a gallery, listening to a record.”
But most effective of all, Nixon believes, “is simply to return to an embodied, exploratory mode of attention, just for a moment or two, as often as we can throughout the day. Watching our breath, say, with no agenda. In an age of fast-paced technologies and instant hits, that might sound a little … underwhelming. But there can be beauty and wonder in the unadorned act of ‘experiencing’. This might be what [the French philosopher Simone] Weil had in mind when she said that the correct application of attention can lead us to ‘the gateway to eternity … The infinite in an instant.’”
The obsession with frictionless tech
In The New York Times, Kevin Roose, who in his column The Shift examines the intersection of technology, business, and culture, wonders whether tech is too easy to use.
“Of all the buzzwords in tech, perhaps none has been deployed with as much philosophical conviction as ‘frictionless.’ Over the past decade or so, eliminating ‘friction’ — the name given to any quality that makes a product more difficult or time-consuming to use — has become an obsession of the tech industry, accepted as gospel by many of the world’s largest companies,” Roose writes.
“There is nothing wrong with making things easier, in most cases, and the history of technology is filled with examples of amazing advances brought about by reducing complexity. Not even the most hardened Luddite, I suspect, wants to go back to the days of horse-drawn carriages and hand-crank radios.
But it’s worth asking: Could some of our biggest technological challenges be solved by making things slightly less simple?
After all, the frictionless design of social media platforms […], which makes it trivially easy to broadcast messages to huge audiences, has been the source of innumerable problems.”
Roose, who has spoken to more than a dozen designers, product managers and tech executives about the principles of frictionless design, sees signs that some tech companies are beginning to appreciate the benefits of friction.
“WhatsApp limited message forwarding in India this year after reports that viral threads containing misinformation had led to riots. And YouTube tightened its rules governing how channels can earn ad revenue, to make it harder for spammers and extremists to abuse the platform. More of these kinds of changes would be welcome, even if they led to a short-term hit to engagement. And there are plenty of possibilities.”
For example, “[what] if Facebook made it harder for viral misinformation to spread by adding algorithmic ‘speed bumps’ that would delay the spread of a controversial post above a certain threshold until fact checkers evaluated it?
[…]
This approach might seem overly paternalistic. But the alternative — a tech infrastructure optimized to ask as little of us as possible, with few circuit breakers to limit the impact of abuse and addiction — is frightening. After all, ‘friction’ is just another word for ‘effort,’ and it’s what makes us capable of critical thought and self-reflection,” Roose argues.
“I don’t want to romanticize the slow, often frustrating processes of the past. There is nothing inherently good about complexity, and the tech industry could still do a lot of good by reducing friction in systems like health care, education and financial services.
But there are both philosophical and practical reasons to ask whether certain technologies should be a little less optimized for convenience. We wouldn’t trust a doctor who made speed a priority over safety. Why would we trust an app that does?”
And also this …
“This has been a time over the last 40 years of extraordinary growth and prosperity in many places, but also of substantial, gaping inequality in many of those same places. What that means in practice is that the future has not stopped raining on Britain or the US and other areas in the last 30–40 years. There are some places where the future has stopped raining; look at Spain, perhaps.
But, in other places, new things have happened, new companies have formed, new industries have birthed. However, a small fraction of people have derived most of those benefits of the future and harvested most of the rainwater of progress.”
“A lot of those people happen to be less tethered to a place; that’s a signature of this prosperous generation. Partly, it’s because they are of the meritocratic elite. If you thought of Britain 300 years ago, the people with the most means and power were people tied to particular patches of earth.
If you think about it today, the people with the most power on earth are the people with the least connection to a specific patch of earth. They are the people who have moved for an opportunity, who are often immigrants or children of immigrants. They live in London, but they run a company in Burkina Faso which is funded by Chinese investors whose fund was bought out by Koreans; that is the new world. Moreover, if you have the education, the luck and the means to take advantage of this new liquid world, as many of these rootless cosmopolitans do, this has been a great era for you.
If you’re someone who can hire coders in Sweden and then sell the app to customers in Colombia, this has been a great age for you, full of opportunity. But that is the kind of boon available to the very few, because it requires education, skill and capital. People who are on the wrong side of that divide, who have limited leverage and are stuck in one place, have had a raw deal. If your skill is cutting hair, the new liquid world doesn’t increase your opportunities at all.”
Anand Giridharadas in an interview with 52 Insights. More Giridharadas in When the Market Is Our Only Language, a conversation with On Being’s Krista Tippett.
In Our age lacks gravitas. That’s why we cannot deal with crisis, Ian Jack writes, “If we want to see the world differently and, just possibly, avert the collapse, we need different kinds of information. What has mattered until now is money. The indices that appear without fail — fixed on the printed page and changing on the screen — show the fluctuations of the FTSE 100, the Nikkei, the Dow Jones, Nasdaq and the currency exchange rates. Imagine if instead the same little boxes showed the average global temperature, the extent of Arctic sea ice, the rise in sea level and the parts per million of CO2 in the atmosphere. Day by day, the changes would be tiny — consoling in their minuteness. Comparison with the same set of figures for the same day 20 years before would be needed to show their ominous development.
There they would be: sober, factual, grave and rarely consulted; but always warning against the ultimate crisis, like an old-fashioned sermon on hell.”
As the Universal Declaration of Human Rights turns 70, The Guardian asked several leading authors to reimagine it for today. One of them is James Bridle, who made a case for ‘the right to understand.’
“We live in strange times, and we’re actively making them stranger. Volkswagen has been forced to pay out more than €28bn for designing cars that could cheat emissions tests. Political operatives use the unstated fashion choices of voters to microselect for campaign ads. YouTube’s recommendation algorithms are implicated in the radicalisation of flat earthers and ultranationalists. Artificially intelligent machines beat us at games with novel strategies we do not, and cannot, understand. The future is only going to be more confusing. Six years before the Universal Declaration of Human Rights was proclaimed, the science fiction writer Isaac Asimov proposed his famous Three Laws of Robotics: that a robot must not harm or, through inaction, allow harm to come to any human being; that it must obey orders given to it by human beings where they do not contradict the first law; and that it must protect its own existence to the extent that doing so would not conflict with the first two laws. They’re good laws, but the robots that Asimov imagined were discrete beings, aloof and accountable — and very different from the entangled, ever-present, overwhelmingly sophisticated yet often obscure technologies we actually find ourselves living among today.
Instead of visibly dangerous robots, we have hidden programs inside car engines that poison the atmosphere, prejudiced automation systems for sentencing and job selection, covert data-gathering regimes that sell us out to corporations and political enemies, and proprietary attention-seeking algorithms which distract our attention, and amplify division and conspiracy. If it were robots that were doing all that, a contemporary update to Asimov’s laws might require them to explain themselves to humans, so that we might not be harmed by the fact that most of the time, we have no idea what they’re doing. But really, the onus is on us, not on them.”
“What then would this requirement look like if framed as a right? The Universal Declaration of Human Rights was primarily concerned with interpersonal relationships and mutual understanding; it speaks of ‘a common understanding of these rights and freedoms’ as well as the promotion of ‘understanding, tolerance and friendship among all nations, racial or religious groups.’ But when so many of those relationships are mediated by inscrutable technologies, the understanding required is that of systems, not of one another.
Today we hear a lot about the benefits new technologies such as artificial intelligence and mass automation will bring to our lives, but even those tasked with constructing them have little understanding of the effects they will have on our societies. Lack of understanding — the feeling of being lost and powerless in the world — leads to fear, apathy and rage. It’s hardly surprising then that in these times of technological acceleration and complexity, those are the dominant emotions felt across the globe.
The right to understand, then, would be a useful addition to what is expected for us today. A demand both for better education — not just in technology, but in all forms of critical thinking — allied with the requirement that those deploying complex and life-affecting technologies must consult with those subject to them, engineer them for transparency, and actively work to make them comprehensible and accountable. Only through mass understanding, and thus mass engagement and mass participation, might we hope to get a firmer grip on an increasingly strange and inscrutable world.”
Most of Gerrit Rietveld’s residential houses have been refurbished over the years. Not all, though. The Dutch photographer Arjan Bronkhorst has found twenty that remain unchanged, and photographed them with their loving inhabitants.
All photographs by Arjan Bronkhorst.
Via de Volkskrant
International firm Arquitectonica extends its long-standing commitment to the University of Miami School of Architecture with the design of the ‘Thomas P. Murphy Design Studio Building.’ The 20,000-square-foot, LEED-certified studio building serves as a laboratory and collaborative space for the next generation of architects.
“The thin-shell concrete roof structure is a visually dramatic moment. The slab warps slightly, seemingly melting in the Miami heat, to form a gentle arc that adds complexity to the silhouette. Besides affording effective shading over the glazed east and west fronts, the bowed roof also sets up the design’s primary formal swerve. The curve of the roof interacts with the curve at the entrance to demonstrate the plasticity of concrete. These two gestures transform a simple box into dynamic architectural expression, incorporating core modernist principles into a progressive design that will serve as an influence for the next generation of architects,” designboom writes.
“One of the oldest and most deeply ingrained Japanese attitudes to literary style holds that too obvious a structure is contrivance, that too orderly an exposition falsifies the ruminations of the heart, that the truest representation of the searching mind is to ‘just follow the brush.’ Indeed it would not be far wrong to say that the narrative technique we call ‘stream of consciousness’ has an ancient history in Japanese letters. It is not that Japanese writers have been ignorant of the powers of concision and articulation. Rather they have felt that certain subjects — the vicissitudes of the emotions, the fleeting perceptions of the mind — are best couched in a style that conveys something of the uncertainty of the mental process and not just its neatly packaged conclusions.” — Thomas J. Harper, a former senior lecturer in Japanese literature at the Australian National University, Canberra, in his Afterword to In Praise of Shadows (1933), an essay on Japanese aesthetics, written by the Japanese author and novelist Jun’ichirō Tanizaki