Random finds (2019, week 8) — On technology in deep time, changing the narrative of humanity, and the good-enough life

Mark Storm
19 min read · Feb 23, 2019


The TWA Flight Center at New York’s John F. Kennedy Airport is among the best-known designs by Eero Saarinen. After being shut for 16 years, the building has been restored to its Modernist glory and will re-open in May as the TWA Hotel.

“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne

Random finds is a weekly curation of my tweets and, as such, a reflection of my fluid and boundless curiosity.

If you want to know more about my work and how I help leaders and their teams find their way through complexity, ambiguity, paradox & doubt, go to Leadership Confidant — A new and more focused way of thriving on my ‘multitudes’ or visit my new ‘uncluttered’ website.

This week: How technology evolves alongside us; why we should be skeptical of in-progress narratives occurring over long timespans; the desire for greatness as an obstacle to our own potential; how the pursuit of efficiency could be suppressing serendipity; what Ray Kurzweil gets wrong about the singularity; wishy-washy job titles; the maddening and brilliant Karl Lagerfeld; and, finally, being distracted from distraction by distraction.

Technology in deep time

If we look at technology over very long timescales, our definition of what it is transforms, and as Tom Chatfield argues in Technology in deep time: How it evolves alongside us, it also displays a form of evolution entwined with our own.

“We often think about technology as the latest innovation: the smartphone, the 3D printer, the VR headset. It’s only by taking a longer view, however, that we can understand its entwining with our species’ existence. For technology is more than computers, cars or gadgets. It is the entirety of human-made artefacts that extend and amplify our grasp of the world. As the philosopher Hannah Arendt put it in 1958, we have in recent centuries developed a science ‘that considers the nature of the Earth from the viewpoint of the Universe.’ Yet in doing so we have paradoxically trained ourselves to ignore the most important lesson of all: our co-evolution with technology.”

From “around two-and-a-half million years ago, our distant ancestors began to use found objects in a deliberate manner: hard or sharp stones, for breaking open shells or protection; sticks for reaching distant food; plants or animal parts for shelter or camouflage. In this, and in their initial crafting and improvement of these objects, our ancestors weren’t so different from several other groups of animals,” Chatfield writes.

“Only humans, however, have turned this craft into something unprecedented: a cumulative process of experiment and recombination that over mere hundreds of thousands of years harnessed phenomena such as fire to cook food, and ultimately smelt metal; gravity into systems of levers, ramps, pulleys, wheels and counterweights; and mental processes into symbolic art, numeracy, and literacy.

It is this, above all, that marks humanity’s departure from the rest of life on Earth. Alone among species (at least until the crows have put in a million years more effort) humans can consciously improve and combine their creations over time — and in turn extend the boundaries of consciousness. It is through this process of recursive iteration that tools became technologies; and technology a world-altering force.”

According to the economist W. Brian Arthur, it is not only pointless but also actively misleading to treat the history of technology as a greatest-hits list of influential inventions. “This is not because such inventions weren’t hugely important, but because it obscures the fact that all new technologies are at root a combination of older technologies — and that this in turn traces an evolutionary process resembling life itself.”

“In a sense this is self-evident,” Chatfield writes. “It is, after all, only possible to build something out of components that exist — and these components must, in turn, have been assembled from other pre-existing components, and those from others that came before, and so on. Equally self-evidently, this accumulative combination is not by itself sufficient to explain technology’s evolution. Another force is required to drive it, and it’s similar to the one driving biological evolution itself: fitness as manifested through successful reproduction.

In the case of biological evolution, this process is based upon the transmission of genetic code from parents to offspring. […] In the case of technology, the business of survival and reproduction is symbiotic in a more fundamental way. This is because technology’s transmission has two distinct requirements: the ongoing existence of a species capable of manufacturing it, and networks of supply and maintenance capable of serving technology’s own evolving needs.”

Our “fundamental needs are obvious enough — survival and reproduction, based upon adequate food, water, shelter and security — but in what sense can technology be said to have needs of its own? The answer lies all around us, in the immense interlinked ecology of the human-made world. Our creations require power, fuel, raw materials; globe-spanning networks of information, trade and transportation; the creation and maintenance of accrued layers of components that, precisely because they cannot reproduce or repair themselves, bring with them a list of needs far outstripping anything natural.

[…]

In its separateness from and yet reliance upon biological life, technology is uniquely powerful and uniquely needy. It embodies an ever-expanding network of dependencies, and in this sense it invents many more needs than it serves — with both its requirements and its capacities growing at an exponential rate compared to our own.”

“Time in the human sense doesn’t mean much when it comes to technology, because — unlike something living — a tool doesn’t struggle to survive or to pass on its pattern. Rather, it’s the increments of design, manufacture, refinement and combination that mark development. In this sense, no time whatsoever passes for a technology if there is no development. If a human population uses thousands upon thousands of identical farming tools in an identical way for thousands of years, that technology is frozen in stasis. To use an ancient tool is to enact a kind of time travel. From this perspective, most of our planet’s history saw no technological time passing whatsoever. Four billion years were less than the blink of an eye — while the last few centuries loom larger than all the rest of history.”

According to Chatfield, “There’s a simple mathematical way of thinking about this. When it comes to combining things, increasing the number of components you’re working with vastly increases the number of potential combinations. [This means that], thanks to the fertile recombination of ever-more technological possibilities, time and evolution are steadily speeding up from the perspective of our creations. And the rate at which they’re speeding up is itself increasing. This has perhaps been most familiarly stated in the form of Moore’s law.”
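To make that combinatorics concrete, here is a minimal sketch in Python (my own illustration, not anything from Chatfield’s essay) of how fast the space of possible combinations grows as components accumulate:

```python
from math import comb

def possible_combinations(n: int) -> int:
    """Ways to combine n distinct components into groups of two
    or more; equal to 2**n - n - 1, so each added component
    roughly doubles the space of potential 'inventions'."""
    return sum(comb(n, k) for k in range(2, n + 1))

for n in (5, 10, 20, 40):
    print(n, possible_combinations(n))
# 5 -> 26, 10 -> 1013, 20 -> 1048555, 40 -> roughly 1.1 trillion
```

Twenty components already permit over a million distinct combinations; forty permit over a trillion. That exponential opening-up of possibility space is the intuition behind the accelerating recombination Chatfield describes.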

“This is the point where what Arendt termed ‘the onslaught of speed’ starts to do strange things to time. Among the implications of Moore’s law, some thinkers have reasoned, is that the next two years are likely to see as much progress in computing terms as the entire history of technology from the beginning of time to the present — something that’s also likely to be true for the next two years, and the next.

And if this kind of analysis feels overfamiliar — or overstated — we can recapture its shock by putting things slightly differently. From the perspective of technology, humans have been getting exponentially slower every year for the last half-century. In the realm of software, there is more and more time available for adaptation and improvement — while, outside it, every human second takes longer and longer to creep past. We — evolved creatures of flesh and blood — are out of joint with our times in the most fundamental of senses.

All of which takes us towards one of the defining myths of our digital age, that of the Singularity: a technological point of no return beyond which, it’s argued, the evolution of technology will reach a tipping point where self-design and self-improvement take over, cutting humanity permanently out of the loop.”
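The arithmetic behind that “as much progress as the entire history of technology” claim is simply the geometric series: under a steady doubling, each new period contributes slightly more than everything that came before it combined. A quick check (again my own sketch, assuming a clean two-year doubling):

```python
# Under a steady doubling, total progress so far is
# 1 + 2 + 4 + ... + 2**(n-1) = 2**n - 1, which the single
# next period (2**n) exceeds by exactly one unit.
for n in range(1, 6):
    so_far = sum(2**k for k in range(n))
    next_period = 2**n
    print(n, so_far, next_period)  # next_period == so_far + 1
```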

Chatfield wonders if any of this is likely.

“My belief is that, like most myths, the least interesting thing we can do with this story is take it literally. Instead, its force lies in the expression of a truth we are already living: the fact that clock and calendar time have less and less relevance to the events that matter in our world. The present influence of our technology upon the planet is almost obscenely consequential — and what’s potentially tragic is the scale of the mismatch between the impact of our creations and our capacity to control them.”

This brings Chatfield to the biggest question of all: can we deflect the path of technology’s needs towards something like our own long-term interest, not to mention that of most other life on this planet?

“Not […] if we surrender to the seduction of thinking ourselves impotent or inconsequential — or technology’s future as a single, predetermined course,” Chatfield argues. “Like our creations, we are minute in individual terms — yet of vast consequence collectively. It took the Earth 4.7 billion years to produce a human population of one billion; another 120 years to produce two billion; then less than a century to reach the seven-and-a-half billion currently alive, contemplating their future with all the tools of reason, wishfulness, knowledge and delusion that evolution and innovation have bequeathed.

This is what existence looks like at the sharp end of 4.7 billion years. We have less time than ever before — and more that we can accomplish.”

The three illustrations above come from Très Riches Heures du Duc de Berry, one of the best surviving examples of French Gothic manuscript illumination. The book is a collection of prayers to be said at the canonical hours and was created for the Duke of Berry by the Limbourg brothers between c. 1412 and 1416. When the three painters and their sponsor died in 1416, possibly victims of plague, the manuscript was left unfinished. It was further embellished in the 1440s by an anonymous painter, who, many art historians believe, was Barthélemy d’Eyck.

Changing the narrative of humanity

“Some of the most unexpected cultural shifts seem inevitable with the benefit of hindsight, and noisy random walks can reveal smooth sweeping trends over long enough durations. In order to construct a view of history unencumbered by the noise of the present, you have to step back from the subject of inquiry. Biographies of the living cannot contemplate the influence of their legacies, and you can’t judge the value of an action until you have observed its effect,” Georgia Frances King and Andrew Fursman write in We’re thinking about the fourth industrial revolution all wrong.

“For this reason, we should be skeptical of in-progress narratives occurring over long timespans. One such narrative is the so-called fourth industrial revolution.”

According to the popular narrative, industrialised human production went through three fundamental shifts, and is presently entering the fourth major era, which “builds on the most recent ‘digital revolution’ and is marked by emerging technologies. Combined with the communications infrastructure necessary to connect all of humanity to these breakthroughs, the result is the potential for a truly global society.”

Although this narrative undeniably fits the facts of history, it “focuses on the minutiae of exciting new technologies — precisely because they are exciting and new — without observing the meaning of the larger trends behind these shifts.” But if we organise eras based not on the what of technology, but on the who of production, a different story reveals itself, one of “transition focused on the dehumanization of production. This view could help us understand and react to the trends of today using a familiar narrative arc.

While a different story doesn’t avoid the problem of being constructed from within the history it tells, a less human-centered view of production can bring some different truths into sharper focus and allow us to explore other, more radical possible futures. So, rather than tell a story about human production, let’s look at the history of human consumption.”

“The first industrial revolution began in 18th-century Europe. Workers during this time witnessed a dramatic trend toward urbanization, accompanied by a rise in the iron and textile industries, all driven by the invention of the steam engine.” — Coalbrookdale by Night (1801), by Philip Jacques de Loutherbourg (oil on canvas, 68 x 107 cm; Science Museum, London)

In this alternative narrative, “The fourth era is the period of industrialized intelligence, rising with the mental-energy-saving inventions of the mid-20th century and continuing through today. Much as the industrial revolution dehumanized biological strength with machines, the displacement of biological intelligence with computers represents the dehumanization of intellectual labour. Projecting current techniques a few years forward suggests that autonomous systems will eventually be capable of outcompeting humans in every area where intelligence is the key component of production,” King and Fursman argue.

“Since many tasks are a combination of mental and physical labor, the development of this fourth intelligence revolution will also accelerate the physical impact of industrialized labour. This means that the arrival of automated intelligence is likely to be even more disruptive to the existing patterns of society than predicted. Automated intelligence combined with automated physical labor is inevitably more cost-effective than human production, and will leave no segment of production untouched for humans to monopolize.”

But King and Fursman wonder what will happen when AI makes automated mental labor more efficient. Where do we move once our brains are no longer competitive with machines? Where do we shift our efforts?

“When writing any narrative of humanity, it’s impossible to avoid a certain degree of myopia; after all, we’re living within it. Some of the most unexpected cultural shifts seem inevitable with the benefit of hindsight, and noisy random walks can reveal smooth sweeping trends over long enough durations.” — Dudley (1832), by J.M.W. Turner (watercolour and bodycolour on paper, 28.8 x 43.0 cm; Board of Trustees of the National Museums and Galleries on Merseyside, Lady Lever Art Gallery, Liverpool)

King and Fursman believe that “this latest shift in production might require an equally significant shift in the structure of society. We may need to radically rethink our assumptions about how to live a meaningful life. Is dedicating your life to ‘making a living’ really the ultimate good of human existence?”

They also argue that we need to change the narrative around the fourth industrial revolution and zoom out to observe larger trends. “As humans, we make sense of the world by telling stories about ourselves and our societies. By examining the world through different stories, we can more clearly understand the present and prepare for a very different future.

Perhaps we are not in a fourth industrial revolution that will simply progress the roles of humans in production. Perhaps we are in the final stages of a grand process to create and automate all the tasks necessary to sustain a stable society. Perhaps jobs are not the source of human dignity. Perhaps escape from the burden of labour is not unemployment, but freedom. Perhaps new economic models, like [universal basic income] variants, will arise to replace labour income. Perhaps we are not moving into a new era of human industrialization — perhaps we’re on the cusp of the dehumanization of industrialization.

Perhaps it is time to tell ourselves a very different story to help us prepare for a very different future.”

The good-enough life

According to Avram Alpert, the desire for greatness can be an obstacle to our own potential.

Ideals of greatness not only cut across the American political spectrum, they also unite the diverse philosophical camps of Western ethics. “Aristotle called for practicing the highest virtue. Kant believed in an ethical rule so stringent not even he thought it was achievable by mortals. Bentham’s utilitarianism is about maximizing happiness. Marx sought the great world for all. Modern-day libertarians will stop at nothing to increase personal freedom and profit. These differences surely matter, but while the definition of greatness changes, greatness itself is sought by each in his own way,” Alpert writes in The Good-Enough Life.

“Swimming against the tide of greatness is a counter-history of ethics embodied by schools of thought as diverse as Buddhism, Romanticism and psychoanalysis. It is by borrowing from D.W. Winnicott, an important figure in the development of psychoanalysis, that we get perhaps the best name for this other ethics: ‘the good-enough life.’”

“I want to be famous to shuffling men / who smile while crossing streets, / sticky children in grocery lines, / famous as the one who smiled back.” (From the poem Famous by Naomi Shihab Nye) — Jean Shin’s mosaic for the Second Avenue Subway in New York City was based on archival photographs of everyday riders and pedestrians. (Photograph by George Etheredge for The New York Times)

“From Buddhism and Romanticism we can get a fuller picture of what such a good enough world could be like. Buddhism offers a criticism of the caste system and the idea that some people have to live lives of servitude in order to ensure the greatness of others. It posits instead the idea of the ‘middle path,’ a life that is neither excessively materialistic nor too ascetic. […]

The Romantic poets and philosophers extend this vision of good-enoughness to embrace what they would call ‘the ordinary’ or ‘the everyday.’ This does not refer to the everyday annoyances or anxieties we experience, but the fact that within what is most ordinary, most basic and most familiar, we might find a delight unimaginable if we find meaning only in greatness. The antiheroic sentiment is well expressed by George Eliot at the end of her novel Middlemarch: ‘that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.’ And its legacy is attested to in the poem Famous by Naomi Shihab Nye: ‘I want to be famous to shuffling men / who smile while crossing streets, / sticky children in grocery lines, / famous as the one who smiled back.’

Being good enough is not easy. It takes a tremendous amount of work to smile purely while waiting, exhausted, in a grocery line. Or to be good enough to loved ones to both support them and allow them to experience frustration. And it remains to be seen if we as a society can establish a good-enough relation to one another, where individuals and nations do not strive for their unique greatness, but rather work together to create the conditions of decency necessary for all.

Achieving this will also require us to develop a good enough relation to our natural world, one in which we recognize both the abundance and the limitations of the planet we share with infinite other life forms, each seeking its own path toward good-enoughness. If we do manage any of these things, it will not be because we have achieved greatness, but because we have recognized that none of them are achievable until greatness itself is forgotten.”

And also this …

First of all, two interesting research papers, one on serendipity and one on the singularity.

Serendipity: Towards a taxonomy and a theory, by Ohid Yaqub. Using the archives of Robert K. Merton, who introduced the term ‘serendipity’ to the social sciences, as a starting point for gathering literature and examples, Yaqub identifies four types of serendipity (Walpolian, Mertonian, Bushian, Stephanian) together with four mechanisms of serendipity (Theory-led, Observer-led, Error-borne, Network-emergent), and discusses the implications of the different types and mechanisms for theory and policy.

“The possibility of serendipity occurring through a variety of mechanisms should raise concerns among those seeking greater efficiency in research, and those framing innovation solely in terms of reducing uncertainty. The pursuit of efficiency could be suppressing the error-borne serendipity mechanism, and driving out diversity in methodological approaches needed for Mertonian serendipity to come about. Greater pressure for efficiency might also make it harder to recognise and appreciate that it is possible for research to unexpectedly solve a later problem, where research may initially appear to have little utility and be deemed inefficient.” — Ohid Yaqub in Serendipity: Towards a taxonomy and a theory

In a revised edition of his working paper, entitled Is Singularity a Scientific Concept, or the Construct of Metaphysical Historicism? Implications for Big History, Graeme Donald Snooks, who discovered the so-called ‘Snooks-Panov vertical,’ the algorithm that gave rise to an early expression of the Big History singularity, challenges the methodology, measurement, and interpretation employed by various writers in this field, such as Ray Kurzweil.

“At the end of decades of thinking about the dynamics of life and human society, I return to an issue that I raised at the very beginning of this odyssey — the exponential nature of its progression. In the process I hope to have clarified the nature of the logological constant and its essential role in life. I will conclude by saying that life has an observable pattern and an existential meaning. The rise and fall of species, dynasties, societies, empires, and civilizations; the great genetic and technological revolutions; the great diasporas, civil wars, world wars, and extinctions — all are a part of an intelligible whole. These patterns, outlined above, are the outcome of individual organisms struggling to gain access to nature’s resources and energy sources, through the pursuit of a four-fold set of dynamic strategies, in order to survive and prosper. This is the dynamic core of the engine of life. It possesses a pattern and meaning that can be understood from within life itself (rather than by seeking external, indeed metaphysical, explanations as society has long attempted) through the realist general dynamic theory presented here. My dynamic-strategy theory finally makes sense of simple algorithms about the acceleration of life. The future of life can only be envisaged through a nuanced realist dynamic theory that embraces all the complexities of reality and eschews the flawed metaphysical notion of the singularity.” — Graeme Snooks in Is Singularity a Scientific Concept, or the Construct of Metaphysical Historicism? Implications for Big History

“Where preposterous titles in the past concentrated on over-defining job roles in excruciating detail, today’s vogue is for exactly the opposite. The more ambiguous, meaningless and mystical, the better,” writes Izabella Kaminska in Companies need fewer mystics and more critical thinkers.

“Some might suggest the trend reflects an increasingly edgy and innovative corporate sector that welcomes young people and freethinkers, especially in the executive class. But I would argue it hints at something more worrying: a lack of corporate understanding about what a company’s purpose in society and the marketplace really is these days.”

“What is going on? In seeking an answer, a good place to start is the arrival of AOL’s self-styled “digital prophet”, Shingy (né David Shing), in 2011. On the surface, the role entailed looking wacky and mysterious on the conference circuit. Shingy is now famous for being ‘Shingy.’ Unfortunately, no one under the age of 40 is any wiser about AOL,” Kaminska writes.

“We live in a world in which investors demand hyper-growth from would-be corporate success stories. Understandably, companies need to justify their otherworldly ambitions. Appointing sycophantic mystics who can tell fanciful stories about the good that can be done, if and when absolute power is achieved, is one way to do it. How else can you sell total market dominance and monopoly to yourself and others?

From Merlin to Rasputin, the role of chief mystic has always been to provide vindication for the pursuit of outsized power while keeping reality at bay. The same is true of their corporate successors today. The danger comes in the malleability of the narratives if and when a region or population no longer supports your corporate power agenda. Amazon’s recent retreat from New York is a case in point.

And that is the problem: wishy-washy roles encourage wishy-washy thinking, which distracts leaders and investors from the realities at hand. The corporate world would be much better off with more chief critical thinking officers ready to speak truth to power rather than fanning it further. Sadly, there are no such roles on LinkedIn.”

“You know, it’s very simple. I like the idea, and I think that’s my biggest success, that everybody knows visually how I look but nobody knows me in fact. I never made an effort to be visible. I just became visible. I don’t know why. I am not a singer. I am not an actor. I have no scandals. In fact, nobody knows anything about me.” — Karl Lagerfeld

“When Lagerfeld was hired a few years after Coco Chanel’s passing, many thought he would destroy her design legacy, which, in fact, he did — but he did so in order to rebuild, to refresh the Chanel look. Since then, he reinstated Chanel at the top of the fashion industry, believing in instinct, but never thinking there was ever a single creative genius moment that did the magic, and therefore working tirelessly; unconcerned about marketing, yet always aware that his designs had to sell. It is this mix of contradictions that led him to develop a rare balanced mindset between art and business, one that just got rarer in the world since his passing [on February 19th],” Mukti Khaire writes in The Business of Being Karl Lagerfeld, Creator.

“They said, ‘Oh, Chanel would be shocked to death!’ But they didn’t want the homage — the respectful shit — either. So to survive you have to cut the roots to make new roots. Because fashion is about today. You can take an idea from the past, but, if you do it the way it was, no one wants it.” — Karl Lagerfeld in In the Now. (Photograph by Francois Guillot / AFP-Getty Images)

“‘His major strength is to be about his business in the present and never have a moment for other people to think that he’s passé,’ Michael Roberts, the fashion director of Vanity Fair (and, before that, of this magazine) and a friend of Lagerfeld’s for thirty years, says,” John Colapinto wrote in The New Yorker in 2007. “Lagerfeld has maintained his preëminence for five decades, and without any visible sign of strain — unlike his contemporary Yves Saint Laurent, who, until he retired, in 2002, took a Proustian attitude to designing collections, experiencing nervous breakdowns over the hemline juste. ‘Yves pursued the goal of poetic designer suffering for his art,’ Roberts says. ‘I can’t imagine Karl for one minute sitting down and thinking, I’m going to suffer for my art. Why should he? It’s just dresses, for God’s sake.’”

For Chanel’s couture Fall/Winter 2013 show, Lagerfeld transformed the Grand Palais into a bombed-out theater. The curtains parted under a scorched proscenium arch to reveal a skyline of glittering post-modern architecture, whose shiny glass surfaces were in turn reflected in a procession of sequin-crusted clothes. (Photograph by Stephan Cardinale/Corbis via Getty Images)

“Karl Lagerfeld loves only the present. He loves work and does eight collections a year for Chanel, as well as his work for Fendi and other companies. In conversational terms, he takes to the track like a prize racehorse, not only groomed, but leaping the fences and taking the corners with brio. Unlike most people in fashion, he actually likes questions, gaining on you one moment, falling back the next, but never resting on his laurels. I don’t know if I’ve ever met anyone more fully native to their own conception of wonder. That’s to say: He lives out his own legend in every way he can think of, with every instinct he has, and in a world of stolid conventions, he has the courage to perpetuate a vision of something wonderful. He also has the intelligence not to take himself terribly seriously, laughing easily, sending up his own iconic status, and — God save us — actually thinking about the world he makes money from, instead of just feasting on its vanities. Lagerfeld is a man on top of his own greatest invention: himself. And believe it or not he has the talent and the good taste, after all these years, to continue finding the world mysterious, and to give himself wholeheartedly to its discovery. There’s nothing that doesn’t interest Lagerfeld, except perhaps death,” Andrew O’Hagan wrote in an article for The New York Times Magazine in 2015, entitled The Maddening and Brilliant Karl Lagerfeld.

For Fall/Winter 2014, Lagerfeld played a game of Supermarket Sweep and transformed the Grand Palais into a fully functioning supermarket. No detail was spared, with plenty of double-C-emblazoned food items and even a hardware section fully stocked with Chanel brooms and hammers. Naturally, the usually sterile flock of fashion show attendees completely lost their chill and proceeded to loot the Chanel supermarket after the show finished. (Photograph by Catwalking/Getty Images)

“I have a desk and a lamp (and access to strong Greek coffee) in the Onassis Foundation Library, close to Hadrian’s Arch. Out of the window, across the near constant hum of thick traffic on Syngrou Avenue, I can see the vast columns of the Temple of Olympian Zeus. Their tall Corinthian capitals shine in the cool winter sun. It is a mere slingshot distance from the Acropolis and a truly privileged spot in which to work,” Simon Critchley writes in Athens in Pieces: The Art of Memory. (Illustration from Ruins of Athens, with Remains and the Valuable Antiquities in Greece, translated and edited by Robert Sayer, London, 1759.)

“Now, something that I have noticed here and there, talking to sundry folk over the past couple of years, is a renewed interest in antiquity: Greek, Roman, Babylonian, Chinese, Mayan, or whatever. This is partly because the ancient past offers some kind of solace and escape from the seeming urgency of the present — and such consolation cannot be disregarded. Antiquity can be the source of immense pleasure, a word that feels almost scandalous to employ. For a time, we can be transported elsewhere, where life was formed by different forces and shaped with patterns slightly alien to our own.

But also — and most importantly — the ancient past can give us a way of pushing back against what Wallace Stevens called ‘the pressure of reality,’ of enlivening the leadenness of the present with the transforming force of the historical imagination. As such, antiquity can provide us with breathing space, perhaps even an oxygen tank, where we can fill our lungs before plunging back into the blips, tweets, clicks, and endless breaking news updates that populate our days, and where we are ‘distracted from distraction by distraction,’ as T.S. Eliot said. By looking into the past, we can see further and more clearly into the present.” — Simon Critchley in Athens in Pieces: The Art of Memory, the first in a series of dispatches for The Stone

Thank you for reading, and Random finds will be back next week. If fortune allows, of course.
