Random finds (2016, week 43–44) — On dumb ‘smart’ devices, artificial intelligence, and the rise of the urbanpreneur

Mark Storm
Nov 4, 2016 · 12 min read
Let there be space. How a Norwegian renovation project is combating stress with a new approach to urban art and architecture. (source: Quartz)

Every Friday, I run through my tweets to select a few observations and insights that have kept me thinking.

On dumb ‘smart’ devices

For as long as there’s been an internet, its evangelists have assured us increased connectivity will yield a brighter future. “By connecting all manner of devices, from cars to appliances to clothing, to the web, we’ll realize more convenience and efficiency,” Oliver Staley writes in The dream of a fully connected future is starting to look like a nightmare. “We’ll be able to adjust our thermostat before we get home and have milk delivered when the refrigerator senses we’re running out. According to one analysis, 6.4 billion devices will be online by the end of the year.”

But the gadgets connecting the Internet of Things haven’t been designed with the robust security we now take for granted on our phones and laptops. They don’t have enough memory for security software, use generic code, and access the web by default. And as more devices come online, our vulnerability only grows — the machines being hacked might not just be relatively harmless gadgets like cameras but potentially lethal tools like cars. Last month, Chinese tech company Tencent alerted Tesla that it had been able to hack into its cars and activate their driving and braking systems, according to a recent article in WIRED. Tesla has since beefed up security in all its cars.

A Tesla Model X is displayed inside of the new Tesla flagship store in San Francisco. Photograph: Justin Sullivan/Getty Images.

According to Staley, “the solution is likely new, industry-wide standards for security and an independent organization that can issue seals of approval on packaging that attest to the safety of products, so customers can know what they’re buying, Krebs [security journalist Brian Krebs] says. The European Commission is already drafting requirements — but they won’t fix the billions of vulnerable devices already out there.”

Our crippling vulnerability to malicious attacks is a problem created by technology, and if they can’t fix it, governments must.

“In their rush to give us the new and improved,” Staley argues, “technology companies wave off concerns about the unintended consequences of digital lifestyle. Security flaws are just another bug to be fixed, or better, an opportunity to market a new product. Many of Silicon Valley’s moguls embrace a quasi-libertarian outlook that rejects regulation and say technology can solve problems the government can’t. But our crippling vulnerability to malicious attacks is a problem created by technology, and if they can’t fix it, governments must.”

“So where is the smart online criminal going to go next? Obligingly, the tech industry has provided him with the capability to assemble even bigger botnets with much less effort. The new magic ingredient is the internet of things — small, networked devices that are wide open to penetration.” (source: Why the internet of things is the new magic ingredient for cyber criminals, by John Naughton, The Guardian)

A depiction of the outages caused by recent DDoS attacks on Dyn, an Internet infrastructure company.

“Sims [Richard Sims is a product development consultant at the Technology Partnership] suggested manufacturers were needlessly over-equipping their connected products. ‘Why would you want your baby monitor to be contactable from outside your home at all? Maybe there are ways to mitigate some of this stuff by being a little bit more intelligent about how you open things up to the broader internet.’” (source: ‘Smart’ devices ‘too dumb’ to fend off cyber-attacks, say experts, by Damien Gayle, The Guardian)

On artificial intelligence

“Speaking to attendees at a deep learning conference in London last month, there was one particularly noteworthy recurring theme: humility, or at least, the need for it,” James Vincent writes in These are three of the biggest problems facing today’s AI.

“While companies like Google are confidently pronouncing that we live in an ‘AI-first age,’ with machine learning breaking new ground in areas like speech and image recognition, those at the front lines of AI research are keen to point out that there’s still a lot of work to be done. Just because we have digital assistants that sound like the talking computers in movies doesn’t mean we’re much closer to creating true artificial intelligence.”

Machine learning in 2016 is creating brilliant tools, but they can be hard to explain, costly to train, and often mysterious even to their creators.

One of the problems facing today’s AI is that “all our current systems are, essentially, idiot savants.” Once they’ve been trained, they can be incredibly efficient at tasks like recognizing cats or playing Atari games. But there is no neural network in the world, and no method right now that can be trained to identify objects and images, play Space Invaders, and listen to music.

But the problem is even worse than that, Vincent argues. “When Google’s DeepMind announced in February last year that it had built a system that could beat 49 Atari games, it was certainly a massive achievement, but each time it beat a game the system needed to be retrained to beat the next one. As Google DeepMind research scientist Raia Hadsell points out, you can’t try to learn all the different games at once, as the rules end up interfering with one another. You can learn them one at a time — but you end up forgetting whatever you knew about previous games. ‘To get to artificial general intelligence we need something that can learn multiple tasks,’ says Hadsell. ‘But we can’t even learn multiple games.’”

The basic layout of a progressive neural network. (Image credit: DeepMind/Raia Hadsell)

“A solution to this might be something called progressive neural networks — this means connecting separate deep learning systems together so that they can pass on certain bits of information. In a paper published on this topic in June, Hadsell and her team showed how their progressive neural nets were able to adapt to games of Pong that varied in small ways (in one version the colors were inverted; in another the controls were flipped) much faster than a normal neural net, which had to learn each game from scratch.”

The method seems promising: in recent experiments with robotic arms, it accelerated their learning process from a matter of weeks to just a single day. But there are still significant limitations. “Progressive neural networks can’t simply keep on adding new tasks to their memory. If you keep chaining systems together, sooner or later you end up with a model that is ‘too large to be tractable,’ Hadsell says. And that’s when the different tasks being managed are essentially similar — creating a human-level intelligence that can write a poem, solve differential equations, and design a chair is something else altogether.”
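To make the idea of lateral connections a little more concrete, here is a minimal sketch in Python/PyTorch. It is not DeepMind’s implementation, and the class names, layer sizes, and adapter layers are illustrative assumptions; it only shows the basic pattern Hadsell describes: a column trained on one task is frozen, and a new column for the next task taps into its hidden activations instead of overwriting them.

```python
# Minimal, illustrative sketch of the progressive-network idea (not DeepMind's code).
# Column A is trained on task A and then frozen; column B learns task B while
# receiving lateral connections from column A's hidden layers, so earlier
# knowledge is reused rather than forgotten.
import torch
import torch.nn as nn


class Column(nn.Module):
    """One task-specific column: two hidden layers plus an output head."""

    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        return self.head(h2), (h1, h2)  # return hidden activations for lateral reuse


class ProgressiveColumn(nn.Module):
    """A new column with lateral adapters from a frozen, previously trained column."""

    def __init__(self, prev_column, in_dim, hidden, out_dim):
        super().__init__()
        self.prev = prev_column
        for p in self.prev.parameters():  # freeze task A's weights
            p.requires_grad = False
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.lat2 = nn.Linear(hidden, hidden)       # lateral adapter from prev column's h1
        self.lat_head = nn.Linear(hidden, out_dim)  # lateral adapter from prev column's h2
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        _, (p1, p2) = self.prev(x)                     # frozen features from task A
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1) + self.lat2(p1))  # combine new and reused features
        return self.head(h2) + self.lat_head(p2)


if __name__ == "__main__":
    col_a = Column(in_dim=8, hidden=16, out_dim=4)         # imagine this was trained on task A
    col_b = ProgressiveColumn(col_a, in_dim=8, hidden=16, out_dim=4)
    logits = col_b(torch.randn(2, 8))                      # only col_b's own weights would be trained
    print(logits.shape)  # torch.Size([2, 4])
```

In this pattern the frozen column keeps its task-A skills intact, while the new column trains only its own weights plus the small lateral adapters, which is what lets learning on a related task start from prior knowledge rather than from scratch. It also makes the limitation quoted above visible: every additional task adds another column and another set of adapters, so the model keeps growing.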

“In conclusion, consumers seem ready to accept AI, or at least are ready for what they think AI is right now. Perceptions are undoubtedly colored by stories in the media about robots, drones, gaming, and speech recognition. But many people appear willing to use AI to save time, complete dangerous tasks, and make their lives easier. Of course, the biggest challenge will likely be helping consumers overcome genuine fears about jobs being replaced and increased cybercrime.” (source: What Do People — Not Techies, Not Companies — Think About Artificial Intelligence?, by Leslie Gaines-Ross, Harvard Business Review)

“Predictive learning, also called unsupervised learning, is the principal mode by which animals and humans come to understand the world. Take the sentence ‘John picks up his phone and leaves the room.’ Experience tells you that the phone is probably a mobile model and that John made his exit through a door. A machine, lacking a good representation of the world and its constraints, could never have inferred that information. Predictive learning in machines — an essential but still undeveloped feature — will allow AI to learn without human supervision, as children do. But teaching common sense to software is more than just a technical question — it’s a fundamental scientific and mathematical challenge that could take decades to solve. And until then, our machines can never be truly intelligent.” — Yann LeCun, Facebook’s director of artificial-intelligence research (source: What’s Next for Artificial Intelligence, The Wall Street Journal)

A bit more …

“Entrepreneurs can live anywhere in the world and focus on any industry. Urbanpreneurs embed in their socio-ecological environments — cities and towns — to draw influence. They’re tapping into what cities have to offer so they can collaborate and innovate in their community,” says Boyd Cohen in an interview with Citylab about his latest book written with Pablo Muñoz, The Emergence of the Urban Entrepreneur.

Photograph: Elijah Nouvelage/Reuters.

“The book explores the changing trends in entrepreneurship and how it has been urbanizing in recent years. We know the creative class has started to migrate to urban areas in cities around the globe. Lower-cost innovation and less need for physical space have converged and changed the way we work. As part of the creative class, tech entrepreneurs and their staff are increasingly preferring to live and work in dense, diverse urban areas with excellent public transit, cultural amenities, food, and night life.”

“Systems are in a state of constant change,” writes Duncan Green, the author of How change happens, in Radical thinking reveals the secrets of making change happen. “Jean Boulton, one of the authors of Embracing Complexity, likes to use the metaphor of the forest, which typically goes through cycles of growth, collapse, regeneration, and new growth. In the early part of the cycle’s growth phase, the number of species and of individual plants and animals increases quickly, as organisms arrive to exploit all available ecological niches. The forest’s components become more linked to one another, enhancing the ecosystem’s ‘connectedness’ and multiplying the ways the forest regulates itself and maintains its stability. However, the forest’s very connectedness and efficiency eventually reduce its capacity to cope with severe outside shocks, paving the way for a collapse and eventual regeneration. Jean argues that activists need to adapt their analysis and strategy according to the stage that their political surroundings most closely resemble: growth, maturity, locked-in but fragile, or collapsing.”

Traders on floor of the New York Stock Exchange in 2008. Earthquakes are often unforeseeable, despite later claims. Photograph: Mike Segar/Reuters.

“Change in complex systems occurs in slow, steady processes such as demographic shifts and in sudden, unforeseeable jumps. Nothing seems to change until suddenly it does. When British Prime Minister Harold Macmillan was asked what he most feared in politics, he reportedly replied in his wonderfully patrician style, ‘Events, dear boy.’ Such ‘events’ that disrupt social, political, or economic relations are not just a prime ministerial headache. They can open the door to previously unthinkable reforms.”

Further reading: How do you go about embracing complexity? It’s complicated, also by Duncan Green.

“We’re moving in general too fast. We have to go more slowly, have more organic processes, and plan much further ahead than we do now. When you’re doing city development you can’t rely on linear processes … because you’ll miss things along the way.” — Anne Beate Hovind in A Norwegian renovation project is combating stress with a new approach to urban art and architecture.

“These days […] the sharing economy feels a bit past its prime,” says Ben Tarnoff in The future: where borrowing is the norm and ownership is luxury. “After an initial wave of investor enthusiasm, a few ‘sharing’ startups such as Uber and Airbnb have hit it big, but many more have fallen flat. It turns out, people are happy to pay for rooms or rides but are less thrilled about renting PlayStations or power tools. ‘The Sharing Economy is Dead,’ Fast Company declared last year, summarizing a general sense of fatigue with what now feels like a wildly oversold idea.”

“But what if the postmortems are premature? New paradigms usually don’t arrive all at once, but by fits and starts. The first phase of the sharing economy might have fizzled out, but new technologies may soon resurrect it in a far more radical form. And if they do, the result won’t just be a new app for renting power tools. It’ll be the biggest revolution in property ownership since the birth of capitalism.”

“Fin-de-siècle factories and modern-day workplaces aren’t too dissimilar. Both were designed explicitly for teams of specialists.” — Lisa Baird in Want To Be More Productive And Creative? Collaborate Less.

In Scientific American, Scott Barry Kaufman writes about his recent experiences at the Futurist Imagination Retreat organized by the Imagination Institute and the Institute for the Future (IFTF) in Palo Alto, California.

“I was personally so inspired by the discussions,” Kaufman writes, “and learned a lot about how futurists imagine future scenarios. I was particularly struck by how prosocial the group was — they specifically wanted to use their imaginations to make the world a better place. Another big takeaway for me was that this way of thinking is a skill that can be taught, and that society would benefit greatly if we all learned the valuable skills that enable us to think beyond what is to the many ways things could be.”

“Futurists are trained to imagine distant realities that to others seem implausible, or even impossible, today: technologies that don’t exist yet, dramatic changes to social norms or laws, detailed scenarios such as the strange pandemic most likely to infect us in the year 2031, or new forms of government that may unfold when space colonization becomes commonplace. Even if such possible futures can be interesting to consider, most lay people view them as little more than an intellectual curiosity. What is the practical purpose in contemplating a world thousands of tomorrows away, a world that may never actually come to pass, when there are so many pressing concerns right now?

Why indeed think about such far-off futures? Are there psychological and social benefits to imagining the world, and our lives, decades in advance? And if so, what does it take to become good at imagining the far future? How do you evaluate skill, success and impact? What obstacles might prevent someone from being able, or willing, to practice far-future thinking — and how can we help people overcome them?” — Jane McGonigal and Mark Frauenfelder

You can read the full report here, prepared by Jane McGonigal and Mark Frauenfelder. Also, here’s a neat write-up by Mark Frauenfelder.

“The conventional wisdom goes that, for all its fury and fanfare, brutalism was a mere speck in architectural history; a 20-year experiment from 1955 to 1975 that produced some interesting outcomes and some downright weird cityscapes. Modernism is over, our faith in the future is damaged, and architecture today should just respond to what’s already there rather than creating brave new worlds built from great cliffs of exposed concrete and multiple, vertiginous levels.”

But according to Christopher Beanland in What cities of the future can learn from brutalism, “brutalist buildings looked to the future. They recognised that they were alien, sometimes scary, and not like what was already there. They were an investment in the city, showing great vision and optimism. Much of what has been built in cities in the last 10 years often looks like it’s come straight back from the past — especially houses, many of which just pander to false notions of a cosy history.”

Keeling House, a 16-storey block of flats in Bethnal Green, London, was British architect Denys Lasdun’s attempt to break away from certain ideas of mass housing derived from Le Corbusier while keeping other elements that were relevant to British living conditions.

“What cities can really learn from brutalism is faith. Brutalism was an act of faith, a song to the city. It was rough and tough but so are cities — they’re not the countryside. If we built like this again it would say that we love our cities and we see them as places that can be exciting, innovative, fair, prosperous and long-lasting.”

Richard ‘Bucky’ Buckminster Fuller is “a forgotten hippie idol, a sage of 1960s counterculture,” says Samanth Subramanian. Forgotten or not, what can we learn from Buckminster Fuller’s faith in technology?

“In the years after Buckminster Fuller’s death, the world’s faith in technology faded,” Subramanian writes in Buck to the future. “Globalisation and the emergent neoliberal consensus accompanied environmental crises, the failures of assorted technological panaceas, and a mistrust of officials and experts. We are emerging only now from the shadow of these other, deeper disappointments, into the bright, confusing light of a new technological boom.”

The Montreal Biosphere designed by Richard Buckminster Fuller for the 1967 World’s Fair, Expo 67. Courtesy Wikipedia.

“Perhaps this is why there is a certain reassurance to be mined from Buckminster Fuller’s principles. A belief in technology as salvation is really a belief in humankind’s capacity to use it as such, to not bend it to damaging ends. Fuller was a tireless optimist when it came to his species. We have been endowed with intelligence, he thought, so we occupy a special place in the universe. ‘Nature was,’ he wrote, ‘clearly intent on making humans successful.’ This was the quintessential Buckminster Fuller paradox: he doubted our ability to mend our imperfections, but he was confident in our facility to develop technologies that can outwit them.”

Leonard Bernstein

“I am a beginner … all the time.” — Leonard Bernstein in Dinner with Lenny.


Mark Storm

Helping people in leadership positions flourish — with wisdom and clarity of thought