Is There Anything "Smart" about Smart Cities?
Strong Towns member Joe Cortright runs the think tank and blog City Observatory.
You can’t be an urbanist or care about cities without hearing—a lot—from the folks in the “Smart Cities” movement. The idea is that there’s nothing wrong with cities that a healthy dose of information (especially “big data”) or technology can’t solve. The result has been a seemingly unending series of claims that we can fix problems as challenging as housing, traffic, and inequality simply by building more elaborate models based on big data, or deploying new forms of technology. Color us skeptical: It’s hard to see how we’ll make better decisions with even bigger data when policymakers seem to routinely ignore the small and obvious data that’s already well in hand.
As an example, we start with a fairly simple problem. Lots of Americans would like to be able to walk to more of their common destinations. They pay a substantial premium to live in housing and neighborhoods with high levels of walkability. Yet we seem to be consistently building more cities and neighborhoods that are wildly car-dependent. Surely, if smart cities and technology are the solution, they ought to be able to grapple with this basic problem. But they aren’t.
We’ve frequently picked on Houston, a city which is notoriously hostile to people who want to walk (although all American cities need to improve in this regard). The typical technocratic/smart city approach gathers copious data about current travel patterns (which themselves reflect a car-dominated world), but that data contains almost no information about walking and biking, and, even more importantly, the approach never asks whether people would prefer places that make it easier, more convenient, and safer to walk over ones optimized for vehicle movement.
The way the “smart city” and technology folks approach it, “fixing” cities and transportation is all about vehicles, as their simulations illustrate.
In the “smart city” world, this kind of thinking leads to claims that we can eliminate all traffic lights at intersections by turning vehicle control over to centralized computers (essentially forgetting about bikes and pedestrians), or that autonomous taxis could eliminate congestion, claims that ignore fundamental concepts like induced demand. Researchers at MIT and the University of Texas came up with these ideas and simply assumed pedestrians and cyclists don’t exist. As Eric Jaffe explained, to these auto-oriented engineers:
It’s natural to model intersections as if cars were the only mode that mattered—especially when computer drivers make every move predictable. The driverless intersection we presented a few years ago, based on work from computer scientists Peter Stone and Kurt Dresner of the University of Texas at Austin, made the same assumptions: lots of cars, no people or bikes.
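To see how that blind spot gets baked in, consider a minimal, purely illustrative sketch in Python (ours, not the researchers’ actual code): if the objective a simulation minimizes counts only vehicle delay, then people on foot or on bikes can never influence the “optimal” design, no matter how much data the model ingests.

```python
# Illustrative sketch only: a toy objective function of the kind a
# vehicle-only intersection simulation might minimize. Pedestrian and
# cyclist delays are accepted but never counted, mirroring models that
# assume "lots of cars, no people or bikes."

def total_delay(vehicle_delays_sec, pedestrian_delays_sec=None,
                cyclist_delays_sec=None):
    """Return the 'cost' the simulation tries to minimize: vehicle delay only."""
    return sum(vehicle_delays_sec)  # people walking or biking are invisible here

# A signal-free, computer-controlled intersection scores near-perfectly,
# even while the people waiting to cross on foot wait two minutes.
print(total_delay(vehicle_delays_sec=[2, 3, 1],
                  pedestrian_delays_sec=[120, 95],  # ignored by the objective
                  cyclist_delays_sec=[60]))         # ignored by the objective
```

No optimizer, however clever, can serve people it never measures; the omission is a modeling choice, not a data problem.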
Even really smart people, armed with loads of data and “design-thinking,” tend to ask questions that are fundamentally too narrowly drawn, questions that emphasize movement for movement’s sake rather than harnessing it to any greater sense of well-being or quality of life.
Transportation planners and self-proclaimed “smart city” technologists focus on optimizing cities for vehicle movement, and place little or no value on optimizing the quality of place for people, whether they’re walking, biking, or just choosing to be in a particular place. The “Big Data/Smart City” viewpoint imagines that making cities work is only about getting from “A” to “B”, when in reality the urban challenge is creating places that people want to “be.” That’s a big, big problem.
The real challenge for cities is being more ambitious and aspirational in building the livable and inclusive places we want. That’s less about tweaking the performance of systems like transportation, and more about building strong community engagement around the vision we have for the future.
Consider housing, which is an economic, affordability, and equity challenge. Again, there are abundant technological solutions, like 3-D printed houses, that are provocative but don’t deal with the fundamental institutional problem: we simply make it illegal to build affordable housing in lots of places in the US. The effort to build a coalition around the idea of allowing “missing middle” housing is really key to all these objectives (and environmental ones, as well). And, as Portland, Oregon’s recent experience in legalizing duplexes and creating greater affordability with its Residential Infill Project shows, this isn’t just, or even primarily, about wonky modeling; it’s about political engagement. Technology, modeling, and data can play a supporting role, but the challenge is organizational, political, and communicative.
Promises of an easy technical fix, moreover, create a kind of Gresham’s law in which the prospect of impractical but technologically exciting vaporware drives out fundamental institutional reform. See, for example, Elon Musk’s magical hyperloop, currently serving as an excuse not to fix mass transit in many cities, as Alissa Walker trenchantly observes:
And each time city leaders promote one of his [Elon Musk’s] fantastical ideas—tiny tunnels! autonomous vehicles! platooning!—it does serious damage to the real-life solutions being proposed by experts that will actually make life better for their residents.
Case Study: Portland, Oregon
In theory, big data and smart city models should help us make better decisions; in practice, they’re slaves to broken and biased policies.
Almost a decade and two mayors ago, IBM’s Smart City team came to Portland to show off some of its elaborate models of urban systems. There’s precious little evidence that the IBM smart city modeling has had any staying power in the city. Take, for example, climate change, which the IBM model was meant to address. As it turns out, Portland didn’t use this modeling for its 2015 Climate Action Plan, and since then, the city has pretty much walked away from a rigorous look at how reducing driving is the key to achieving our stated (and now more ambitious) climate goals.
The bigger question is how models and their results get used in the political/institutional setting in which we live. Models, especially big, complex ones, are generally wielded as weapons by institutions to avoid or deflect scrutiny of their big decisions. Highway builders like the Oregon Department of Transportation routinely cook the books in their transportation modeling to justify giant projects, like the Columbia River Crossing and the Rose Quarter project. The public lacks the energy and resources to contest this technical work, and it becomes a huge barrier to change and to fair consideration of alternatives.
Frequently, the state highway agencies and regional planning bodies that create such models construct them in ways that systematically rule out important factors. Portland’s Metro has gone to great lengths to deny that there’s any such thing as the price elasticity of demand (perhaps the most fundamental concept in economics), arguing that driving will increase regardless of the price of gasoline. That leads Metro, in turn, to ignore the most powerful policy levers it could deploy to reduce pollution and greenhouse gas emissions and to help bolster transit ridership. Its chosen financing mechanism actually subsidizes driving, which undercuts all of its stated goals of reducing emissions and discouraging sprawl.
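To see why that denial matters, here is a hedged back-of-the-envelope sketch (the elasticity value is a commonly cited illustrative figure, not Metro’s): a model that treats driving as perfectly price-inelastic concludes that pricing can’t reduce emissions, while even a modest negative elasticity tells a very different story.

```python
# Illustrative only: how the elasticity assumption changes a driving forecast.
# A constant-elasticity demand model: VMT scales with (1 + price change) ** elasticity.
# The -0.3 value is a commonly cited long-run illustration, not Metro's number.

def forecast_vmt(base_vmt, price_change_pct, elasticity):
    """Forecast vehicle miles traveled after a price change, given an elasticity."""
    return base_vmt * (1 + price_change_pct / 100) ** elasticity

base = 100.0  # index today's driving at 100

# Model that denies price elasticity: a 25% price increase changes nothing.
print(forecast_vmt(base, 25, elasticity=0.0))   # 100.0

# Model that admits even modest elasticity: the same price signal cuts driving.
print(forecast_vmt(base, 25, elasticity=-0.3))  # about 93.5, a 6-7% reduction
```

Assume away the price response, and road pricing, parking reform, and carbon fees all look useless on paper; the model’s conclusion is baked into its assumptions.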
And even when the models show that plans aren’t achieving our goals, the powers that be simply ignore them. Witness Metro’s $5 billion transportation package, which will, according to Metro’s own estimates, reduce transportation greenhouse gases in the Portland area by 5/100ths of one percent, essentially nothing. In the face of growing evidence that their climate plans are feeble and failing, we get the repetition of discredited myths (i.e., that we’ll reduce greenhouse gases by widening roads to cut idling in traffic). And the consensus among state DOTs calls for climate arson in the form of spending hundreds of billions of dollars on new highway capacity.
Much of the “smart cities” work is engaged in and tolerated only to the extent that it supports—or at least doesn’t get in the way of—the status quo megaprojects these organizations want. Bottom line: the “smart cities” effort focuses mostly on developing algorithms for the optimal arrangement of deck chairs on the Titanic, and hasn’t convinced anyone to change course to avoid icebergs.
Big Data, Bright Lights, Blind Spots
Shining the light of big data changes our perception:
smallest details of some things come into sharp relief.
But our vision changes,
our pupils constrict, our focus narrows.
Many things that were shades of gray are now fully illuminated.
But outside the glare of the big data spotlight,
everything else is plunged into darkness.
This is the profound bias of big data,
it illuminates some things, but darkens others.
And it does so in ways that mean we make bad decisions.
We are drawn to the light.
Focused on solving the problems we see
and dismissing—out of ignorance—the ones we can’t.
When we measure movement,
we inherently advantage those who are moving through
over those who are in or of a place, stationary.
When we measure vehicles,
we inherently advantage vehicles, and penalize those who do not have vehicles.
As a result, we end up spending resources and building places
for the people who do not want to be there,
in the process, making them worse for the people who do.
Little surprise that people with choices move away.