#25: The Empty Seat
On grief, driverless cars, and the trade we make between control and survival.
The steering wheel turned itself, and the first thing I thought about was calling Jack.
I was alone in the back of a Waymo in Austin, watching the wheel make small, precise corrections as we rolled down Congress Avenue. My six-year-old was back home in Greenville, probably building something with Legos or negotiating bedtime with Beth. But he’d want to see this: the haunted car, he’d call it. So I pulled out my phone and FaceTimed him.
His face filled the screen, backlit and a little pixelated. “Dad, why is nobody driving?”
“The car knows where to go and how to get there on its own,” I told him.
He leaned closer to his camera, trying to see past me to the empty driver’s seat. “Is it a robot?”
“Sort of. It’s a computer car that can see everything around us.”
He thought about this for a second, then: “Can it see me?”
“No, buddy, I don’t think so, just the road.”
“Oh.” A pause. “That’s cool.” Then Beth called him for dinner, and he was gone.
I kept the phone in my hand for a while after, watching the wheel turn, thinking about what I’d just shown my son. He’d shared my first glimpse of a future where driving is something computers do. Where the thing I learned at sixteen, the skill that meant freedom and independence and adulthood, might be as obsolete to him as a rotary phone.
Where Robert might still be alive.
The God We Made
Robert died in the fall of our junior year.
We grew up in a small town in rural Tennessee, where a driver’s license at sixteen was an economic necessity; the nearest mall was fifteen minutes away. Your sixteenth birthday meant your parents handed you the keys to a decade-old sedan, and you became responsible for getting yourself places.
Robert got his license in September, and he died in October.
It was a Tuesday night, clear and cold. He was driving home alone from somewhere—I don’t remember where, and I’m not sure anyone ever told me. His pickup left the road on a curve he’d driven a hundred times, struck a tree, and that was it. No alcohol, no drugs, no mechanical failure. The police report said “driver inattention,” which is the official way of saying: an inexperienced teenage brain drifted for two seconds, and his life ended.
We held his funeral in the biggest church in town. I remember his mother’s face. I remember thinking that it couldn’t be real, that Robert was too present in the world to just vanish. I remember the drive home from the service, watching my own father’s hands on the wheel, suddenly aware that we were all just fragile bodies piloting two-ton machines at highway speed, trusting ourselves and everyone around us not to drift six inches in the wrong direction.
In a way, I’ve thought about Robert every time I’ve gotten behind a wheel since. Not consciously, just the awareness that driving is the most dangerous thing most of us do every day, and we do it while thinking about groceries.
* * *
The Waymo made its way through six traffic lights, two construction zones, and one oblivious pedestrian who stepped into the crosswalk against the signal. The car saw her before I did; it registered her on the display as a glowing outline, recalculated, and stopped with room to spare. No horn, no drama, no flash of irritation that someone had violated the implicit contract of right-of-way.
A sixteen-year-old driver, any sixteen-year-old, including the one I used to be, might have been looking at his phone. Might have been changing the radio station. Might have been thinking about the girl in third period, the argument with his parents, or the basketball game on Friday.
The car was thinking about nothing except not hitting the pedestrian.
What We Worship
Americans worship at multiple altars, but few devotions run deeper than the one we’ve built around the automobile. The driver’s license is a secular confirmation, a ritual passage from childhood dependence to self-directed adulthood. The open road appears in our literature, our music, our foundational myths about who we are: Easy Rider, Born to Run, On the Road; the titles alone constitute a liturgy.
In rural Tennessee, it’s how you get to work, to school, to the doctor, to the life you’re trying to build. Taking away someone’s license can end their employment, their independence, and their ability to function in the geography they’re stuck in.
So when I say I can’t wait to give up driving, I’m conscious of what I’m asking people to surrender. This isn’t like switching from film cameras to digital. It’s more like being told that the skill you learned at sixteen, the capability that underwrote your independence for decades, is now, statistically speaking, insufficient, and that your judgment, honed over hundreds of thousands of miles, isn’t as good as an algorithm. I’ve come to terms with the fact that I’m not as good.
That’s not a small thing to ask. It touches identity, autonomy, and the whole American mythology of self-reliance.
I know the resistance is deeper than Luddism, and it’s not irrational to resist, but it might be lethal to insist.
What the Numbers Actually Say
I’ll try to be precise about the data, because this is where most arguments about autonomous vehicles either get dishonest or sloppy.
Waymo has driven over 20 million fully autonomous miles across several cities, and their reported injury crash rate per million miles is 0.41, compared to a human benchmark of 2.78, an 85% reduction. Their property damage claim rate shows similar improvement. A Swiss Re analysis of insurance data found that Waymo vehicles demonstrate substantially lower frequencies of property damage and bodily injury claims than human drivers in comparable urban and suburban environments.
Meanwhile, the National Highway Traffic Safety Administration reported 42,514 traffic fatalities in the United States in 2022, a number that’s remained stubbornly stable for years despite massive investments in vehicle safety technology. The World Health Organization estimates 1.19 million road deaths globally each year, and traffic crashes are the leading cause of death for people aged 5-29.
Here’s what those numbers don’t tell you:
They don’t tell you how autonomous vehicles perform in snow, ice, or heavy rain, conditions where even the best sensors struggle. Or when a whole city has a power outage, and the traffic lights stop working. They don’t tell you how the systems handle true edge cases, the scenarios so rare that even millions of miles might not surface them. They don’t tell you what happens when a child runs into the street chasing a ball, when a driver ahead has a medical emergency and stops unpredictably, when construction workers wave you into oncoming traffic, or when a plastic bag blowing across the road looks exactly like a pedestrian to a computer vision system.
They don’t tell you what happens when the technology fails, and it has and will continue to fail, or when bad actors deliberately try to confuse it. They don’t tell you whether the safety benefits hold up as the technology scales from test fleets in favorable conditions to millions of vehicles in every weather condition in every city.
The technology isn’t perfect, and it never will be. People will die in autonomous vehicles. But the important question isn’t whether AVs eliminate all risk; nothing does. The question is whether they reduce harm compared to the alternative, and by how much, and under what conditions.
Based on current data, in controlled urban environments, with mature systems like Waymo’s, the answer appears to be: yes, substantially.
That “appears to be” matters. We’re still early. Sample sizes are smaller than we’d like, and deployment has been limited to mostly favorable conditions, so the gap between testing and reality is real. Skepticism is warranted, and caution is appropriate.
But we also know with certainty what the alternative looks like. Human drivers killed 42,514 people in the US in 2022. Ninety-four percent of serious crashes involve human error: distraction, impairment, fatigue, or speeding. These are problems that don’t afflict software.
Robert would have turned 47 this year.
The burden of proof cuts both ways. Yes, autonomous vehicles must prove they’re safer, but at some point, we have to ask whether human drivers can justify these numbers as an acceptable cost of maintaining control.
The Cost of Control
I teach Jack to look both ways, to watch for drivers who might not see him, and that a green light means it’s legal to cross, not safe; you still have to verify that the vehicles have actually stopped.
These are good lessons, necessary lessons, and they’re also an admission that we’ve built a transportation system where six-year-olds must develop defensive survival skills just to walk to school.
The freedom we celebrate, the autonomy of the open road, has always existed in tension with mutual vulnerability. Driving isn’t a private act. It’s a continuous dance with strangers where the stakes are life and death, and the contract is enforced imperfectly by laws, social pressure, and fear of consequences.
We’ve already surrendered enormous amounts of control in the name of safety: seatbelt laws, speed limits, sobriety checkpoints, vehicle inspections, graduated licensing for teenagers, and mandatory insurance. The entire infrastructure of traffic regulation exists because we collectively decided that individual freedom must bend to collective safety.
We didn’t do this happily. People fought seatbelt laws. They argued that mandatory helmets for motorcyclists infringed on personal liberty. They still argue that speed limits on empty roads are government overreach. Every safety regulation is a small surrender of autonomy, and every one was controversial when introduced.
But we did it anyway, because the alternative was counting bodies.
Most modern cars already intervene constantly: automatic emergency braking, lane-keeping assistance, blind-spot monitoring, and adaptive cruise control. These systems exist because automobile companies know what we’re reluctant to admit: the Platonic ideal of the alert, skilled driver in full control is largely a myth.
We are distracted. We are tired. We misjudge distances and speeds. We look at our phones. We fight with our partners. We rehearse conversations. We think about work. We glance away at exactly the wrong moment. The human mind is a meaning-making machine, not an attention-sustaining machine. It’s built to wander, to make connections, to get bored with repetitive tasks, which describes most driving most of the time.
Robert’s mind drifted for two seconds on a road he knew by heart, and the laws of physics don’t care about familiarity.
I’ve had close calls; I bet everyone who drives has. Think about that moment when you look up, and the car ahead has stopped, and you slam the brakes, and your body floods with adrenaline, and you think “that was almost—” and then you keep driving. We’ve normalized those moments and accepted them as the price of mobility.
But what if we don’t have to?
What We Actually Lose
I need to be honest about this: there is something genuine and valuable that disappears when the steering wheel becomes decorative.
I’ve had perfect drives. Highway 1 down the California coast with the windows open, and nowhere I needed to be. Back roads in Vermont during October when the leaves hit peak color. Empty highways at dawn when I’m the only person awake for miles, music playing, coffee still hot, the road unspooling ahead like possibility itself.
In those moments, driving becomes more than just transportation: it’s meditation, flow state, communion with machine and landscape. The car becomes an extension of intention. The road becomes conversation. Time collapses into the present tense in a way that’s increasingly rare in modern life.
Those experiences are real. The pleasure is real. For some people, I know that it’s more than pleasure; it’s passion. I know car enthusiasts who know their vehicles intimately, who’ve modified and maintained them, who experience driving as craft, and people who grew up working on engines with their fathers, who can diagnose problems by sound, who’ve built identity and community around automotive culture.
There are also professional drivers who’ve built entire careers around their skill, three and a half million of them in the United States alone: truck drivers, taxi drivers, delivery drivers, bus drivers. These are people who’ve spent decades developing expertise that’s about to become obsolete. They have families, mortgages, and lives built on the assumption that their skills have market value.
Telling them to learn to code or retrain for something else is an absurd request that ignores economic reality. Many are in their fifties or sixties or live in places where there are no other jobs. I know the coming automation will destroy some of these people financially, and “creative destruction” doesn’t spend much time thinking about the destroyed.
All of that matters. All of it deserves more than dismissive reassurance that everything will work out. The transition to autonomous vehicles will create winners and losers, and we should be honest about who ends up where.
The tension is real. Every one of those 42,514 deaths in 2022 left behind family, friends, futures that ended on an ordinary Tuesday or Saturday or whenever their particular variables lined up wrong. The romance of the open road has always had a body count.
I don’t expect everyone to weigh these trade-offs the same way I do. For some people, what’s lost will always matter more than what’s gained. I understand that. It’s a legitimate difference in values, a different calculus about what kind of freedom matters most.
But I am interested in asking whether that romance is worth its cost.
And I’m interested in asking who gets to make that decision.
The Harder Questions
Everything I’ve written so far is the easy part. The hard part is what comes next.
Who owns the data from every trip you take?
Right now, Waymo knows where you go, when you go there, how long you stay, and who goes with you. That information has extraordinary commercial value and extraordinary surveillance potential. The privacy implications of ubiquitous autonomous vehicles make current concerns about smartphone tracking look quaint. We’re talking about a detailed map of human movement, commercially owned, potentially accessible to governments, marketers, insurance companies, and anyone willing to pay or compel access.
We haven’t begun to grapple with this seriously. The legal frameworks don’t exist. The democratic debate hasn’t happened. We’re sleepwalking into a surveillance infrastructure that will make opting out of mobility tracking nearly impossible.
What happens to those 3.5 million professional drivers?
The usual answer involves retraining programs, economic transitions, and faith in labor market adaptability. The reality is that many of these people will be economically destroyed by automation, and we don’t have good solutions for that.
Universal basic income is a theory. Job guarantees are a slogan. The social safety net is already inadequate for current needs. The idea that we’ll suddenly develop the political will to support millions of displaced workers at the exact moment their labor becomes worthless is optimistic, bordering on delusional.
Some people will land on their feet, but many won’t. The cost of safer roads might be measured partly in communities that depended on driving jobs and don’t anymore, in families that can’t make rent, in middle-aged men trying to explain to their kids why everything changed.
I don’t have an answer to this that feels adequate. Neither does anyone else. But pretending the problem will solve itself is not realistic.
How should an autonomous vehicle make decisions in no-win scenarios?
The trolley problem isn’t just a philosophy seminar exercise anymore; it’s an engineering specification. When a crash is unavoidable, whose safety does the algorithm prioritize? Its passengers? Pedestrians? The greatest number? The youngest? The legally innocent party?
These aren’t just hypothetical questions anymore; now they’re value judgments that we’re encoding into software that will execute them at millisecond speed, without appeal, thousands of times per day, and we’re doing it without anything resembling democratic input into what those judgments should be.
Should the car swerve to avoid a child even if it means crashing into a concrete barrier and potentially killing its passenger? Should it protect its passenger at all costs? Should it calculate expected years of life remaining and optimize for that? Should it factor in fault: whether the pedestrian was jaywalking or the other driver ran a red light?
Every answer is defensible, and every answer is horrifying to someone. And right now, it seems these decisions are being made by engineers and corporate lawyers, not through public deliberation.
Who’s liable when an autonomous vehicle causes harm?
Our entire legal framework for traffic crashes assumes human drivers. When an AV hits someone, who gets sued? The passenger, who had no control? The manufacturer? The software company? The city that permitted deployment? The sensor manufacturer whose LIDAR failed to detect the pedestrian?
We don’t know. The case law doesn’t exist yet. The insurance models are provisional. The regulatory frameworks are incomplete. We’re deploying technology that will generate novel legal questions at scale, and we’re hoping we’ll figure it out as we go.
Maybe we will. But “move fast and break things” feels different when the things being broken are people.
What I Tell My Son
Jack is six. If the optimistic timelines are correct, he might never need to learn to drive. By the time he’s sixteen, the steering wheel could be as archaic as a hand-crank ignition.
I don’t know how I feel about that.
Part of me wants him to experience what I experienced at sixteen: that first unsupervised drive, windows down, music too loud, the sudden expansion of the world into something he could navigate alone. The rite of passage. The proof of competence. The freedom.
Part of me remembers Robert’s funeral.
And part of me thinks about that FaceTime call, about Jack’s face on the screen asking if the car could see him, about how quickly he accepted the strangeness and moved on. About how the generation growing up with this technology won’t experience it as a loss because they’ll never have had the thing we’re giving up.
If we do this right, and that’s a massive “if” that contains multitudes of ethical, economic, and technical challenges, Jack will grow up in a world where traffic fatalities are rare enough to make national news, where the elderly and disabled have access to mobility that doesn’t depend on begging rides or paying surge pricing, where commutes become time recovered for reading, working, sleeping, thinking, writing, or just watching the world go by, and where children don’t need to develop defensive survival skills just to cross the street.
Where some kid in a small town in Tennessee gets distracted on a Tuesday night and the car just... corrects for it.
That world I’m dreaming of requires giving up something real, answering hard questions we’re currently avoiding, and caring about displaced workers and privacy and algorithmic accountability in ways we’re not yet demonstrating. And it requires humility about what we don’t know, caution about unintended consequences, and willingness to slow down when the answers aren’t ready, but it might also mean that fewer parents have to stand in a church looking at a closed casket.
The Haunted Car
I showed Jack the video from the Austin Waymo ride last week. He watched the empty driver’s seat, the steering wheel turning itself, the display showing glowing outlines of pedestrians and cars.
“It’s like a ghost is driving,” he said.
“Sort of.”
“But a nice ghost.”
“Yeah. A nice ghost that doesn’t get tired or distracted.”
He thought about this. “Does it know where our house is?”
“It could, if we told it.”
“Could it take me to school?”
“Maybe someday.”
He went back to his Legos. The strangeness had already become normal. The haunted car was just another piece of technology in a world full of inexplicable magic.
I’m forty-five. I’ve been driving for nearly thirty years. I’ve driven across the country twice. I’ve navigated rush hour in Boston and learned to parallel park in San Francisco and driven through blizzards in Colorado. I’ve experienced what it feels like to handle a car well, to read traffic like music, to make a vehicle an extension of will.
I still can’t keep my attention perfectly focused every second I’m behind the wheel. Nobody can.
Waymo can.
That’s not everything. It doesn’t resolve the privacy concerns, the job displacement, the ethical dilemmas, or the regulatory gaps. It doesn’t mean we should rush deployment or ignore the people who’ll be harmed by the transition. It doesn’t mean the technology is ready for every condition or that the risks are fully understood.
But it means that the fundamental problem—human attention in a task that demands perfect vigilance—has a potential solution that didn’t exist when my friend Robert died.
And that means these deaths aren’t inevitable. They’re a choice we’re making by default: every year we delay, every time we prioritize the feeling of control over the reduction of harm.
Robert should have made it home that Tuesday night. The road was clear. The weather was fine. Nothing dramatic happened, just the ordinary drift of an inexperienced teenage mind and the extraordinary consequences of being wrong for two seconds.
The ghost is driving now.
We should let it.


