A self-driving Uber car fatally crashed into a pedestrian in Tempe, Ariz., last week, tragically illustrating the fears that some of us have long held about the dangers of these technologies. The woman appeared from a darkened area onto a road, and the police said the accident would have been hard to avoid even with a human driver behind the wheel. Yet this is not the way it was supposed to be: Autonomous cars were supposed to be better than humans in exactly such situations.

The lidar, radar and cameras that self-driving cars employ are designed to have advanced vision, and their computers have the ability to make instantaneous decisions. Yet the crash suggests that the technology may not be ready for prime time. The race among technology companies to be the first to put these cars on the road is having fatal consequences.

Uber’s self-driving vehicle system appeared to have several flaws, according to my colleague Raj Rajkumar, who heads Carnegie Mellon University’s self-driving laboratory. As he explained in an email, “What we saw on the video indicates several trouble spots with the Uber approach, design and software capabilities. There is a serious mismatch between its sensor configuration and actual usage contexts. For example, even though Uber’s self-driving vehicle has multiple cameras, their usefulness at nighttime is extremely limited at best, and they add no value during those dark hours when they do operate the vehicles.”

Rajkumar also didn’t let the operator off the hook. “The operator’s role is to act as the safety backup — when the technology fails, (s)he is required to step in. The operator in this case was distracted for a shockingly long duration of time, which culminated in the death of the pedestrian,” he wrote.

The reality is that self-driving cars are far from being able to coexist with humans on local roads. Both sides are still learning. It is one thing for a human to put the car into autopilot on a highway and another to navigate city streets onto which adults, children and animals may suddenly wander. Autonomous cars need to be relegated to special tracks and highways for at least two or three more years, until they can deal with such contingencies.

To be clear, I am not an opponent of the technology. I own a Tesla Model S and am comfortable with letting the car take control of the wheel on highways — despite the Tesla crash that occurred in 2016. But using autopilot on local roads is as dangerous as using cruise control on local roads: You just shouldn’t do it.

Toyota did the wise thing by halting testing of its autonomous cars on local roads. All other makers of autonomous cars need to do the same. Or governments may need to call the race off by declaring a moratorium until vehicles demonstrate certain minimum capabilities before they are road-tested.

Self-driving cars may bring profound improvements in our lives and slash accident and fatality rates, saving millions of lives. They could reduce the need for car ownership, because we would be able to share them, and they could deliver incontrovertible social benefits, offering the disabled on-demand personal drivers. People living in the country could finally gain access to transportation services that put them nearly on par with their city cousins. Crossing or walking next to roads may cease to be a high-risk activity.

And, eventually, these autonomous systems could replace humans at the steering wheel, just as horseless carriages replaced the horse. But injudiciously rushing into autonomous driving will lead to unnecessary accidents, justifying calls to outlaw it and halting the technology’s progress. It is better to proceed cautiously and ensure that the rewards outweigh the risks.