MIT helps self-driving cars ‘see’ through snow and fog

By mapping what’s beneath the road instead of what’s on it.

Self-driving technology has come a long way, but it can still be tripped up by bad weather. A team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) may have a solution. They've developed a way to help autonomous vehicles "see" by mapping what's beneath the road using ground-penetrating radar (GPR).

Most autonomous vehicles use LIDAR sensors and/or cameras to figure out where they are on the road, but cameras can be thrown off by lighting conditions or snow-covered signs and lane markings, and LIDAR often becomes less accurate in inclement weather. GPR, on the other hand, sends electromagnetic pulses into the ground and measures the reflections from the specific combination of soil, rocks and roots beneath the road. Because that combination is effectively unique to each stretch of road, the readings can be turned into a map that a self-driving vehicle can later match against to pinpoint its position.
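The core idea lends itself to a simple illustration: record subsurface radar signatures along a route once, then localize later by matching the live reading against that stored map. Below is a minimal, hypothetical sketch in Python. The correlation-based matcher and all names (prior_map, live_scan, localize) are assumptions for illustration, not the actual LGPR algorithm.

```python
# A minimal sketch of map-based localization with ground-penetrating radar.
# Illustrative only: the array shapes, the correlation-based matcher, and all
# names here are assumptions, not the CSAIL/Lincoln Laboratory implementation.
import numpy as np

def localize(prior_map: np.ndarray, live_scan: np.ndarray) -> int:
    """Return the along-track index in the prior map that best matches
    the vehicle's current subsurface radar reading.

    prior_map: (n_positions, n_depth_bins) radar signatures recorded
               on an earlier mapping run.
    live_scan: (n_depth_bins,) the signature measured right now.
    """
    # Normalize each signature so the match depends on the *pattern*
    # of subsurface reflections, not on overall signal strength.
    map_norm = prior_map - prior_map.mean(axis=1, keepdims=True)
    map_norm /= np.linalg.norm(map_norm, axis=1, keepdims=True) + 1e-9
    scan_norm = live_scan - live_scan.mean()
    scan_norm /= np.linalg.norm(scan_norm) + 1e-9

    # Correlate the live scan against every stored position and pick the
    # best match -- that index is the vehicle's estimated location.
    scores = map_norm @ scan_norm
    return int(np.argmax(scores))

# Toy usage: a 500-position map with 64 depth bins per signature.
rng = np.random.default_rng(0)
prior_map = rng.normal(size=(500, 64))
truth = 123
live_scan = prior_map[truth] + 0.1 * rng.normal(size=64)  # noisy re-read
print(localize(prior_map, live_scan))  # -> 123
```

The appeal of this kind of matching is exactly what the article describes: the "fingerprint" being compared lives under the pavement, so snow or fog on the surface doesn't change it.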

The system, which uses a type of GPR called Localizing Ground Penetrating Radar developed at the MIT Lincoln Laboratory, offers a few benefits. For starters, it doesn't matter if the road is snow-covered or if visibility is blocked by fog. And conditions under the road tend to change less often than features like lane striping and signage.

"If you or I grabbed a shovel and dug it into the ground, all we're going to see is a bunch of dirt," says CSAIL PhD student Teddy Ort. "But LGPR can quantify the specific elements there and compare that to the map it's already created, so that it knows exactly where it is, without needing cameras or lasers."

So far, the CSAIL team has only tested the system at low speeds on a closed country road, though the researchers believe it could easily be extended to highways and other high-speed roads. They admit the system doesn't work as well in rain, when water seeps into the ground beneath the road, and that it's far from road-ready. It would also have to be paired with other sensors, since it can't detect hazards on the road itself.

A paper on the project will be published in the journal IEEE Robotics and Automation Letters later this month. The team plans to keep refining the hardware so that it's less bulky -- the current sensor is six feet wide -- and to improve its LGPR mapping techniques.