MIT Is Using AI to Predict What Roads Look Like Behind Obstructions


MIT is working with the Qatar Computing Research Institute to build out maps with AI.

“While visiting Qatar, we’ve had experiences where our Uber driver can’t figure out how to get where he’s going, because the map is so off,” said Sam Madden, a professor at MIT’s Department of Electrical Engineering and Computer Science. “If navigation apps don’t have the right information, for things such as lane merging, this could be frustrating or worse.”

How are they bridging the gap? With satellite images. The biggest issue is that roads can be blocked by buildings, trees, or street signs. To overcome this, they created RoadTagger, a neural network that automatically predicts what roads look like behind obstructions. It can infer how many lanes a given road has and whether it is a highway or a residential road.

RoadTagger combines a convolutional neural network (CNN) with a graph neural network. First, raw satellite images of the roads in question are fed to the CNN, and the roadway is divided into 20-meter segments called “tiles.” The CNN extracts relevant road features from each tile, and the graph neural network then shares that data with nearby tiles, so information about the road propagates along it. If one tile is covered by an obstruction, RoadTagger can draw on the surrounding tiles to predict what lies in the blocked one.
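The propagation idea can be illustrated with a toy sketch. This is not the actual RoadTagger implementation (which uses learned CNN features and a trained graph network); here, a simple neighbor-averaging step stands in for the learned message passing, and the per-tile "feature" is just a hypothetical lane-count value. All names and values below are illustrative assumptions.

```python
import numpy as np

def propagate(tile_features, occluded, n_rounds=10):
    """Toy message passing along a 1-D chain of road tiles.

    tile_features: (n_tiles, d) array of per-tile features
                   (a stand-in for CNN output).
    occluded: boolean mask; True where the satellite view is blocked.
    Visible tiles keep their observed features; occluded tiles
    repeatedly adopt the average of themselves and their neighbors.
    """
    feats = tile_features.astype(float).copy()
    for _ in range(n_rounds):
        # Pad with edge values so the first/last tiles have "neighbors".
        padded = np.pad(feats, ((1, 1), (0, 0)), mode="edge")
        # smoothed[i] = mean of tiles i-1, i, i+1
        smoothed = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
        # Only occluded tiles take the propagated estimate.
        feats[occluded] = smoothed[occluded]
    return feats

# Example: a five-tile road where tile 2 is hidden by a building.
# The visible tiles all suggest a two-lane road.
obs = np.array([[2.0], [2.0], [0.0], [2.0], [2.0]])
mask = np.array([False, False, True, False, False])
result = propagate(obs, mask)
print(result[2])  # the occluded tile converges toward 2 lanes
```

Each round pulls the occluded tile a third of the way toward its neighbors' consensus, so after a handful of rounds the hidden tile's estimate is effectively the two-lane value its context implies, which is the intuition behind the real model's graph propagation.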

“Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks can’t do that,” Madden said. “Our approach tries to mimic the natural behavior of humans … to make better predictions.”

The model currently predicts the number of lanes correctly 77 percent of the time and infers the correct road type 93 percent of the time. In the future, the team hopes to add new features, such as the ability to identify parking spots and bike lanes.
