Deckard's photo-enhancing gear in Blade Runner is still the stuff of fantasy. However, Google might just have a close-enough approximation before long. The Google Brain team has developed a system that uses neural networks to fill in the details on very low-resolution images. One of the networks is a "conditioning" element that maps the lower-res shot to similar higher-res examples to get a basic idea of what the image should look like. The other, the "prior" network, models sharper details to make the final result more plausible.
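The interplay between the two networks can be sketched roughly: per pixel, each network scores the possible intensity values, the scores are combined in log space, and the model picks a value from the resulting distribution. This is a loose illustration, not Google's actual code; the four intensity levels and the specific logit values are made up for the example.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-pixel scores over 4 intensity levels (illustration only).
conditioning_logits = [2.0, 0.5, 0.1, -1.0]  # from the low-res input: global structure
prior_logits        = [0.2, 1.5, 0.0, -0.5]  # from pixels generated so far: sharp detail

# Combine the two networks' outputs additively in log space, then normalize.
combined = [c + p for c, p in zip(conditioning_logits, prior_logits)]
probs = softmax(combined)

# Greedy choice shown here for simplicity; a generative model would
# typically sample from `probs` instead.
pixel_value = probs.index(max(probs))
```

The key point the sketch captures is that neither network decides alone: the conditioning scores keep the output faithful to the tiny input, while the prior scores push each pixel toward something that looks like a plausible photo.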
The results are far from perfect, but they can frequently come close to the real deal. A virtually unusable 8 x 8-pixel portrait suddenly has recognizable facial features, for instance. And even when the AI system gets many details wrong, it's frequently close enough that you'll at least have an inkling of what's there. An indistinguishable blob might become clear enough to tell that it's a bedroom.
As for potential uses? Google+ already uses a similar approach for image compression on some Android phones. Police couldn't rely on Google's technology to definitively identify a suspect (not in its current state, at least), but it could help confirm a hunch that a suspect was present in the background of a shot. It might also be useful for cleaning up tiny details in photos when they're blown up to larger sizes. The result wouldn't be strictly accurate, but it would be more presentable.