Removing Motion Blur with Space-Time Processing

Date: 27.05.2011
Speaker: Hiroyuki Takeda
Responsible person: Šroubek

Although spatial deblurring is relatively well understood under the assumption that the blur kernel is shift-invariant, motion blur is not when we attempt to deconvolve it on a frame-by-frame basis: in general, videos contain complex, multi-layer transitions. Indeed, motion deblurring of a single frame is an exceedingly difficult problem when the scene contains motion occlusions.

Instead of deblurring video frames individually, this work proposes a fully 3-D deblurring method that removes motion blur from a single motion-blurred video and produces a high-resolution video in both space and time. Unlike other existing motion-based deblurring approaches, the blur kernel requires no explicit knowledge of local motions. Most importantly, due to its inherent locally adaptive nature, the 3-D deblurring automatically deblurs only those portions of the sequence that are motion blurred, without segmentation and without adversely affecting the rest of the spatiotemporal domain where no such blur is present.

The proposed approach proceeds in two steps: first, the input video is upscaled in space and time without explicit estimates of local motions; then 3-D deblurring is performed to obtain the restored sequence.
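
The following is a minimal sketch of this two-step pipeline, offered only as an illustration and not as the speaker's actual algorithm: the motion-free space-time upscaling is approximated here by trilinear interpolation, and the locally adaptive 3-D deblurring is replaced by a global Wiener-style 3-D deconvolution with an assumed, purely temporal blur kernel. All function names, array shapes, and parameters are hypothetical.

    import numpy as np
    from scipy.ndimage import zoom

    def upscale_space_time(video, s_time=2, s_space=2):
        # Step 1 (stand-in): upscale the (frames, rows, cols) volume in time and space
        # by trilinear interpolation; the talk's method does this without motion estimation.
        return zoom(video, (s_time, s_space, s_space), order=1)

    def deblur_3d(video, psf, eps=1e-2):
        # Step 2 (stand-in): global 3-D Wiener-style deconvolution in the FFT domain.
        # 'psf' is an assumed spatiotemporal blur kernel; circular boundaries are implied.
        t, h, w = psf.shape
        psf_pad = np.zeros_like(video)
        psf_pad[:t, :h, :w] = psf
        # centre the kernel so the deconvolved video is not shifted
        psf_pad = np.roll(psf_pad, (-(t // 2), -(h // 2), -(w // 2)), axis=(0, 1, 2))
        H = np.fft.fftn(psf_pad)
        V = np.fft.fftn(video)
        return np.real(np.fft.ifftn(V * np.conj(H) / (np.abs(H) ** 2 + eps)))

    # Hypothetical usage on a grayscale sequence of shape (frames, rows, cols).
    blurred = np.random.rand(16, 64, 64)
    temporal_psf = np.ones((5, 1, 1)) / 5.0  # assumed purely temporal (motion) blur
    restored = deblur_3d(upscale_space_time(blurred), temporal_psf)

In the talk's method the deblurring adapts locally, so motion-blurred regions are sharpened while static regions are left untouched; the global filter above does not have that property and only illustrates the data flow.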
