Optical flow describes computerized tracking of moving objects by analyzing content differences between video frames. In a video, both the object and the observer may be in motion; the computer locates cues that mark the boundaries, edges, and regions of individual still images. Detecting how these cues progress from frame to frame allows the computer to follow an object through time and space. The technology is employed in industry and research, including the operation of unmanned aerial vehicles (UAVs) and security systems.
Two primary methods produce this form of computer vision: gradient-based and feature-based motion detection. Gradient-based optical flow measures changes in image intensity through space and time, producing a dense flow field that covers the whole image plane. Feature-based methods instead track distinctive elements, such as the edges of objects, across frames to mark their progress.
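As a rough illustration of the gradient-based branch, the classic least-squares step on spatial and temporal intensity derivatives (in the style of Lucas-Kanade) can be sketched as below. The function name, frame sizes, and test pattern are invented for the example, and only NumPy is assumed:

```python
import numpy as np

def gradient_flow_at_point(frame_a, frame_b, y, x, win=7):
    """Gradient-based flow at one pixel: fit (u, v) by least squares to
    the intensity derivatives in a small window around (y, x)."""
    half = win // 2
    # Spatial gradients of the first frame and the temporal difference.
    Iy, Ix = np.gradient(frame_a.astype(float))
    It = frame_b.astype(float) - frame_a.astype(float)
    ys = slice(y - half, y + half + 1)
    xs = slice(x - half, x + half + 1)
    # Solve A @ [u, v] = b in the least-squares sense.
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    # With a pure ramp the vertical component is underdetermined (the
    # aperture problem); lstsq then returns the minimum-norm choice.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic test: a horizontal intensity ramp shifted one pixel right.
base = np.tile(np.arange(32, dtype=float), (32, 1))
shifted = np.roll(base, 1, axis=1)
u, v = gradient_flow_at_point(base, shifted, 16, 16)
```

Real gradient-based systems repeat this fit at every pixel (or over the whole image at once) to obtain the dense flow field described above.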
This technique resembles camcorder image stabilization, which allows a computed field of vision to stay locked in the frame despite camera shake. Optical flow algorithms calculate matches between images in sequence. The computer divides each image into a grid of squares, and overlaying two images permits comparisons that find the best match for each square. When the computer locates a match, it draws a line between the matched points; these displacement lines are sometimes called needles.
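The square-matching step above can be sketched in a few lines as a sum-of-squared-differences search over a small neighborhood. The names, frame sizes, and search radius are invented for the example, not any particular library's implementation:

```python
import numpy as np

def best_match(block, frame, top, left, radius=3):
    """Slide `block` over a small search window in `frame` and return
    the displacement (the "needle") with the smallest sum of squared
    differences from the block's original position (top, left)."""
    h, w = block.shape
    best, best_ssd = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            ssd = np.sum((frame[y:y + h, x:x + w] - block) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best  # (dy, dx) displacement of the square between frames

# A bright square that moves 2 pixels to the right between two frames.
f0 = np.zeros((16, 16))
f0[4:8, 4:8] = 1.0
f1 = np.zeros((16, 16))
f1[4:8, 6:10] = 1.0
needle = best_match(f0[4:8, 4:8], f1, 4, 4)
```

Drawing one such needle per grid square over the whole image yields the needle diagram the text describes.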
Algorithms work systematically from coarse to fine resolution: large motions are captured first in a low-resolution version of the images, then refined at full resolution. The computer does not recognize objects as such; it only detects and follows those characteristics of objects that can be compared between frames.
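A minimal sketch of the coarse-to-fine idea, under the simplifying assumptions of a two-level pyramid and a single whole-frame shift (real algorithms refine a full flow field at each level):

```python
import numpy as np

def downsample(img):
    """Halve resolution by averaging 2x2 blocks (one pyramid level)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def shift_ssd(a, b, dy, dx):
    """SSD between b shifted back by (dy, dx) and a (wraparound shift)."""
    return np.sum((np.roll(np.roll(b, -dy, 0), -dx, 1) - a) ** 2)

def coarse_to_fine_shift(a, b, radius=2):
    """Estimate a global (dy, dx): search coarsely at half resolution,
    then refine by +/- 1 pixel at full resolution."""
    ca, cb = downsample(a), downsample(b)
    coarse = min(((dy, dx) for dy in range(-radius, radius + 1)
                  for dx in range(-radius, radius + 1)),
                 key=lambda d: shift_ssd(ca, cb, *d))
    dy0, dx0 = 2 * coarse[0], 2 * coarse[1]  # scale up to full resolution
    return min(((dy0 + dy, dx0 + dx) for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)),
               key=lambda d: shift_ssd(a, b, *d))

# A random texture shifted 2 pixels right between frames.
rng = np.random.default_rng(0)
a = rng.random((16, 16))
b = np.roll(a, 2, axis=1)
est = coarse_to_fine_shift(a, b)
```

The coarse pass keeps the search radius small even for large displacements; the fine pass restores full-resolution precision.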
Computing optical flow vectors can detect and track objects and also extract an image's dominant plane. This aids robotic navigation and visual odometry, the estimation of a robot's orientation and position from its camera. The technique registers not only objects but also the surrounding environment in three dimensions, giving robots more lifelike spatial awareness. Vectors computed in a plane allow the processor to infer, and respond to, the movements extracted from the frames.
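One hedged sketch of the odometry idea: if the dominant flow in each frame approximates the camera's apparent translation, accumulating it frame by frame yields a track of the robot's position. The names, the median estimator, and the purely planar simplification are assumptions for illustration; real systems recover full 3-D pose:

```python
import numpy as np

def integrate_odometry(frame_flows):
    """Toy visual-odometry loop: take each frame's median flow vector as
    the scene's apparent translation and accumulate it into a 2-D track.
    The camera moves opposite to the apparent motion of the scene."""
    position = np.zeros(2)
    track = [position.copy()]
    for flow in frame_flows:          # flow: (H, W, 2) array of vectors
        step = np.median(flow.reshape(-1, 2), axis=0)
        position -= step              # invert scene motion -> camera motion
        track.append(position.copy())
    return np.array(track)

# Three frames of uniform flow: the scene slides 1 px left each frame,
# so the camera has moved 1 px right each frame.
flows = [np.full((4, 4, 2), (0.0, -1.0)) for _ in range(3)]
track = integrate_odometry(flows)
```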
One weakness of the optical flow technique is data loss from squares the computer cannot match between images. These unmatched areas remain vacant, creating voids in the flow field and reducing accuracy. Clear edges and stable elements such as corners are the features that contribute most reliably to flow analysis.
Detailed features may also be obscured when the observer itself is in motion, since the computer cannot distinguish certain elements from frame to frame. The analysis therefore divides motion into the apparent global flow caused by the observer's own movement, or egomotion, and localized object motion. Spatial-temporal changes in edges or image intensity can be lost in the motion of the camera and the global flow of the moving environment, so analysis is enhanced if the computer can first eliminate the effect of the global flow.
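A simple way to approximate eliminating the global flow is to subtract a robust estimate of it (here the median vector) and keep only the large residuals, which then flag independently moving objects. The names and the threshold value are hypothetical:

```python
import numpy as np

def remove_global_flow(flow, threshold=0.5):
    """Subtract the median flow vector (a crude egomotion estimate) so
    that only independently moving objects retain large residuals."""
    global_flow = np.median(flow.reshape(-1, 2), axis=0)
    residual = flow - global_flow
    # Pixels whose residual motion exceeds the (hypothetical) threshold
    # are treated as localized object motion rather than global flow.
    moving = np.linalg.norm(residual, axis=2) > threshold
    return residual, moving

# Camera pans: the whole frame drifts (0, -2), but a small patch
# also has vertical motion of its own.
flow = np.zeros((10, 10, 2))
flow[..., 1] = -2.0
flow[6:8, 3:6, 0] = 3.0          # the object's independent motion
residual, moving = remove_global_flow(flow)
```

After the subtraction, the panning background cancels out and only the patch with its own motion survives the threshold.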