Camera calibration, also known as camera resectioning, is the process of examining an image or a video and deducing the properties and position of the camera at the time the image was captured. Camera calibration is used primarily in robotic applications and in modeling scenes virtually based on real input. Traditionally, camera calibration was a difficult and tedious process, but modern software applications make it quite easy to achieve, even for home users.
One of the main uses of camera calibration is to figure out where a camera was in relation to a scene in a photograph. Let’s say you’ve taken a picture of a large room with a gridded floor, and in that room you’ve placed a chair and a table. You’ve then imported that image into a modeling program and built a three-dimensional model around the scene. Into that scene, you can then place any number of other virtual objects, such as modeled characters to interact with the scene, or other props.
Rendering programs, however, also make use of a camera, albeit a virtual one. In order for the modeled objects to interact properly with the objects that appear in the photograph, we need to make sure that our virtual camera is in the same position as our real camera was when we shot the initial photograph. Camera calibration achieves this by using formulas to essentially work backwards and deduce where the real camera was relative to the scene.
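The "working backwards" described above inverts the standard pinhole camera model, in which a 3D world point is mapped to a pixel through the camera's intrinsic and extrinsic parameters. The sketch below shows the forward direction of that model with made-up numbers (the focal length, principal point, and camera pose are all illustrative assumptions, not values from any real calibration); calibration solves for K, R, and t given many such point correspondences.

```python
import numpy as np

# Illustrative intrinsic matrix: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative extrinsics: camera axis-aligned with the world (R = identity),
# pulled back 2 units along the optical axis
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

def project(X):
    """Project a 3D world point onto the image plane (pinhole model)."""
    cam = R @ X + t          # world coordinates -> camera coordinates
    uvw = K @ cam            # camera coordinates -> homogeneous pixels
    return uvw[:2] / uvw[2]  # perspective divide

corner = np.array([0.5, -0.25, 0.0])  # e.g. a corner on the gridded floor
print(project(corner))                # pixel location of that corner
```

Given enough known world points and their observed pixel locations, calibration routines minimize the difference between observed pixels and `project(X)`, recovering the camera's pose and internal parameters simultaneously.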
Camera calibration can also be used to figure out other things about the camera in relation to the scene. For example, using formulas we can figure out the focal length that the scene was shot at. We can also figure out the skew factor of the image, and any lens distortion that may have been introduced, such as a barrel or pincushion effect. We can also figure out whether the actual camera pixels were square or not, and what the horizontal and vertical scaling factors for the pixels might have been.
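These intrinsic quantities have conventional places in the calibration model: the focal scales, skew, and principal point form the intrinsic matrix, while lens distortion is usually modeled separately as a polynomial in the distance from the image center. The sketch below uses invented parameter values purely to show where each quantity lives; a two-term radial model is one common simplification.

```python
import numpy as np

# Illustrative intrinsic parameters (all values are made up):
fx, fy = 800.0, 780.0   # unequal scales -> pixels are not square
s = 0.5                 # skew factor (non-zero if sensor axes are not perpendicular)
cx, cy = 320.0, 240.0   # principal point

K = np.array([[fx,   s,  cx],
              [0.0, fy,  cy],
              [0.0, 0.0, 1.0]])

def distort(xn, yn, k1=-0.2, k2=0.05):
    """Apply a simple two-term radial distortion model to normalized
    image coordinates. Negative k1 pulls points inward (barrel effect);
    positive k1 pushes them outward (pincushion effect)."""
    r2 = xn**2 + yn**2
    factor = 1.0 + k1 * r2 + k2 * r2**2
    return xn * factor, yn * factor

# Points near the edge of the frame are displaced more than central ones
print(distort(0.1, 0.1))
print(distort(0.5, 0.5))
```

Calibration software estimates `k1`, `k2`, and the entries of `K` together, so the recovered model accounts for both the perspective mapping and the lens's deviation from it.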
One can also use camera calibration or resectioning to take an image sent to a computer and figure out where various coordinates are in the real world. This type of deduction is crucial to the functioning of robots that are meant to interact visually with the physical world. These robots can then use a photographic or video input device and calibrate in order to figure out where the objects they see actually are in the real world, in terms of distance and direction.
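Going from a pixel back to a real-world location is the inverse of projection: a pixel fixes a viewing ray through the camera center, and one extra piece of information (for example, a known depth from a stereo pair or a floor-plane constraint) pins down the point on that ray. A minimal sketch, again with an assumed, illustrative intrinsic matrix:

```python
import numpy as np

# Assumed (illustrative) calibrated intrinsics
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_point(u, v, z):
    """Back-project a pixel to a 3D point in the camera frame, given a
    known depth z along the optical axis."""
    return z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

# Where is the object seen at pixel (520, 140), if it is 2 units away?
point = pixel_to_point(520.0, 140.0, 2.0)
print(point)                  # 3D position in the camera frame
print(np.linalg.norm(point))  # straight-line distance to the object
```

The recovered vector gives the robot both the direction to the object and, via its norm, the distance, which is exactly the information needed to plan motion toward or around it.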
This is one of the major areas of study in robotics, as faster, more accurate methods of resectioning allow robots to interact with the world in more sophisticated ways. A robot with a poor ability to discern the distance of objects will have to rely largely on trial and error to move over terrain or manipulate objects, whereas one that is able to accurately model its own place in the world in relation to other objects is able to move seamlessly and fluidly through it.