Camera Calibration¶

Goal¶

In this section,
  • We will learn about distortions in cameras, intrinsic and extrinsic parameters of a camera, etc.
  • We will learn to find these parameters, undistort images, etc.

Basics¶

Today's cheap pinhole cameras introduce a lot of distortion into images. Two major distortions are radial distortion and tangential distortion.

Due to radial distortion, straight lines will appear curved. Its effect grows as we move away from the center of the image. For example, one image is shown below, where two edges of a chess board are marked with red lines. You can see that the border is not a straight line and doesn't match the red line. All the expected straight lines are bulged out. Visit Distortion (optics) for more details.

Radial Distortion

This distortion is corrected as follows:

x_{corrected} = x( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\  y_{corrected} = y( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

Similarly, another distortion is tangential distortion, which occurs because the image-taking lens is not aligned perfectly parallel to the imaging plane. So some areas in the image may look nearer than expected. It is corrected as below:

x_{corrected} = x + [ 2p_1xy + p_2(r^2+2x^2)] \\  y_{corrected} = y + [ p_1(r^2+ 2y^2)+ 2p_2xy]
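To make the radial correction formula concrete, here is a small worked sketch in plain Python. The coefficient values (k_1, k_2, k_3) and the sample point are made up purely for illustration; real values come out of the calibration step described later.

```python
# Illustrative (made-up) radial distortion coefficients
k1, k2, k3 = -0.2, 0.03, 0.0

# Normalized image coordinates of a sample point
x, y = 0.5, 0.4
r2 = x**2 + y**2  # r^2, squared distance from the image center

# Radial correction factor: 1 + k1*r^2 + k2*r^4 + k3*r^6
factor = 1 + k1*r2 + k2*r2**2 + k3*r2**3

x_corrected = x * factor
y_corrected = y * factor
print(x_corrected, y_corrected)  # the point is pulled toward the center (barrel distortion, k1 < 0)
```

With a negative k_1 (barrel distortion) the correction factor is below 1, so points far from the center are pulled inward, which is exactly the "bulged out" effect being undone.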

In short, we need to find five parameters, known as distortion coefficients, given by:

Distortion \; coefficients=(k_1 \hspace{10pt} k_2 \hspace{10pt} p_1 \hspace{10pt} p_2 \hspace{10pt} k_3)

In addition to this, we need to find some more information, like the intrinsic and extrinsic parameters of the camera. Intrinsic parameters are specific to a camera. They include information like focal length (f_x, f_y) and optical centers (c_x, c_y). This is also called the camera matrix. It depends only on the camera, so once calculated, it can be stored for future use. It is expressed as a 3x3 matrix:

camera \; matrix = \left [ \begin{matrix}   f_x & 0 & c_x \\  0 & f_y & c_y \\   0 & 0 & 1 \end{matrix} \right ]
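As a sketch of how this matrix is used, the snippet below builds a camera matrix with made-up intrinsics (the focal lengths and optical center are illustrative, not from any real calibration) and projects a 3D point in camera coordinates to pixel coordinates.

```python
import numpy as np

# Illustrative (made-up) intrinsics: focal lengths and optical center, in pixels
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Project a 3D point (X, Y, Z) in camera coordinates:
# multiply by K, then divide by the third (depth) component
P = np.array([0.1, -0.05, 2.0])
u, v, w = K @ P
print(u / w, v / w)  # pixel coordinates of the projected point
```

Note how the optical center (c_x, c_y) shifts the projection so that the camera's optical axis lands at that pixel rather than at (0, 0).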

Extrinsic parameters correspond to rotation and translation vectors which translate the coordinates of a 3D point to the camera's coordinate system.

For stereo applications, these distortions need to be corrected first. To find all these parameters, we have to provide some sample images of a well-defined pattern (e.g., a chess board). We find some specific points in it (square corners in the chess board). We know their coordinates in real world space and we know their coordinates in the image. With these data, a mathematical problem is solved in the background to get the distortion coefficients. That is the summary of the whole story. For better results, we need at least 10 test patterns.

Code¶

As mentioned above, we need at least 10 test patterns for camera calibration. OpenCV comes with some images of a chess board (see samples/cpp/left01.jpg -- left14.jpg), so we will use them. For the sake of understanding, consider just one image of a chess board. The important input data needed for camera calibration are a set of 3D real world points and the corresponding 2D image points. The 2D image points are easy to find from the image. (These image points are locations where two black squares touch each other on the chess board.)

What about the 3D points from real world space? Those images are taken from a static camera and chess boards are placed at different locations and orientations. So we need to know the (X,Y,Z) values. But for simplicity, we can say the chess board was kept stationary on the XY plane (so Z=0 always) and the camera was moved accordingly. This consideration helps us find only the X,Y values. Now for the X,Y values, we can simply pass the points as (0,0), (1,0), (2,0), ..., which denotes the location of the points. In this case, the results we get will be in the scale of the chess board square size. But if we know the square size (say 30 mm), we can pass the values as (0,0), (30,0), (60,0), ..., and we get the results in mm. (In this example, we don't know the square size since we didn't take those images, so we pass in terms of square size.)
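The object-point construction described above can be sketched with numpy alone. The 30 mm square size here is the illustrative value from the text, not the actual size of the sample images' board:

```python
import numpy as np

square_size = 30.0  # assumed square size in mm (illustrative)

# One (X, Y, Z) row per internal corner of a 7x6 grid, with Z fixed at 0
objp = np.zeros((6*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2) * square_size

print(objp[:3])  # first corners: (0,0,0), (30,0,0), (60,0,0)
```

The same objp array is reused for every calibration image, since the board itself never changes; only the camera pose does.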

3D points are called object points and 2D image points are called image points.

Setup¶

So to find the pattern in the chess board, we use the function cv2.findChessboardCorners(). We also need to pass what kind of pattern we are looking for, like an 8x8 grid, 5x5 grid etc. In this example, we use a 7x6 grid. (Normally a chess board has 8x8 squares and 7x7 internal corners.) It returns the corner points and retval, which will be True if the pattern is found. These corners will be placed in order (from left-to-right, top-to-bottom).

See also

This function may not be able to find the required pattern in all the images. So one good option is to write the code such that it starts the camera and checks each frame for the required pattern. Once the pattern is found, find the corners and store them in a list. Also provide some interval before reading the next frame, so that we can adjust our chess board in a different direction. Continue this process until the required number of good patterns is obtained. Even in the example provided here, we are not sure how many of the 14 images given are good. So we read all the images and take the good ones.

See also

Instead of a chess board, we can use a circular grid; in that case, use the function cv2.findCirclesGrid() to find the pattern. It is said that fewer images are enough when using a circular grid.

Once we find the corners, we can increase their accuracy using cv2.cornerSubPix(). We can also draw the pattern using cv2.drawChessboardCorners(). All these steps are included in the code below:

    import numpy as np
    import cv2
    import glob

    # termination criteria
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (6,5,0)
    objp = np.zeros((6*7, 3), np.float32)
    objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)

    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d points in real world space
    imgpoints = []  # 2d points in image plane.

    images = glob.glob('*.jpg')

    for fname in images:
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Find the chess board corners
        ret, corners = cv2.findChessboardCorners(gray, (7, 6), None)

        # If found, add object points, image points (after refining them)
        if ret == True:
            objpoints.append(objp)

            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            imgpoints.append(corners2)

            # Draw and display the corners
            img = cv2.drawChessboardCorners(img, (7, 6), corners2, ret)
            cv2.imshow('img', img)
            cv2.waitKey(500)

    cv2.destroyAllWindows()

One image with the pattern drawn on it is shown below:

Calibration Pattern

Calibration¶

So now we have our object points and image points, and we are ready to go for calibration. For that we use the function cv2.calibrateCamera(). It returns the camera matrix, distortion coefficients, rotation and translation vectors etc.

    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

Undistortion¶

We have got what we were trying for. Now we can take an image and undistort it. OpenCV comes with two methods; we will see both. But before that, we can refine the camera matrix based on a free scaling parameter using cv2.getOptimalNewCameraMatrix(). If the scaling parameter alpha=0, it returns the undistorted image with minimum unwanted pixels, so it may even remove some pixels at the image corners. If alpha=1, all pixels are retained, with some extra black pixels. It also returns an image ROI which can be used to crop the result.

So we take a new image (left12.jpg in this case; that is the first image in this chapter):

    img = cv2.imread('left12.jpg')
    h, w = img.shape[:2]
    newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))

1. Using cv2.undistort()

This is the shortest path. Just call the function and use the ROI obtained above to crop the result.

    # undistort
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)

    # crop the image
    x, y, w, h = roi
    dst = dst[y:y+h, x:x+w]
    cv2.imwrite('calibresult.png', dst)

2. Using remapping

This is the curved path. First find a mapping function from the distorted image to the undistorted image. Then use the remap function.

    # undistort
    mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w, h), 5)
    dst = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)

    # crop the image
    x, y, w, h = roi
    dst = dst[y:y+h, x:x+w]
    cv2.imwrite('calibresult.png', dst)

Both the methods give the same result. See the result below:

Calibration Result

You can see in the result that all the edges are straight.

Now you can store the camera matrix and distortion coefficients using write functions in Numpy (np.savez, np.savetxt etc.) for future use.
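A minimal save-and-reload sketch with np.savez is shown below. The identity matrix and zero vector here are stand-in values; in practice you would pass the mtx and dist returned by cv2.calibrateCamera(), and the file name calib.npz is just a convention.

```python
import numpy as np
import os
import tempfile

# Stand-in values; in practice use mtx and dist from cv2.calibrateCamera()
mtx = np.eye(3)
dist = np.zeros(5)

path = os.path.join(tempfile.mkdtemp(), 'calib.npz')
np.savez(path, mtx=mtx, dist=dist)

# Later: reload the stored calibration instead of recalibrating
data = np.load(path)
mtx_loaded, dist_loaded = data['mtx'], data['dist']
print(np.allclose(mtx, mtx_loaded), np.allclose(dist, dist_loaded))
```

Since calibration depends only on the camera (and lens settings), a single stored file can serve every later undistortion run with that camera.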

Re-projection Error¶

Re-projection error gives a good estimate of how exact the found parameters are. It should be as close to zero as possible. Given the intrinsic, distortion, rotation and translation matrices, we first transform the object points to image points using cv2.projectPoints(). Then we calculate the absolute norm between what we got with our transformation and the corner finding algorithm. To find the average error, we calculate the arithmetical mean of the errors computed for all the calibration images.

    mean_error = 0
    for i in range(len(objpoints)):
        imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
        error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
        mean_error += error

    print("total error: ", mean_error / len(objpoints))

Additional Resources¶

Exercises¶

  1. Try camera calibration with circular grid.