My project code is contained in the Jupyter notebook "./project-find lane.ipynb".
My result images are located in "./output_images". My result video is located in "./output_videos".
Advanced Lane Finding Project
The goals / steps of this project are the following:
- Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
- Apply a distortion correction to raw images.
- Use color transforms, gradients, etc., to create a thresholded binary image.
- Apply a perspective transform to rectify binary image ("birds-eye view").
- Detect lane pixels and fit to find the lane boundary.
- Determine the curvature of the lane and vehicle position with respect to center.
- Warp the detected lane boundaries back onto the original image.
- Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
Rubric Points
Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
Camera calibration can be divided into two small steps:
First, I prepare object points and image points, which are the chessboard corner coordinates in world space and in the images.
Then I used the cv2.calibrateCamera() function to compute the camera matrix and distortion coefficients.
I use the cv2.undistort() function to apply distortion correction to the original images.
This is an example of a distortion corrected calibration image.
This is an example of how I apply the distortion correction to test images:
I used a combination of color and gradient thresholds to generate a binary image.
Here's an example of my output for this step.
I defined the source and destination points for the perspective transform.
I verified the transform by checking that the lane lines appear parallel in the warped image.
I locate the lane lines' starting points using a region mask and a histogram of the bottom half of the binary image. Then I use a sliding-window search to collect the lane pixels.
I fit a second-order polynomial to these pixels to get the coefficients for plotting the lines.
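A condensed sketch of the histogram + sliding-window search; the window count, margin, and minimum-pixel values are common defaults, not necessarily the ones in my notebook:

```python
import numpy as np


def find_lane_fits(binary_warped, nwindows=9, margin=100, minpix=50):
    """Locate lane pixels with sliding windows and fit x = A*y^2 + B*y + C."""
    h, w = binary_warped.shape
    # Histogram of the bottom half locates the two lane-line bases.
    histogram = np.sum(binary_warped[h // 2:, :], axis=0)
    midpoint = w // 2
    leftx = np.argmax(histogram[:midpoint])
    rightx = np.argmax(histogram[midpoint:]) + midpoint

    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = h // nwindows
    left_inds, right_inds = [], []

    for win in range(nwindows):
        y_low = h - (win + 1) * window_height
        y_high = h - win * window_height
        for current, inds in ((leftx, left_inds), (rightx, right_inds)):
            good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                    (nonzerox >= current - margin) &
                    (nonzerox < current + margin)).nonzero()[0]
            inds.append(good)
        # Re-center the next window on the mean x of the pixels just found.
        if len(left_inds[-1]) > minpix:
            leftx = int(nonzerox[left_inds[-1]].mean())
        if len(right_inds[-1]) > minpix:
            rightx = int(nonzerox[right_inds[-1]].mean())

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return left_fit, right_fit
```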
I computed the lane curvature and the vehicle's offset from center by converting pixel units to meters and applying the radius-of-curvature formula.
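The conversion and formula can be sketched as follows; the meters-per-pixel scales are the common US-highway assumptions (a 3.7 m lane spanning about 700 px, 30 m of road over 720 px), which may differ from the values I tuned:

```python
import numpy as np

ym_per_pix = 30 / 720   # meters per pixel in y (assumption)
xm_per_pix = 3.7 / 700  # meters per pixel in x (assumption)


def curvature_and_offset(leftx, rightx, ploty, image_width=1280):
    """Radius of curvature (m) at the image bottom and offset from lane center (m)."""
    y_eval = np.max(ploty)
    # Refit the left line in world space: x[m] = A*y[m]^2 + B*y[m] + C.
    fit = np.polyfit(ploty * ym_per_pix, leftx * xm_per_pix, 2)
    # Radius of curvature: R = (1 + (2Ay + B)^2)^(3/2) / |2A|.
    radius = ((1 + (2 * fit[0] * y_eval * ym_per_pix + fit[1]) ** 2) ** 1.5
              / abs(2 * fit[0]))
    # Offset: camera is at the image center; ploty ascends, so [-1] is the bottom row.
    lane_center = (leftx[-1] + rightx[-1]) / 2
    offset = (image_width / 2 - lane_center) * xm_per_pix
    return radius, offset
```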
Here is an example of my result on a test image:
First, we search around the previous frame's fit. If that fails, we reset to the sliding-window search.
If the sliding-window search also fails, we reuse the previous frame's pixels to get an approximate result.
I compare the new fit with the previous frame's fit; they should have similar curvature.
If the new fit fails this sanity check, we fall back to the average of recent fits to get an approximate result.
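The curvature-similarity sanity check can be sketched as below; the ratio tolerance is an illustrative choice, not the exact value from my notebook:

```python
import numpy as np


def fit_is_sane(fit, last_fit, y_eval=719, ratio_tol=10.0):
    """Accept a new fit only if its curvature is similar to the last frame's."""
    def radius(f):
        # R = (1 + (2Ay + B)^2)^(3/2) / |2A|, guarded against A ~ 0.
        return ((1 + (2 * f[0] * y_eval + f[1]) ** 2) ** 1.5
                / max(abs(2 * f[0]), 1e-6))

    r_new, r_last = radius(np.asarray(fit)), radius(np.asarray(last_fit))
    ratio = max(r_new, r_last) / max(min(r_new, r_last), 1e-6)
    return ratio < ratio_tol
```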
Here's a link to my video result
There were two main problems I faced in this project:
1. Adjusting the color and gradient thresholds to get a binary image with clear lane lines while eliminating other interference.
2. Building a pipeline that finds enough pixels to fit the lane lines.
My pipeline may fail at two steps: creating the thresholded binary image and finding the lane pixels.
To make it more robust, I can further tune the color and gradient thresholds to produce a binary image with clearer lane lines and less interference.