Eye Localization Method Based on YCgCr Color Space and Geometric Information
Resource Overview
Novel eye detection approach leveraging YCgCr color space conversion and facial geometry constraints for robust pupil localization
Detailed Documentation
In the field of computer vision, eye localization serves as a fundamental technology widely applied in facial recognition, gaze tracking, and fatigue detection systems. Traditional RGB color space often underperforms in complex environments due to its sensitivity to lighting variations. The YCgCr color space, with its effective separation of luminance and chrominance components, significantly reduces the impact of illumination changes on skin detection, thereby enhancing the robustness of eye localization.
The implementation methodology follows a two-stage pipeline: skin detection followed by precise eye localization. Initially, the algorithm converts input images from RGB to YCgCr color space, leveraging YCgCr's distinct skin-color distribution to filter skin regions. Compared to YCrCb or HSV spaces, YCgCr demonstrates superior background interference suppression, particularly in highlight or shadow regions (implementation tip: OpenCV has no built-in YCgCr conversion code — cv2.COLOR_RGB2YCrCb yields YCrCb, not YCgCr — so compute the Cg channel directly from the RGB channels). Subsequently, facial geometric priors, such as the eyes lying in the upper facial region and the inter-ocular distance being proportional to face width, are combined with morphological operations and candidate-region filtering to narrow the search scope.
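Since OpenCV does not ship a YCgCr conversion, the RGB-to-YCgCr step can be sketched directly with a BT.601-style transform, where Cg is built like Cr but from the green channel. The Cg/Cr skin thresholds below are illustrative assumptions, not values specified by this resource, and should be tuned per dataset:

```python
import numpy as np

def rgb_to_ycgcr(rgb):
    """Convert a uint8 RGB image (H x W x 3) to YCgCr.

    BT.601-style studio-range transform; Cg mirrors Cr with the green
    channel in the leading role: Cg ~ 128 + 112 * (G - Y) / (1 - 0.587).
    """
    x = rgb.astype(np.float64) / 255.0
    r, g, b = x[..., 0], x[..., 1], x[..., 2]
    y  = 16.0  +  65.481 * r + 128.553 * g +  24.966 * b
    cg = 128.0 -  81.085 * r + 112.000 * g -  30.915 * b
    cr = 128.0 + 112.000 * r -  93.786 * g -  18.214 * b
    return np.stack([y, cg, cr], axis=-1)

def skin_mask(ycgcr, cg_range=(85.0, 135.0), cr_range=(135.0, 180.0)):
    """Threshold the chrominance planes; the ranges are assumed, not canonical."""
    cg, cr = ycgcr[..., 1], ycgcr[..., 2]
    return ((cg >= cg_range[0]) & (cg <= cg_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

The resulting boolean mask can then be cleaned with morphological open/close (e.g. cv2.morphologyEx) before extracting candidate contours.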
The introduction of geometric constraints constitutes a key innovation: by evaluating candidate regions' aspect ratios, symmetry features, and positions relative to the facial center, the method effectively excludes false detections such as eyebrows and hair (algorithm detail: contour analysis plus Euclidean-distance checks between candidate centers). Finally, precise pupil coordinates are determined within the surviving candidate regions using gradient-based features or template matching. Experimental results confirm that this approach maintains high accuracy under challenging conditions, including uneven illumination and partial occlusion.
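The geometric screening can be sketched as a plain filter over candidate bounding boxes. All thresholds below (upper-face band, aspect-ratio range, inter-ocular distance as a fraction of face width, symmetry and alignment tolerances) are illustrative assumptions rather than values given in this resource:

```python
import math

def filter_eye_pairs(candidates, face_box,
                     upper_band=(0.15, 0.55), aspect=(1.0, 4.0),
                     iod_frac=(0.25, 0.65), sym_tol=0.12, align_tol=0.08):
    """Return (left_box, right_box) pairs satisfying eye-like geometry.

    candidates: (x, y, w, h) boxes from the skin/contour stage;
    face_box:   (x, y, w, h) of the detected face.
    """
    fx, fy, fw, fh = face_box
    mid_x = fx + fw / 2.0

    def center(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    # Stage 1: per-box constraints -- upper facial band, wider-than-tall shape.
    keep = []
    for box in candidates:
        x, y, w, h = box
        cx, cy = center(box)
        in_band = fy + upper_band[0] * fh <= cy <= fy + upper_band[1] * fh
        eye_like = aspect[0] <= w / float(h) <= aspect[1]
        if in_band and eye_like:
            keep.append(box)

    # Stage 2: pairwise constraints -- symmetry about the facial midline,
    # plausible inter-ocular distance, near-horizontal alignment.
    pairs = []
    for i in range(len(keep)):
        for j in range(i + 1, len(keep)):
            (lx, ly), (rx, ry) = sorted((center(keep[i]), center(keep[j])))
            symmetric = abs((mid_x - lx) - (rx - mid_x)) <= sym_tol * fw
            iod = math.hypot(rx - lx, ry - ly)
            dist_ok = iod_frac[0] * fw <= iod <= iod_frac[1] * fw
            aligned = abs(ly - ry) <= align_tol * fh
            if symmetric and dist_ok and aligned:
                left, right = sorted((keep[i], keep[j]))
                pairs.append((left, right))
    return pairs
```

Pairing (rather than scoring boxes in isolation) is what removes eyebrow and hair fragments: a lone dark blob with a good aspect ratio still fails if it has no symmetric partner at a plausible inter-ocular distance.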
This fusion strategy of color space processing and geometric information not only reduces computational complexity but also enables real-time eye tracking on embedded devices. Future enhancements could integrate deep learning models to further improve localization precision under extreme conditions.