Face Recognition with OpenCV

OpenCV is one of the most popular computer vision libraries, widely used for image analysis and machine learning. It is easy to use and provides not only basic image transformations but also image filtering, face recognition, object recognition, object tracking, and other functions that come up frequently in practice. It is the library I always reach for when I work on image recognition.

In this article, we will try to detect faces, which is a standard feature of OpenCV.

github

  • The file in Jupyter Notebook format is here

google colaboratory

  • To run it in Google Colaboratory, open face_nb.ipynb here

environment

The author’s OS is macOS, so some command-line options differ from those on Linux and other Unix systems.

My environment

! sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G95
!python -V
Python 3.8.5
import cv2

print('opencv version :', cv2.__version__)
opencv version : 4.4.0
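
For reference, if OpenCV is not installed yet, the Python bindings can usually be installed from PyPI (opencv-python is the standard package name; conda or source builds work just as well):

!pip install opencv-python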

We will also import matplotlib to display images, and set the inline figure format to SVG so the figures look crisper on the web.

%matplotlib inline
%config InlineBackend.figure_format = 'svg'

import matplotlib.pyplot as plt
Matplotlib is building the font cache; this may take a moment.

Suppose you have a file called lena.jpg in the parent directory.

%%bash

ls -a ../ | grep jpg
binary_out.jpg
bitwise_out.jpg
gray_out.jpg
lena.jpg
lena_out.jpg
rotation.jpg
rotation_scale_1_angle_-30.jpg
rotation_scale_2_angle_-30.jpg
filename = '../lena.jpg'

Loading an image

Let’s load an image and display it in the Jupyter Notebook using matplotlib.

img = cv2.imread(filename=filename)

# OpenCV loads images in BGR order, but matplotlib (and hence Jupyter Notebook) expects RGB, so we convert.
rgb_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

plt.imshow(rgb_img)
plt.show()
# Create a grayscale image for later use
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
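
One thing to note: cv2.imread does not raise an error when the path is wrong; it simply returns None, and the failure only surfaces later in cvtColor. A minimal guard (the assertion message is just an illustration) looks like this:

# cv2.imread returns None instead of raising if the file cannot be read
assert img is not None, 'could not read ' + filename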

Get Image Information

Check the height, the width, and, if the image is in color, the number of channels (usually 3 for RGB).

def get_image_info(img):
  if len(img.shape) == 3:
    img_height, img_width, img_channels = img.shape[:3]
    print('img_channels :', img_channels)
  else:
    img_height, img_width = img.shape[:2]

  print('img_height :', img_height)
  print('img_width :', img_width)

get_image_info(img=img)
img_channels : 3
img_height : 225
img_width : 225
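
As a quick check of the grayscale branch, we can run the same helper on the gray_img created earlier; a grayscale image has a 2D shape, so only the height and width are printed.

# gray_img has shape (height, width), so no channel count is printed
get_image_info(img=gray_img)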

Face recognition

To detect a human face with OpenCV, load the face detection model file “haarcascade_frontalface_alt.xml”. OpenCV also has models for other facial parts, such as the nose and mouth.

cascade_file = "haarcascade_frontalface_alt.xml"
cascade = cv2.CascadeClassifier(cascade_file)

gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face_list = cascade.detectMultiScale(gray_img, minSize=(50, 50))

# Mark the detected faces.
for (x, y, w, h) in face_list:
  color = (0, 0, 225)
  pen_w = 3
  cv2.rectangle(img, (x, y), (x + w, y + h), color, thickness = pen_w)

plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
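
As mentioned above, OpenCV ships with cascade files for other facial parts as well. If OpenCV was installed via pip, the bundled files can usually be located through cv2.data.haarcascades; the sketch below detects eyes with haarcascade_eye.xml (which files are available depends on your installation).

# Locate the cascade files bundled with the opencv-python package
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

eye_list = eye_cascade.detectMultiScale(gray_img, minSize=(20, 20))
for (x, y, w, h) in eye_list:
  cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), thickness=2)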

Facial Feature Extraction (dlib)

You can also use dlib, a library for facial landmark extraction, to capture facial features. dlib’s shape_predictor loads a 68-point facial landmark model; this functionality is not included in OpenCV.

The following code is adapted from Chapter 9, “10 Knocks on Image Processing to Understand Potential Customers,” in “100 Knocks on Python Data Analysis” (https://www.amazon.co.jp/dp/B07ZSGSN9S/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1). This book is very useful because it covers a wide range of skills required for data analysis.
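
Note that shape_predictor_68_face_landmarks.dat is not bundled with dlib; the pre-trained model is distributed as a bz2 archive on the dlib website. A minimal download sketch, assuming the usual URL on dlib.net, could look like this:

import bz2
import urllib.request

# Assumed download location of the pre-trained 68-point landmark model
url = "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"

with urllib.request.urlopen(url) as response:
  # Decompress the archive and write the .dat file next to the notebook
  with open("shape_predictor_68_face_landmarks.dat", "wb") as f:
    f.write(bz2.decompress(response.read()))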

import dlib
import math

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
detector = dlib.get_frontal_face_detector()
dets = detector(img, 1)

for k, d in enumerate(dets):
  shape = predictor(img, d)

  # Display the face area
  color_f = (0, 0, 225)
  color_l_out = (255, 0, 0)
  color_l_in = (0, 255, 0)
  line_w = 3
  circle_r = 3
  fontType = cv2.FONT_HERSHEY_SIMPLEX
  fontSize = 1
  cv2.rectangle(img, (d.left(), d.top()), (d.right(), d.bottom()), color_f, line_w)
  cv2.putText(img, str(k), (d.left(), d.top()), fontType, fontSize, color_f, line_w)

  num_of_points_out = 17
  num_of_points_in = shape.num_parts - num_of_points_out
  gx_out = 0
  gy_out = 0
  gx_in = 0
  gy_in = 0

  for shape_point_count in range(shape.num_parts):
    shape_point = shape.part(shape_point_count)

    # draw each facial part
    if shape_point_count<num_of_points_out:
      cv2.circle(img,(shape_point.x, shape_point.y),circle_r,color_l_out, line_w)
      gx_out = gx_out + shape_point.x/num_of_points_out
      gy_out = gy_out + shape_point.y/num_of_points_out
    else:
      cv2.circle(img,(shape_point.x, shape_point.y),circle_r,color_l_in, line_w)
      gx_in = gx_in + shape_point.x/num_of_points_in
      gy_in = gy_in + shape_point.y/num_of_points_in

  # Draw the center of gravity position
  cv2.circle(img,(int(gx_out), int(gy_out)),circle_r,(0,0,255), line_w)
  cv2.circle(img,(int(gx_in), int(gy_in)),circle_r,(0,0,0), line_w)

  # Calculate the orientation of the face
  theta = math.asin(2*(gx_in-gx_out)/(d.right()-d.left()))
  degree = theta*180/math.pi
  print("Face orientation:{} (angle:{} degrees)".format(theta, degree))

  # print the face orientation
  if degree < 0:
    textPrefix = " left "
  else:
    textPrefix = " right "
  textShow = textPrefix + str(round(abs(degree), 1)) + " deg."
  cv2.putText(img, textShow, (d.left(), d.top()), fontType, fontSize, color_f, line_w)
Face orientation: 0.30603061652114893 (Angle: 17.534262728448397 degrees)

We can now see that Lena is facing to the right; the angle has been computed as well.
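
For reference, my reading of the formula above: the horizontal shift of the inner landmarks' centroid (gx_in) relative to the outline's centroid (gx_out), divided by half the width of the detected face box, is treated as the sine of the yaw angle, and math.asin recovers that angle in radians. The conversion to degrees can equivalently be written with math.degrees:

# Equivalent to theta*180/math.pi
degree = math.degrees(math.asin(2 * (gx_in - gx_out) / (d.right() - d.left())))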

plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()

Conclusion

OpenCV is a very easy-to-use image analysis library. Deep-learning-based recognition has become popular recently, but when you weigh the gains against the cost, OpenCV is often sufficient. When I am asked whether deep learning can improve the accuracy of an image analysis task, I put together a proposal while considering whether deep learning is really necessary, or whether there is a less expensive way to satisfy the customer.

If the various modules of OpenCV give us sufficient accuracy, we often find that the comparatively high cost of deep learning is simply not needed.