Welcome to this guide where we will embark on an exciting journey into the world of face recognition using the Raspberry Pi! This small yet powerful computer is capable of remarkable tasks, and with the help of the OpenCV library, we can make it recognize faces in real-time. Let’s get started!
What is Raspberry Pi?
Raspberry Pi is a versatile mini-computer, no larger than a credit card, that can perform a multitude of tasks, albeit with limited computational power. Think of it as the little engine that could: it may not match the speed of a high-end laptop, but it delivers all the necessary functionalities while consuming significantly less energy.
What is Face Recognition?
Face recognition is a biometric software technique used to identify individuals based on their facial features. The system captures images via a digital camera, trains itself on that data, and can then identify the individual when presented with a new image of their face.
What is OpenCV Library?
OpenCV (Open Source Computer Vision Library) is an open-source library designed to support computer vision and machine learning applications effectively. Imagine it as a toolkit suitable for a wide array of computer vision challenges, including face recognition! OpenCV works with programming languages such as C++, Python, and Java and is compatible with various operating systems like Windows, Linux, and macOS.
Project Overview
In this project, we will learn how to detect and recognize faces on a Raspberry Pi using a camera module. Here are the steps we will tackle: installing the OpenCV library, testing the camera, face detection, data gathering, training the recognizer, and running the recognizer.
Requirements
Here is what you’ll need to get started:
Hardware Requirements: a Raspberry Pi (this guide uses a Pi V3) and a Raspberry Pi camera module.
Software Requirements: Python with the OpenCV, NumPy, and Pillow (PIL) libraries installed.
Procedure
Step 1: Installing OpenCV Library
Begin by installing the OpenCV library. For Raspberry Pi V3, the ideal method is to follow the amazing tutorial by Adrian Rosebrock. After completing the installation, enter the virtual environment using:
source ~/.profile
Then activate the virtual environment:
workon cv
If successful, your prompt should appear like this: (cv) pi@raspberry:~$
Next, check that OpenCV is successfully installed by entering the Python interpreter and importing OpenCV:
import cv2
If no errors pop up and you can check the OpenCV version with cv2.__version__, you are good to go!
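For reference, a quick check inside the virtual environment might look like the following (the version string shown here is only an example; yours will reflect the build you installed):
(cv) pi@raspberry:~$ python
>>> import cv2
>>> cv2.__version__
'3.4.1'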
Step 2: Testing the Camera
With OpenCV up and running, it’s time to test the Pi camera. If you encounter an “Assertion failed” error, the camera’s V4L2 driver is probably not loaded. Solve this by running:
sudo modprobe bcm2835-v4l2
Then, enter the following code to display the live camera feed in both color and grayscale:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
cap.set(3, 640)  # set width
cap.set(4, 480)  # set height

while True:
    ret, frame = cap.read()
    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow('frame', frame)
        cv2.imshow('gray', gray)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # press ESC to quit
        break

cap.release()
cv2.destroyAllWindows()
If you see the live feed, your camera is working perfectly!
Step 3: Face Detection
Let’s create a face detection system using the Haar Cascade classifier. The classifier comes pre-trained on a large set of face and non-face images, so your Raspberry Pi can tell faces from non-faces out of the box, much like a dog that has already been shown enough pictures to recognize faces in real life. Download the necessary code, along with the haarcascade_frontalface_default.xml cascade file, and execute it.
import numpy as np
import cv2

faceCascade = cv2.CascadeClassifier('Cascades/haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
cap.set(3, 640)  # set width
cap.set(4, 480)  # set height

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(20, 20)
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    cv2.imshow('video', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # press ESC to quit
        break

cap.release()
cv2.destroyAllWindows()
Step 4: Data Gathering
Next, we need to start capturing faces to create a dataset. This is similar to preparing a recipe before cooking: having the right ingredients ready in advance leads to a successful dish. Use the code provided in the tutorial for this task; a minimal capture sketch is also shown below.
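If you do not have that code handy, here is a minimal sketch of what a data-gathering script can look like. It is written around the assumptions the trainer in Step 5 relies on: a dataset folder next to the script, the Haar cascade XML file in the working directory, a numeric user ID entered at the command line, and images saved as User.<id>.<count>.jpg so the ID can later be parsed from the file name.
import cv2
import os

# create the dataset folder if it does not exist yet
os.makedirs('dataset', exist_ok=True)

# assumes the cascade XML file sits next to this script
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cam = cv2.VideoCapture(0)
cam.set(3, 640)  # set width
cam.set(4, 480)  # set height

face_id = input('Enter a numeric user id and press <return>: ')
print('[INFO] Initializing face capture. Look at the camera...')

count = 0
while True:
    ret, img = cam.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
        count += 1
        # save the captured face as dataset/User.<id>.<count>.jpg
        cv2.imwrite('dataset/User.{}.{}.jpg'.format(face_id, count), gray[y:y+h, x:x+w])
    cv2.imshow('image', img)
    k = cv2.waitKey(100) & 0xff
    if k == 27 or count >= 30:  # stop on ESC or after 30 samples
        break

cam.release()
cv2.destroyAllWindows()
The sample count of 30 is only a starting point; more images per person generally give the trainer in the next step a more accurate model.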
Step 5: Trainer
In this crucial phase, you’ll use the collected data to train the recognizer. Think of it as handing your recipe to an automated chef, who learns to recreate the dish from your ingredients. Run the following script to initiate the training:
import cv2
import numpy as np
from PIL import Image
import os

path = './dataset'

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    faceSamples = []
    ids = []
    for imagePath in imagePaths:
        PIL_img = Image.open(imagePath).convert('L')  # convert it to grayscale
        img_numpy = np.array(PIL_img, 'uint8')
        # the user id is the second field of the file name, e.g. User.1.7.jpg -> 1
        id = int(os.path.split(imagePath)[-1].split('.')[1])
        faces = detector.detectMultiScale(img_numpy)
        for (x, y, w, h) in faces:
            faceSamples.append(img_numpy[y:y+h, x:x+w])
            ids.append(id)
    return faceSamples, ids

faces, ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))
recognizer.write('trainer.yml')
print("[INFO] {0} faces trained. Exiting Program.".format(len(np.unique(ids))))
Step 6: Recognizer
Finally, it’s time to recognize a face! This is like unveiling a magic trick—you put your hard work into it and marvel as it happens in real-time. Download the necessary script and execute it:
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer.yml')
cascadePath = '/home/pi/opencv-3.4.1/data/haarcascades/haarcascade_frontalface_default.xml'
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX

id = 0
# names related to ids: names[1] corresponds to id 1, and so on
names = ['None', 'Kunal', 'Kaushik', 'Atharv', 'Z', 'W']

cam = cv2.VideoCapture(0)
cam.set(3, 640)  # set video width
cam.set(4, 480)  # set video height

# define the minimum window size to be recognized as a face
minW = 0.1 * cam.get(3)
minH = 0.1 * cam.get(4)

while True:
    ret, img = cam.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5, minSize=(int(minW), int(minH)))
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
        id, confidence = recognizer.predict(gray[y:y+h, x:x+w])
        # confidence is a distance: lower values mean a closer match
        if confidence < 100:
            id = names[id]
            confidence = " {0}%".format(round(100 - confidence))
        else:
            id = "unknown"
            confidence = " {0}%".format(round(100 - confidence))
        cv2.putText(img, str(id), (x+5, y-5), font, 1, (255, 255, 255), 2)
        cv2.putText(img, str(confidence), (x+5, y+h-5), font, 1, (255, 255, 0), 1)
    cv2.imshow('camera', img)
    k = cv2.waitKey(10) & 0xff  # press ESC to quit
    if k == 27:
        break

print("[INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()
Troubleshooting Ideas
- If you receive an assertion error from OpenCV, make sure the camera driver is loaded; run sudo modprobe bcm2835-v4l2 to resolve it.
- Ensure all required libraries and dependencies are properly installed. Missing libraries are often the culprit behind unexpected errors.
- If you run into issues during training, check that enough images were captured to build an accurate model.
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
With this guide, you should now have a functioning real-time face recognition system on your Raspberry Pi! Remember, experimentation is key, so feel free to tweak your code and explore more functionalities to make your project even more exciting.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.