In the rapidly evolving field of artificial intelligence, efficient face detection is a necessity. The ‘faced’ library is an innovative solution designed for near real-time face detection even on CPUs. In this guide, we will cover how to install and use the ‘faced’ library, alongside troubleshooting tips for an optimal experience.
How to Install
To install the ‘faced’ library, run the following command:

```bash
$ pip install git+https://github.com/iitzco/faced.git
```
Note that it will soon be available on PyPI.
How to Use
As a Library
To utilize ‘faced’ as a library in your Python scripts, follow these steps:
```python
import cv2

from faced import FaceDetector
from faced.utils import annotate_image

face_detector = FaceDetector()

img_path = "imgs/demo.png"  # path to the image you want to process
thresh = 0.85               # discard detections below this confidence

img = cv2.imread(img_path)
rgb_img = cv2.cvtColor(img.copy(), cv2.COLOR_BGR2RGB)

# Receives an RGB numpy image (HxWxC) and returns a list of
# (x_center, y_center, width, height, prob) tuples.
bboxes = face_detector.predict(rgb_img, thresh)

# Annotate the original BGR image with the detected boxes.
ann_img = annotate_image(img, bboxes)

# Show the annotated image.
cv2.imshow("image", ann_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
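Since `predict()` returns center-format tuples, downstream code that crops or draws faces usually needs corner coordinates first. The helper below is a minimal sketch (not part of the ‘faced’ API) showing that conversion:

```python
def to_corners(bbox):
    """Convert a faced-style (xc, yc, w, h, prob) tuple to corner coordinates."""
    xc, yc, w, h, prob = bbox
    x_min = int(xc - w / 2)
    y_min = int(yc - h / 2)
    x_max = int(xc + w / 2)
    y_max = int(yc + h / 2)
    return x_min, y_min, x_max, y_max, prob

# Example: a 50x80 face centered at (100, 120) with 0.9 confidence.
print(to_corners((100, 120, 50, 80, 0.9)))  # (75, 80, 125, 160, 0.9)
```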
As a Command-Line Program
The ‘faced’ library can also be used directly from the command line. Here are some commands:
- Detection on an image, saving the output:

```bash
$ faced --input imgs/demo.png --save
```

- Live webcam detection:

```bash
$ faced --input webcam
```

- Detection on a video with a low decision threshold:

```bash
$ faced --input imgs/demo.mp4 --threshold 0.5
```
For additional information, run `faced --help`.
Examples of the Faced Library Performance
Example images and videos processed with the ‘faced’ library can be found in the project repository.
Performance Comparison
Here is a performance summary when using ‘faced’ on different hardware:
| Hardware | Approx. throughput |
|---|---|
| CPU (i5 2015 MBP) | ~5 FPS |
| GPU (Nvidia TitanXP) | 70 FPS |
How Does Faced Compare with Haar Cascades?
While Haar Cascades have been widely used for face detection, ‘faced’ demonstrates greater robustness, particularly on non-frontal faces and under difficult lighting, where Haar Cascades tend to fail.
How the Faced Library Works
The ‘faced’ library employs a two-stage approach using deep neural networks implemented with TensorFlow.
Stage 1:
Utilizes a custom fully convolutional neural network (FCNN) based on the YOLO architecture. It processes a 288×288 RGB image to output a grid of predicted bounding boxes and probabilities for detected faces.
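To make the "grid of predicted bounding boxes" idea concrete, here is a hypothetical sketch of YOLO-style grid decoding. The grid size, cell layout, and output format here are illustrative assumptions, not ‘faced’'s actual internals: each cell of a GxG grid predicts (dx, dy, w, h, prob), with dx, dy as offsets within the cell and w, h normalized to the image size.

```python
import numpy as np

def decode_grid(grid, img_size=288, thresh=0.8):
    """Turn a GxGx5 prediction grid into (xc, yc, w, h, prob) boxes in pixels."""
    g = grid.shape[0]
    cell = img_size / g  # side length of one grid cell in pixels
    boxes = []
    for row in range(g):
        for col in range(g):
            dx, dy, w, h, prob = grid[row, col]
            if prob < thresh:
                continue  # keep only confident cells
            x_center = (col + dx) * cell
            y_center = (row + dy) * cell
            boxes.append((x_center, y_center, w * img_size, h * img_size, prob))
    return boxes

# One confident face predicted at the center of a 9x9 grid.
grid = np.zeros((9, 9, 5), dtype=np.float32)
grid[4, 4] = (0.5, 0.5, 0.2, 0.3, 0.95)
print(decode_grid(grid))
```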

Stage 2:
This stage employs a standard CNN that refines the bounding box outputs from Stage 1, thereby improving accuracy.
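Whatever the refinement network looks like internally, a second stage that operates on Stage-1 boxes needs valid crop regions; Stage-1 boxes near the image border can extend past it. A minimal helper for that (an illustrative utility, not part of ‘faced’):

```python
def clip_box(x_min, y_min, x_max, y_max, width, height):
    """Clip corner coordinates to image bounds so a crop is always valid."""
    return (max(0, x_min), max(0, y_min), min(width, x_max), min(height, y_max))

# A box hanging off the left and right edges of a 288x288 image.
print(clip_box(-10, 5, 300, 290, 288, 288))  # (0, 5, 288, 288)
```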

Why Not Use Pre-Trained YOLO Models?
Pre-trained general-purpose detectors such as YOLO target multi-class object detection, which is overkill for a single-class task like face detection. Such models are also too large and complex to run efficiently on CPUs. ‘Faced’ demonstrates that a much lighter network can handle a dedicated task effectively.
Training and Development
The ‘faced’ models were trained on the WIDER FACE dataset using an Nvidia Titan XP GPU. For more on training methods or data preprocessing, feel free to reach out!
Running on GPU
To run ‘faced’ on a GPU, install the GPU build of TensorFlow (`tensorflow-gpu`) instead of the standard `tensorflow` package.
Troubleshooting
In case you encounter issues while using the ‘faced’ library, consider the following tips:
- Ensure that all required libraries are installed and updated.
- If you experience low performance on CPU, try running it on a GPU.
- Verify that the input images are in the correct format and resolution.
- For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
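The input-format check suggested above can be automated. This is a minimal sketch under the assumption, stated in the library usage section, that `predict()` expects an HxWxC RGB numpy image; the helper name and uint8 requirement are illustrative:

```python
import numpy as np

def check_input(img):
    """Verify an array looks like the HxWx3 uint8 image predict() expects."""
    if not isinstance(img, np.ndarray):
        raise TypeError("expected a numpy array, got %r" % type(img))
    if img.ndim != 3 or img.shape[2] != 3:
        raise ValueError("expected an HxWx3 image, got shape %r" % (img.shape,))
    if img.dtype != np.uint8:
        raise ValueError("expected dtype uint8, got %s" % img.dtype)
    return True

print(check_input(np.zeros((288, 288, 3), dtype=np.uint8)))  # True
```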
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.