Make sure you have Python, OpenFace and dlib installed. You can either install them manually or use a preconfigured Docker image that has everything already installed:
docker pull bamos/openface
docker run -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface
Pro-tip: if you are using Docker on OSX, you can make your OSX /Users/ folder visible inside the Docker container like this:
docker run -v /Users:/host/Users -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface
Then you can access all your OSX files inside the container at /host/Users/...
ls /host/Users/
Make a folder called ./training-images/ inside the openface folder.
mkdir training-images
Make a subfolder for each person you want to recognize. For example:
mkdir ./training-images/will-ferrell/
mkdir ./training-images/chad-smith/
mkdir ./training-images/jimmy-fallon/
Copy all your images of each person into the correct sub-folders. Make sure only one face appears in each image. There's no need to crop the image around the face. OpenFace will do that automatically.
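If you want to double-check that requirement up front, here is a minimal sketch (not part of the tutorial) that flags any training image where dlib does not find exactly one face. It assumes dlib and scikit-image are available, as they are in the bamos/openface image, and that the subfolders contain only image files:
import os
import dlib
from skimage import io
# Pre-flight check: warn about training images without exactly one face.
detector = dlib.get_frontal_face_detector()
for person in os.listdir('./training-images'):
    folder = os.path.join('./training-images', person)
    if not os.path.isdir(folder):
        continue
    for name in os.listdir(folder):
        img = io.imread(os.path.join(folder, name))
        faces = detector(img, 1)  # upsample once so smaller faces are found
        if len(faces) != 1:
            print('%s/%s: found %d faces, expected exactly 1' % (person, name, len(faces)))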
Run the openface scripts from inside the openface root directory:
First, do pose detection and alignment:
./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96
This will create a new ./aligned-images/ subfolder with a cropped and aligned version of each of your test images.
Second, generate the representations from the aligned images:
./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
After you run this, the ./generated-embeddings/ sub-folder will contain a csv file with the embeddings for each image.
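To sanity-check that output, you can load the files yourself. As far as I know, batch-represent writes reps.csv (one 128-dimensional embedding per row) and labels.csv (a numeric class id and image path per row, in the same order); a small sketch, assuming those file names:
import numpy as np
# Each row of reps.csv should be the 128-d embedding of one aligned image.
reps = np.loadtxt('./generated-embeddings/reps.csv', delimiter=',', ndmin=2)
print(reps.shape)  # expect (number_of_images, 128)
# labels.csv pairs a numeric class id with each aligned image path.
with open('./generated-embeddings/labels.csv') as f:
    for line in f:
        print(line.strip())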
Third, train your face detection model:
./demos/classifier.py train ./generated-embeddings/
This will generate a new file called ./generated-embeddings/classifier.pkl. This file has the SVM model you'll use to recognize new faces.
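If you are curious what ended up in that file: the demo script pickles a (LabelEncoder, classifier) pair, at least in the version shipped with the image, so a short sketch like this lets you inspect the trained model (it was written by Python 2, hence the plain 'r' mode; Python 3 would need 'rb'):
import pickle
with open('./generated-embeddings/classifier.pkl', 'r') as f:
    (le, clf) = pickle.load(f)
print(le.classes_)  # the person names the model was trained on
print(clf)          # the fitted scikit-learn estimator (an SVM by default)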
At this point, you should have a working face recognizer!
Get a new picture with an unknown face. Pass it to the classifier script like this:
./demos/classifier.py infer ./generated-embeddings/classifier.pkl your_test_image.jpg
You should get a prediction that looks like this:
=== /test-images/will-ferrel-1.jpg ===
Predict will-ferrell with 0.73 confidence.
From here, it's up to you to adapt the ./demos/classifier.py Python script to work however you want.
Important notes:
- If you get bad results, try adding a few more pictures of each person in Step 3 (especially pictures in different poses).
- This script will always make a prediction even if the face isn't one it knows. In a real application, you would look at the confidence score and throw away predictions with a low confidence since they are most likely wrong.
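Here is a minimal sketch of that thresholding idea. It assumes you already have the 128-dimensional embedding rep of the test face (computed the same way demos/classifier.py computes it) and the (le, clf) pair loaded from classifier.pkl; the 0.5 cutoff is an arbitrary placeholder you should tune on your own data:
import numpy as np
def predict_with_threshold(le, clf, rep, threshold=0.5):
    # Class probabilities for this embedding, reshaped to a single-sample batch.
    probs = clf.predict_proba(rep.reshape(1, -1)).ravel()
    best = np.argmax(probs)
    if probs[best] < threshold:
        return ('unknown', probs[best])  # too uncertain, reject the prediction
    # Newer scikit-learn versions may need le.inverse_transform([best])[0].
    return (le.inverse_transform(best), probs[best])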
I used the preconfigured Docker image, as mentioned in the blog.
I put a single face image in a subdirectory under the training-images directory, ran the command mentioned in Step 3, and then executed the command for Step 4.
Pose detection and alignment:
./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96
Output:
=== ./training-images/subhadeep/IMG_20180219_180131.jpg ===
Generate the representations from the aligned images:
./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
Output:
{
data : "./aligned-images/"
imgDim : 96
model : "/root/openface/models/openface/nn4.small2.v1.t7"
device : 1
outDir : "./generated-embeddings/"
cache : false
cuda : false
batchSize : 50
}
./aligned-images/
cache lotation: /root/openface/aligned-images/cache.t7
Creating metadata for cache.
{
sampleSize :
{
1 : 3
2 : 96
3 : 96
}
split : 0
verbose : true
paths :
{
1 : "./aligned-images/"
}
samplingMode : "balanced"
loadSize :
{
1 : 3
2 : 96
3 : 96
}
}
running "find" on each class directory, and concatenate all those filenames into a single file containing all image paths for a given class
now combine all the files to a single large file
load the large concatenated list of sample paths to self.imagePath
1 samples found...... 0/1 ......................] ETA: 0ms | Step: 0ms
Updating classList and imageClass appropriately
[=================== 1/1 =====================>] Tot: 0ms | Step: 0ms
Cleaning up temporary files
Splitting training and test sets to a ratio of 0/100
nImgs: 1
Represent: 1/1
Later, when I ran the command to "train your face detection model", I received an error:
./demos/classifier.py train ./generated-embeddings/
Output:
/root/.local/lib/python2.7/site-packages/sklearn/lda.py:4: DeprecationWarning: lda.LDA has been moved to discriminant_analysis.LinearDiscriminantAnalysis in 0.17 and will be removed in 0.19
"in 0.17 and will be removed in 0.19", DeprecationWarning)
Loading embeddings.
Training for 1 classes.
Traceback (most recent call last):
File "./demos/classifier.py", line 291, in
train(args)
File "./demos/classifier.py", line 166, in train
clf.fit(embeddings, labelsNum)
File "/root/.local/lib/python2.7/site-packages/sklearn/svm/base.py", line 151, in fit
y = self._validate_targets(y)
File "/root/.local/lib/python2.7/site-packages/sklearn/svm/base.py", line 521, in _validate_targets
% len(cls))
ValueError: The number of classes has to be greater than one; got 1
Can you help me resolve this issue?