
How to do real-time face recognition using LBP, deep learning, and OpenCV?


I am new to computer vision. I am trying to implement real-time face recognition with Local Binary Patterns, using the deep-learning-based DNN module for the face detection part. I am using the caltech_faces dataset, to which I added a folder with 20 photos of myself.

So, here is my code. I basically converted face recognition code for static sample images into real-time face recognition by making a few changes and additions.

I get the following error when executing the code:

predName = le.inverse_transform([predictions[i]])[0]
                                                    ^
TabError: inconsistent use of tabs and spaces in indentation

I checked all the tabs and indentation but cannot find what needs fixing or where. I would be grateful for a hint about what to do. Thank you very much!
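Incidentally, mixed indentation like this can usually be pinpointed with Python's built-in tabnanny module. A quick check (the script filename here is assumed) looks like this:

import tabnanny

# report any line whose indentation is ambiguous between tabs and spaces
tabnanny.check("realtime_face_recognition.py")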

# import the necessary packages
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imutils.video import VideoStream
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import imutils
import time
import cv2
import os


# creating our face detector

def detect_faces(net, frame, minConfidence=0.5):
    # grab the dimensions of the image and then construct a blob
    # from it
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 177.0, 123.0))

    # pass the blob through the network to obtain the face detections,
    # then initialize a list to store the predicted bounding boxes
    net.setInput(blob)
    detections = net.forward()
    boxes = []

    # loop over the detections
    for i in range(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with
        # the detection
        confidence = detections[0, 0, i, 2]

        # filter out weak detections by ensuring the confidence is
        # greater than the minimum confidence
        if confidence > minConfidence:
            # compute the (x, y)-coordinates of the bounding box for
            # the object
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # update our bounding box results list
            boxes.append((startX, startY, endX, endY))

    # return the face detection bounding boxes
    return boxes


# loading the CALTECH Faces dataset

def load_face_dataset(inputPath, net, minConfidence=0.5, minSamples=15):
    # grab the paths to all images in our input directory, extract
    # the name of the person (i.e., class label) from the directory
    # structure, and count the number of example images we have per
    # face
    imagePaths = list(paths.list_images(inputPath))
    names = [p.split(os.path.sep)[-2] for p in imagePaths]
    (names, counts) = np.unique(names, return_counts=True)
    names = names.tolist()

    # initialize lists to store our extracted faces and associated
    # labels
    faces = []
    labels = []

    # loop over the image paths
    for imagePath in imagePaths:
        # load the image from disk and extract the name of the person
        # from the subdirectory structure
        frame = cv2.imread(imagePath)
        name = imagePath.split(os.path.sep)[-2]

        # only process images that have a sufficient number of
        # examples belonging to the class
        if counts[names.index(name)] < minSamples:
            continue

        # perform face detection
        boxes = detect_faces(net, frame, minConfidence)

        # loop over the bounding boxes
        for (startX, startY, endX, endY) in boxes:
            # extract the face ROI, resize it, and convert it to
            # grayscale
            faceROI = frame[startY:endY, startX:endX]
            faceROI = cv2.resize(faceROI, (47, 62))
            faceROI = cv2.cvtColor(faceROI, cv2.COLOR_BGR2GRAY)

            # update our faces and labels lists
            faces.append(faceROI)
            labels.append(name)

    # convert our faces and labels lists to NumPy arrays
    faces = np.array(faces)
    labels = np.array(labels)

    # return a 2-tuple of the faces and labels
    return (faces, labels)

# implementing Local Binary Patterns for face recognition

# construct the argument parser and parse the arguments
# ap = argparse.ArgumentParser()
# ap.add_argument("-i", "--input", type=str, required=True,
#     help="path to input directory of images")
# ap.add_argument("-f", "--face", default="face_detector",
#     help="path to face detector model directory")
# ap.add_argument("-c", "--confidence", type=float, default=0.5,
#     help="minimum probability to filter weak detections")
# args = vars(ap.parse_args())

# since we are using Jupyter Notebooks we can replace our argument
# parsing code with *hard coded* arguments and values
args = {
    "input": "caltech_faces",
    "face": "face_detector",
    "confidence": 0.5,
}

# load our serialized face detector model from disk
print("[INFO] loading face detector model...")
prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
weightsPath = os.path.sep.join([args["face"], "res10_300x300_ssd_iter_140000.caffemodel"])
net = cv2.dnn.readNet(prototxtPath, weightsPath)

# load the CALTECH faces dataset
print("[INFO] loading dataset...")
(faces, labels) = load_face_dataset(args["input"], net, minConfidence=0.5, minSamples=20)
print("[INFO] {} images in dataset".format(len(faces)))

# encode the string labels as integers
le = LabelEncoder()
labels = le.fit_transform(labels)

# construct our training and testing split
(trainX, testX, trainY, testY) = train_test_split(faces, labels, test_size=0.25, stratify=labels, random_state=42)

# train our LBP face recognizer
print("[INFO] training face recognizer...")
recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=2, neighbors=16, grid_x=8, grid_y=8)
start = time.time()
recognizer.train(trainX, trainY)
end = time.time()
print("[INFO] training took {:.4f} seconds".format(end - start))


# initialize the list of predictions and confidence scores
print("[INFO] gathering predictions...")
predictions = []
confidence = []
start = time.time()

# loop over the test data
for i in range(0, len(testX)):
    # classify the face and update the list of predictions and
    # confidence scores
    (prediction, conf) = recognizer.predict(testX[i])
    predictions.append(prediction)
    confidence.append(conf)

# measure how long making predictions took
end = time.time()
print("[INFO] inference took {:.4f} seconds".format(end - start))

# show the classification report
print(classification_report(testY, predictions, target_names=le.classes_))


# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(2.0)

# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    face = vs.read()
    face = imutils.resize(face, width=400)

    # loop over the detections
    for i in range(0, detections.shape[2]):
        # grab the predicted name and actual name
        predName = le.inverse_transform([predictions[i]])[0]
        actualName = le.classes_[testY[i]]

        # draw the predicted name and actual name on the image
        cv2.putText(face, "pred: {}".format(predName), (5, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.putText(face, "actual: {}".format(actualName), (5, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

        # display the predicted name, actual name, and confidence of the
        # prediction (i.e., chi-squared distance; the *lower* the distance
        # is the *more confident* the prediction is)
        print("[INFO] prediction: {}, actual: {}, confidence: {:.2f}".format(predName, actualName, confidence[i]))

    # show the output frame
    cv2.imshow("Face", face)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
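Note that the loop above also indexes predictions and testY, which belong to the static test split, and detections is never defined inside the while loop; for true real-time recognition, each frame has to be detected and classified on the fly. A minimal sketch of that restructuring (an assumption about the intent, reusing detect_faces, net, recognizer, and le from above) might look like:

# sketch: run detection + LBPH prediction on every webcam frame
while True:
    # grab and resize the current frame
    frame = vs.read()
    frame = imutils.resize(frame, width=400)

    # detect faces in this frame with the same DNN helper as above
    boxes = detect_faces(net, frame, minConfidence=args["confidence"])

    # classify every detected face with the trained LBPH recognizer
    for (startX, startY, endX, endY) in boxes:
        faceROI = frame[startY:endY, startX:endX]
        faceROI = cv2.resize(faceROI, (47, 62))
        faceROI = cv2.cvtColor(faceROI, cv2.COLOR_BGR2GRAY)
        (prediction, conf) = recognizer.predict(faceROI)
        predName = le.inverse_transform([prediction])[0]

        # draw the box and predicted name (lower chi-squared = more confident)
        cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)
        cv2.putText(frame, "{}: {:.2f}".format(predName, conf), (startX, startY - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

    # show the frame and exit on `q`
    cv2.imshow("Face", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()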

Solution

I used Google Colab for this. First, make sure you have OpenCV installed. You can install it with pip:

pip install opencv-python

Before detecting faces, we need to open the webcam from Google Colab. Run the following code as the second step.

from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode

def take_photo(filename='photo.jpg', quality=0.8):
    js = Javascript('''
        async function takePhoto(quality) {
            const div = document.createElement('div');
            const capture = document.createElement('button');
            capture.textContent = 'Capture';
            div.appendChild(capture);

            const video = document.createElement('video');
            video.style.display = 'block';
            const stream = await navigator.mediaDevices.getUserMedia({video: true});

            document.body.appendChild(div);
            div.appendChild(video);
            video.srcObject = stream;
            await video.play();

            // Resize the output to fit the video element.
            google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);

            // Wait for Capture to be clicked.
            await new Promise((resolve) => capture.onclick = resolve);

            const canvas = document.createElement('canvas');
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            canvas.getContext('2d').drawImage(video, 0, 0);
            stream.getVideoTracks()[0].stop();
            div.remove();
            return canvas.toDataURL('image/jpeg', quality);
        }
    ''')
    display(js)
    data = eval_js('takePhoto({})'.format(quality))
    binary = b64decode(data.split(',')[1])
    with open(filename, 'wb') as f:
        f.write(binary)
    return filename

After running these two pieces of code, the webcam opens and you can take a photo. The photo is saved as photo.jpg.
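For example (a usage sketch, not part of the original answer), the capture can be triggered and previewed inline in the notebook:

from IPython.display import Image, display

# trigger the capture button and save the frame to photo.jpg
filename = take_photo()

# preview the captured photo inline
display(Image(filename))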

Face detection with Haar cascades is a machine-learning-based approach in which a cascade function is trained on a set of input data. OpenCV already ships with many pre-trained classifiers for faces, eyes, smiles, and so on; here we will use the face classifier. You can also experiment with the other classifiers.
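A minimal sketch of that detection step, assuming the photo.jpg captured above and OpenCV's bundled haarcascade_frontalface_default.xml:

import cv2

# load OpenCV's bundled pre-trained frontal face cascade
cascadePath = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)

# read the captured photo and convert it to grayscale for detection
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect faces; scaleFactor and minNeighbors are typical starting values
faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# draw a rectangle around every detected face and save the result
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
print("found {} face(s)".format(len(faces)))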
