Face detection and recognition are among the most popular research areas in computing, offering advanced, algorithm-based techniques for analysing images. The technology is mainly used in offices for attendance marking as well as for security. This study evaluates systems for multiple face detection, and its major objective is to evaluate the algorithms on which those systems rely.
Face recognition is mainly formulated over still images: identifying one or more faces against images stored in a database. Face detection is now regarded as one of the more successful and advanced technologies in computing. It locates the exact position of human faces in digital images while ignoring everything that surrounds them. Face recognition, by contrast, remains one of the harder tasks in computer systems, requiring sophisticated algorithms and regularly updated software to run. The main problems arise when implementing new techniques and innovations in such systems.
Several problems complicate the task, such as variation in scale, orientation, exact location, expression, lighting conditions, and more. Fundamentally, recognition is the process of establishing the identity of an individual person in terms of physical and other attributes. In modern society, biometrics is used for large-scale identity management. Face recognition systems use many algorithms, which creates difficulties in obtaining reliable values, and the algorithms used in this particular application are complicated to implement (Samarasinghe, 2016). The Haar cascade recogniser is used as the ranking algorithm in face recognition. Most older face recognition systems are slow and cannot process multiple images at a time. Image quality is also often poor, and there is little related research that highlights this impact directly. The Haar cascade recogniser and the LBPH detector involve many variables and mathematical formulae, which makes it difficult to obtain exact values for the study; many assumptions are made for both, and these may affect the exactness of the values derived from the formulae.
Based on these two algorithms, the multiple face detection process will be developed in this study. A significant part of face detection work is formulated over images and videos stored in the system's database. Face recognition is one of the best-studied problems in computer applications, combining image sensing with algorithmic properties. It is based on still images and on verifying two or more people against records stored in the database management system (Huang et al. 2015). The technology has various applications in image indexing and information retrieval: it can be used to search for images that contain people, to associate faces with names, and, through clustering, to identify the primary person. Face recognition is also used to determine user attention; for example, on a public-facing screen, once a face is detected, the person's age and sex can be estimated for advertising purposes. The application is likewise useful in biometrics, where it was among the first advanced technologies requiring face evaluation.
Face recognition technology has adopted many methods over the last few years, but among them the classical methods remain prevalent. The best-known approaches can be classified into principal component analysis, discriminant analysis, and discrete transformation. The method of eigenfaces is used by many researchers and is a primary element of the technology: it discriminates many input variables into several classes (Li & Hua, 2015). The original image data can be decomposed using the PCA (Principal Component Analysis) formulation. One of the essential features of PCA is that it can reconstruct an approximation of the original image from the eigenfaces. Eigenfaces are thus the central element in face recognition of this kind: they represent the main features of the faces, features that may not be directly apparent in the original form of the image.
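The eigenface decomposition described above can be sketched with NumPy. The helper names below (`compute_eigenfaces`, `project`, `reconstruct`) are illustrative rather than taken from any particular library, and the toy data stands in for real flattened face images:

```python
import numpy as np

def compute_eigenfaces(images, n_components):
    """Derive eigenfaces from a stack of flattened face images via PCA.

    `images` is an (n_samples, n_pixels) array; each row is one face.
    """
    mean_face = images.mean(axis=0)
    centred = images - mean_face
    # SVD of the centred data yields the principal components directly.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean_face, vt[:n_components]        # (n_components, n_pixels)

def project(face, mean_face, eigenfaces):
    """Express a face as a vector of weights over the eigenfaces."""
    return eigenfaces @ (face - mean_face)

def reconstruct(weights, mean_face, eigenfaces):
    """Rebuild an approximation of the original face from its weights."""
    return mean_face + weights @ eigenfaces
```

Keeping only the leading components gives the approximate reconstruction property the text mentions: the rebuilt face captures the main facial variation while discarding fine detail.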
Among the models used at large scale are artificial neural networks, and the Viola-Jones algorithm has also been improvised upon. Filtering methods for false positives give insight into the different colours implicated in false face detections. Many of these existing systems are old and lack proper specification and detail, which needs to change. In the next part of the study, the advantages of multiple face recognition will be stated, following multi-scale LBPs.
Facial recognition has several disadvantages relating to image quality, size, viewing angle, and processing. First, image quality fundamentally affects how well the recognition algorithms work: frames scanned from video are deficient compared to digital camera images, and this degrades the entire detection process. Storage and processing also pose significant difficulties. The viewing angle matters for recognising the real image of a person (Minaee & Wang, 2015): to obtain a usable face, the recognition software must cope with many angles, which creates a massive problem in the detection process. Most systems use 2D facial photographs, and in this format multiple faces often cannot be detected at once. Subject motion produces inaccurate images and creates further problems for the recognition system. Greater accuracy requires updated software, which is very costly on the market. Hazy images and unfavourable camera angles likewise hinder the process.
The primary advantage of facial recognition in a computer system is that the process of integration is smooth and easy. Local binary patterns consist of an LBP operator, a firm texture measure for capturing complexity exactly. The proposed system will automatically measure multiple face data at the same time, based on previously loaded images and videos (Richardson et al. 2017). This study will be based on Linear Discriminant Analysis (LDA) operating on local binary patterns. Through this process, the original image is converted to uniform LBP (LBP u2, 8-neighbour) images. Apart from the original image, the face recognition system will also load normalised and cropped images. LBP images are stored according to their different patterns of black, white, and grey spots. An experimental setup will be described briefly to increase the effectiveness of the process.
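As a minimal sketch of the LBP operator mentioned above, the basic 8-neighbour, radius-1 variant can be computed with NumPy. Real LBPH implementations use circular sampling and the uniform-pattern (u2) mapping; this simplified version only compares each pixel with its 3x3 neighbourhood:

```python
import numpy as np

def lbp_image(gray):
    """Compute the basic 8-neighbour Local Binary Pattern of a grayscale image.

    Each interior pixel is compared with its 8 neighbours; a neighbour that
    is greater than or equal to the centre contributes a 1-bit, and the 8
    bits are packed into a code in [0, 255].
    """
    gray = np.asarray(gray, dtype=np.int32)
    centre = gray[1:-1, 1:-1]
    # Neighbour offsets in clockwise order, starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        codes |= ((neighbour >= centre).astype(np.int32) << bit)
    return codes
```

On a perfectly flat region every neighbour equals the centre, so every bit is set; textured regions produce the varied black/white/grey patterns the text describes.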
An automated time tracker is used for time tracking; with this method there is no need to monitor the system 24 hours a day, and the automation removes manual error. The automated tracker is also helpful in attendance systems that use facial recognition (Siddiqi et al. 2017). 3D technology is used nowadays to detect more than one face at a time; its accuracy is remarkably higher than that of 2D technology, and the security it provides is far better. It can also operate without the knowledge of the user. Overall, a face recognition system is easy to handle, offers a better security structure, saves time, and keeps the cost of software development low.
System Requirements and Software
Processor: above 550 MHz
RAM: minimum of 6 GB
Hard disk: minimum 8 GB
Input devices: keyboard and mouse
Output devices: high-resolution monitor and VGA output
Operating system: Windows 10 and above, macOS, or Linux
Programming: Python 3.7.0 and related libraries
Python is an interactive, general-purpose, high-level, object-oriented programming language developed by Guido van Rossum and first released in 1991. Its source code is also available under the General Public License. It is mainly used for web development, scripting, software development, and mathematics.
According to Tim Peters, Python's design philosophy can be expressed in many aphorisms, documented in the Zen of Python. Some of them are:
- Beautiful is better than ugly.
- Explicit is better than implicit.
- Simple is better than complex.
- Complex is better than complicated.
- Readability counts.
Beyond its philosophy, Python offers practical advantages:
- Python works on most platforms (Mac, Windows, Raspberry Pi, Linux, etc.).
- Python's syntax is closer to English than that of most languages.
- For similar tasks, Python uses fewer lines of code than other languages.
- Python code can be executed as soon as it is written, since it uses an interpreter, so prototyping can be quick.
- Python can be used in a functional, procedural, or object-oriented style.
Python is designed to be highly extensible rather than building all its functionality into its core. This has made it popular, since programmers can add interfaces as they deem necessary; different libraries and packages can be included and modified to suit the desired result (geeksforgeeks.org, 2019).
OpenCV (Open Source Computer Vision Library) is a software library for computer vision and machine learning, created to promote machine and deep learning in commercial products and applications. It contains a considerable number of optimised algorithms, including both state-of-the-art and classic machine learning methods. These can be used to identify faces, recognise objects, produce 3D point clouds from camera data, extract 3D models, find matching images in a database, stitch images together into a higher-resolution picture of an entire scene, remove red-eye from flash photographs, follow eye movements, recognise scenery, and so on.
Well-known tech giants such as Google, Microsoft, and Yahoo make use of this library, but it is most popular with startups; using it has enabled different minds in the industry to produce innovative approaches to different problems. It supports most programming languages and open-source platforms and is inclined towards real-time applications. OpenCV is itself open source, allowing developers to modify its code and contribute public changes to its architecture. It is written in C/C++, so the library can take advantage of multi-core processing, and it is optimised with OpenCL, which enables hardware acceleration on the underlying compute platform.
Face detection combines computer intelligence with image and data processing to match face information against stored data for security or recognition purposes. A complete face recognition pipeline uses three main phases that work together to achieve the result. They are classified as:
- Face Detection and Identification
- Recognition Algorithm
- Face recognition
Face detection algorithms are used to locate a human face in a particular scene. The detection techniques in practice divide into two types of scene: controlled backgrounds and unconstrained scenes.
Finding faces in controlled backgrounds: this covers the detection of faces against single-colour backgrounds (Bourlai, 2016). The faces are separated from the background by virtue of their motion or colour and are then put through recognition algorithms.
- Identification on the basis of colour: this approach uses colour as the indicator for detection. Colour is an efficient and fruitful cue that remains robust under partial occlusion and changes in depth, and it can easily be combined with motion for detection.
The colour-based pipeline proceeds as follows:
- A Gaussian mixture distribution models the colour ranges in the picture.
- Expectation maximisation updates the mixture components.
- The fitted model assigns each pixel a probability of being skin.
- The size of the face box is approximated by computing the standard deviation of pixel positions weighted by the probability of the pixels.
- Identification on the basis of motion: here, faces in motion are captured by dividing the procedure into four parts: frame differencing, noise removal, thresholding, and adding up pixels. These are carried out by calculating the difference between the present and previous frames.
- Combining the colour and motion techniques above.
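The colour and motion cues above can be sketched in NumPy. For brevity, a single Gaussian component stands in for the full mixture, and a 3x3 mean filter stands in for a proper noise-removal step; the function names are illustrative:

```python
import numpy as np

def skin_probability(pixels, mean, cov):
    """Per-pixel likelihood under one Gaussian skin-colour component.

    `pixels` is an (H, W, 2) array of chromaticity values; `mean` (2,)
    and `cov` (2, 2) describe one fitted component (a full mixture would
    sum several such terms, weighted by EM-estimated priors).
    """
    diff = pixels - mean
    # Mahalanobis distance of every pixel from the component mean.
    maha = np.einsum('hwi,ij,hwj->hw', diff, np.linalg.inv(cov), diff)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * maha)

def box_size(prob):
    """Face-box extent as the probability-weighted standard deviation
    of the pixel coordinates, as described above."""
    ys, xs = np.mgrid[:prob.shape[0], :prob.shape[1]]
    w = prob / prob.sum()
    cy, cx = (w * ys).sum(), (w * xs).sum()
    return (np.sqrt((w * (ys - cy) ** 2).sum()),
            np.sqrt((w * (xs - cx) ** 2).sum()))

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Motion cue: frame differencing, 3x3 mean filtering for noise
    removal, thresholding, and summing up the changed pixels."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    padded = np.pad(diff, 1, mode='edge')
    smooth = sum(padded[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
                 for dy in range(3) for dx in range(3)) / 9.0
    mask = smooth > threshold
    return mask, int(mask.sum())
```

Combining the two cues then amounts to intersecting the skin-probability region with the motion mask before estimating the face box.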
Finding faces in unconstrained scenes: in these images the situation is more challenging, as they are often black and white; a human can identify the faces, but algorithms are needed to do so automatically (Ahmed et al. 2018). This can be done by model-based tracking and by weak classifier cascades, which include the Haar algorithm and deep learning.
Face recognition in the general case deals with identifying individual faces within a multitude of faces. This study will deal with how multiple faces can be identified using recognition techniques.
Face recognition using Haar cascades: a Haar cascade performs advanced detection by training on combinations of positive and negative images, where positive images contain faces and negative images do not. Features are extracted from these images, each acquiring a value from pixel operations over as few as four pixels. These techniques, however, yield a great many feature values that are irrelevant (Ding et al. 2016). This is limited by using AdaBoost: candidate features are compared against the image set by matching a threshold with different weights, and those with the lowest error rate are kept. This reduces the feature list significantly, by almost 60%, but it is still not enough. A further step is to eliminate non-face windows early and keep only the windows containing facial information.
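The rectangle features at the heart of a Haar cascade can be evaluated cheaply from an integral image (summed-area table). The sketch below shows one two-rectangle feature; a full cascade would combine thousands of such features, weighted and selected by AdaBoost:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: cumulative sums over rows and columns, padded
    with a zero row/column so rectangle sums need no edge cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the rectangle at (y, x) of size (h, w), read
    off the integral image with just four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """A two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

Because each feature costs only a handful of lookups regardless of its size, the detector can afford to scan every window at every scale, which is what makes the cascade fast in practice.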
LBPH face recognition: identifying faces with the LBPH algorithm follows step-by-step instructions. It uses four parameters, namely radius, neighbours, grid X, and grid Y. These are used to build a local circular binary pattern, with the grid values set to 8 (Kątek et al. 2016). The algorithm also requires training, so it needs a dataset of faces to match against. The first step in LBPH is to obtain an intermediate image from the original containing mostly the facial structure, using bilinear interpolation where sample points fall between pixels. After this step, the histograms are extracted; they contain spatial information about features such as the eyes, nose, and ears. The spatial encoding also weights the histograms from each cell separately, giving more distinguishing power to the more specific features of the face.
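The histogram-extraction step of LBPH can be sketched as follows, assuming an image of LBP codes has already been computed. Per-cell 256-bin histograms are concatenated so the spatial layout (eyes, nose, ears) is preserved in the descriptor, and a chi-square distance compares two descriptors; the function names are illustrative:

```python
import numpy as np

def lbph_descriptor(codes, grid_x=8, grid_y=8):
    """Concatenate per-cell histograms of LBP codes into one descriptor.

    `codes` is an image of LBP values in [0, 255]; it is split into
    grid_x * grid_y cells and a 256-bin histogram is taken per cell,
    so each region of the face contributes its own block of bins.
    """
    h, w = codes.shape
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = codes[gy * h // grid_y:(gy + 1) * h // grid_y,
                         gx * w // grid_x:(gx + 1) * w // grid_x]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists).astype(np.float64)

def chi_square(a, b, eps=1e-10):
    """Chi-square distance commonly used to compare LBPH descriptors."""
    return float(np.sum((a - b) ** 2 / (a + b + eps)))
```

Recognition then reduces to computing the descriptor of a probe face and returning the enrolled identity with the smallest chi-square distance; per-cell weights, when used, simply scale each cell's block of bins.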
The main change when identifying multiple faces rather than a single face is the amount of data that must be taken into account while analysing the detection patterns. To detect multiple faces, the detection algorithm has to be rerun every few frames to pick up any face that may have appeared in the meantime. While tracking a single face this is not a concern, since a different face only needs to be tracked once the current one has been lost.
An important point when approaching this outcome is determining which of the detected faces already match the correlation tracker for a face currently being tracked (Kutty & Mathai, 2017). A simplified solution is to check whether the centre point of a detected face lies within the region of an existing tracker, and whether the centre of that tracker in turn lies within the region covered by the detected face. The approach to detecting all the faces in a frame is therefore to include the following steps in the main loop:
- Update the correlation trackers and remove any trackers that are no longer considered reliable.
- Every few frames, perform the following:
- Run detection on the current frame and find all the faces.
- For each face found, check whether the centre of the detected face lies within an existing tracker, or whether that tracker's centre lies within the bounded region of the detection.
- If a matching tracker already exists, the face was detected before; otherwise, set up a new tracker for the face.
- Use the information from all the trackers to determine the bounding rectangles.
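The two-way centre-containment check in the loop above can be sketched in plain Python, with boxes represented as (x, y, w, h) tuples; the function names are illustrative, not from any tracking library:

```python
def centre(box):
    """Centre point of an (x, y, w, h) rectangle."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def contains(box, point):
    """True if `point` lies inside the (x, y, w, h) rectangle."""
    x, y, w, h = box
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def unmatched_detections(trackers, detections):
    """Return detections that match no existing tracker box.

    A detection matches a tracker when the detection's centre lies inside
    the tracker's box AND the tracker's centre lies inside the detection,
    mirroring the two-way check described above. The unmatched detections
    are the new faces for which fresh trackers should be started.
    """
    new_faces = []
    for det in detections:
        matched = any(contains(t, centre(det)) and contains(det, centre(t))
                      for t in trackers)
        if not matched:
            new_faces.append(det)
    return new_faces
```

In the full system the tracker boxes would come from the correlation trackers' current positions and the detections from the periodic detector pass.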
To match faces to existing images, a dataset must be provided for analysing the detected faces. Before training the algorithms, the dataset itself must first be defined and the faces gathered. If a pre-structured dataset already exists, most of the work is done; in this case, however, the dataset has to be defined and updated continuously (W. Lee & S.Z, 2007). This requires gathering data and quantifying it in a particular manner. The dataset maintained here will be used to match names against the attendance list and mark the presence of individuals. This is better known as enrolment, as faces are enrolled in a daily routine and the data is updated continuously. One of the most common ways to do this is with OpenCV.
- Via OpenCV and a webcam: this method is useful for on-site face recognition where there is physical access to the persons involved. The language used will be Python. The process may be performed over multiple days under varying lighting conditions, times of day, moods, and so on. A Python script detects faces through the webcam and writes the facial data to disk. Two main command-line arguments are used, namely --cascade and --output: --cascade is the path to the Haar cascade file and --output names the directory the images are written to (Yi et al. 2017). OpenCV's detector does the main work, loading the video stream and capturing the image frame by frame; each captured frame is transferred to the output directory. Face detection is then performed using the algorithms discussed before, or with the deep learning models shipped with OpenCV. OpenCV's deep learning face detector is based on the Single Shot Detector (SSD) framework with a ResNet base network, and the Caffe prototxt and weight files for it are provided for face detection in OpenCV. [Referred to Appendix 4]
Excel will be used to record the faces that are recognised: when a face is identified, the attendance system marks present those who appear in the Excel sheet, and this record can then be used for validation.
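The attendance-marking step can be sketched as follows. A CSV roster stands in here for the Excel sheet (a library such as openpyxl would handle real .xlsx files), and the column layout, with the enrolled name in the first column, is an assumption:

```python
import csv
from datetime import date

def mark_attendance(csv_path, recognised_names):
    """Append a dated present/absent column to a CSV attendance roster.

    Column 0 is assumed to hold the enrolled name; everyone whose name
    appears in `recognised_names` is marked 'P' for today, the rest 'A'.
    """
    with open(csv_path, newline='') as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    header.append(date.today().isoformat())   # one new column per day
    for row in body:
        row.append('P' if row[0] in recognised_names else 'A')
    with open(csv_path, 'w', newline='') as f:
        csv.writer(f).writerows([header] + body)
```

In the full system, `recognised_names` would be the set of identities returned by the recogniser for the current session.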
Face detection in its present form is an active area of study; many researchers, scholars, and academics are continually trying to find better solutions than the existing methods. Accurate analysis in this field involves machine learning and artificial intelligence, which require hardware improvements that are very expensive. Techniques such as Haar cascades and LBPH are efficient and inexpensive for facial recognition, but they are less accurate on unseen faces, and all of them require a prerequisite dataset to compare their data against.
The equipment used in these systems has advanced through the last decade, for example high-definition cameras that adjust lighting, exposure, and autofocus, enabling the system to capture more accurate information about the face. This study discussed the background of the work and its limitations, dealt with the advantages and disadvantages of face recognition systems, and proposed changes to help the systems improve. The data analysis section covered the Haar cascade and LBPH techniques for facial recognition and outlined how the system can be modified to detect multiple faces at the same time.
The technology of face recognition is progressing steadily, but a few problems need to be tackled soon to take it to the next step. The first is the use of better face detectors and video streamers to capture images more reliably. Another is reviewing the assumptions and formulations on which the different algorithms rest. The third is the inclusion of deep learning and artificial intelligence to achieve better recognition results over a comprehensive set of variables; advanced AI architectures can gather datasets from open-source networks and online interfaces to increase the information pool. Last but not least, the ML architecture in OpenCV should be used more fully than at present to perfect the face recognition system (sciencedirect.com, 2019).