IET Biometrics

Online writer identification system using adaptive sparse representation framework
Vivek Venugopal, Suresh Sundaram
Keywords: feature extraction; handwriting recognition; handwritten character recognition; image classification; image representation; learning (artificial intelligence); support vector machines; sub-stroke based feature vectors; enrolled writers; sum-pooled sparse coefficients; saliency measure; derived components; given writer; dictionary atom; adaptive sparse representation approach; adaptive sparse representation framework; online writer identification system; sparse codes; adapted saliency values
Abstract: This study explores an adaptive sparse representation approach for online writer identification. The main focus is on employing prior information that quantifies the degree of importance of a dictionary atom with respect to a given writer. This information is proposed as a fusion of two derived components. The first component is a saliency measure obtained from the sum-pooled sparse coefficients corresponding to the sub-strokes of a set of enrolled writers. The second component is a similarity score, computed for each dictionary atom with regard to a given writer, that is related to the reconstruction error of the sub-stroke-based feature vectors. The proposed identification is accomplished with an ensemble of support vector machines (SVMs), wherein the input to the SVM trained for a writer is obtained by incorporating the adapted saliency values of that writer on the document descriptor obtained via average pooling of sparse codes. Experiments performed on the IAM and IBM-UB1 online handwriting databases demonstrate the efficacy of the proposed scheme.
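The descriptor construction the abstract describes (average-pool the sparse codes over sub-strokes, then weight each dictionary atom by a writer's adapted saliency) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, array shapes, and saliency values are assumptions.

```python
import numpy as np

def document_descriptor(sparse_codes, saliency):
    """Average-pool sub-stroke sparse codes into a document descriptor,
    then re-weight each dictionary atom by a writer-specific saliency.

    sparse_codes: (n_substrokes, n_atoms) sparse coefficient matrix
    saliency:     (n_atoms,) adapted saliency values for one writer
    """
    pooled = np.abs(sparse_codes).mean(axis=0)  # average pooling over sub-strokes
    return saliency * pooled                    # atom-wise saliency weighting

# Toy usage: 4 sub-strokes coded over a hypothetical 3-atom dictionary.
desc = document_descriptor(np.ones((4, 3)), np.array([1.0, 2.0, 3.0]))
```

Each writer's SVM would then see descriptors weighted by that writer's own saliency vector, so the same document yields a different input per enrolled writer.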
Cost-effective and accurate palm vein recognition system based on multiframe super-resolution algorithms
Venance Kilian, Nassor Ally, Josiah Nombo, Abdi T. Abdalla, Baraka Maiseli
Keywords: cryptography; feature extraction; image denoising; image resolution; image restoration; medical image processing; reliability; vein recognition; noise suppression; informative palm vein patterns; authentication method; pre-processing component; image acquisition; contactless biometric identification method; palm vein recognition system; low-resolution imaging devices; query imaging; multiframe super-resolution algorithms; low-resolution imaging; PVR system; high-resolution imaging; sophisticated acquisition devices
Abstract: Palm vein recognition (PVR) refers to a contactless biometric identification method that uses palm vein patterns to confirm a person's identity. Compared with other methods, PVR has received wide attention because it provides more secure results. The veins, being located inside the human body, make PVR robust against tampering and changes in the morphology of body features. Most PVR systems integrate four stages: image acquisition, pre-processing, feature extraction, and decision. The first two stages determine the accuracy of the final identification results. Focusing on the pre-processing component, we discovered that the available approaches fail to generate more informative vein patterns by simultaneously suppressing noise and blur and recovering semantically useful features (edges, contours, and lines) from the acquired images. This weakness calls for sophisticated acquisition devices that make PVR systems costly. In this work, we propose multiframe super-resolution (MSR) as a pre-processing stage to improve the performance of traditional PVR systems. MSR exploits information from multiple images of the same scene to reconstruct a high-resolution image. This technique opens the possibility of using inexpensive low-resolution imaging devices in place of the sophisticated acquisition hardware demanded by traditional PVR systems. Experiments show that our method outperforms most classical methods.
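The core multiframe idea (combine several shifted low-resolution frames of the same scene onto a finer grid) can be illustrated with a naive shift-and-add sketch. It assumes the inter-frame shifts are already known in integer high-resolution pixel units; real MSR pipelines additionally estimate sub-pixel registration and perform deblurring and denoising.

```python
import numpy as np

def msr_shift_and_add(frames, shifts, scale=2):
    """Naive multiframe super-resolution: place each low-resolution frame
    onto an upsampled grid according to its known shift, then average
    overlapping contributions.

    frames: list of (H, W) arrays; shifts: list of (dy, dx) in HR pixels.
    """
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))   # accumulated intensities
    cnt = np.zeros_like(acc)                 # contribution counts per HR pixel
    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(H):
            for x in range(W):
                yy, xx = y * scale + dy, x * scale + dx
                if 0 <= yy < H * scale and 0 <= xx < W * scale:
                    acc[yy, xx] += frame[y, x]
                    cnt[yy, xx] += 1
    cnt[cnt == 0] = 1                        # avoid division by zero in gaps
    return acc / cnt
```

With four frames whose shifts cover all four sub-pixel offsets at scale 2, every high-resolution pixel receives exactly one observation.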
Deep learning for face recognition on mobile devices
Belén Ríos-Sánchez, David Costa-da Silva, Natalia Martín-Yuste, Carmen Sánchez-Ávila
Keywords: face recognition; Java; learning (artificial intelligence); mobile computing; deep learning solutions; facial features; capturing conditions; great variability; mobile phone; face detection stages; template matching; publicly available models; mobile scenarios; private datasets; public datasets; low capacity devices; small-size deep-learning model; mobile devices; training stage; automatic face recognition; interesting choice
Abstract: Mobility implies a great variability of capturing conditions, which is not easy to control and directly affects face detection and the extraction of facial features. Deep learning solutions seem to be the most interesting choice for automatic face recognition, but they are highly dependent on the model generated during the training stage. In addition, the size of the models makes their integration into applications oriented to mobile devices difficult, particularly when the model must be embedded. In this work, a small-size deep-learning model was trained for face recognition on low-capacity devices and evaluated in terms of accuracy, size, and timings to provide quantitative data. This evaluation is aimed to cover as many scenarios as possible, so different databases were employed, including public and private datasets specifically oriented to recreate the complexity of mobile scenarios. Publicly available models and traditional approaches were also included in the evaluation to carry out a fair comparison. Moreover, given the relevance of the template matching and face detection stages, the assessment is complemented with different classifiers and detectors. Finally, a Java-Android implementation of the system was developed and evaluated to obtain performance data of the whole system integrated on a mobile phone.
3D face mask presentation attack detection based on intrinsic image analysis
Lei Li, Zhaoqiang Xia, Xiaoyue Jiang, Yupeng Ma, Fabio Roli, Xiaoyi Feng
Keywords: face recognition; feature extraction; image texture; neural nets; reflectance image; face mask; intrinsic image analysis; face presentation attacks; face recognition systems; image reflectance; face image; intrinsic image decomposition algorithm
Abstract: Face presentation attacks have become a major threat against face recognition systems, and many countermeasures have been proposed over the past decade. However, most of them are devoted to 2D face presentation attack detection rather than 3D face masks. Unlike a real face, a 3D face mask is usually made of resin materials and has a smooth surface, resulting in reflectance differences. Therefore, in this study, the authors propose a novel 3D face mask presentation attack detection method based on analysis of image reflectance. In the proposed method, the face image is first processed with an intrinsic image decomposition algorithm to compute its reflectance image. Then, intensity distribution histograms are extracted from three orthogonal planes to represent the intensity differences of reflectance images between a real face and a 3D face mask. After that, given that the reflectance image of a smooth surface is more sensitive to illumination changes, a 1D convolutional neural network is used to characterise how different materials or surfaces react to illumination changes. Extensive experiments with the publicly available 3DMAD database demonstrate the effectiveness of the proposed method for distinguishing a face mask from a real face and show that its detection performance outperforms other state-of-the-art methods.
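The reflectance step can be illustrated with a crude Retinex-style stand-in for intrinsic image decomposition: treat a smoothed log-image as shading and the residual as reflectance, then summarise it with an intensity histogram. The paper's actual decomposition algorithm and its three-orthogonal-planes histograms are not reproduced here; everything below is an illustrative assumption.

```python
import numpy as np

def reflectance_histogram(img, bins=32):
    """Retinex-style reflectance estimate plus intensity histogram.

    shading     ~ 3x3 box-blurred log-image (edge-replicated padding)
    reflectance ~ log-image minus shading estimate
    Returns a normalised histogram of reflectance intensities.
    """
    H, W = img.shape
    log_img = np.log1p(img.astype(float))
    pad = np.pad(log_img, 1, mode="edge")
    shading = sum(pad[dy:dy + H, dx:dx + W]
                  for dy in range(3) for dx in range(3)) / 9.0
    reflectance = log_img - shading
    hist, _ = np.histogram(reflectance, bins=bins)
    return hist / hist.sum()
```

A perfectly flat surface yields a reflectance estimate concentrated in a single histogram bin; textured skin spreads mass across bins, which is the kind of difference the paper's 1D CNN learns to classify.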
Weighted quasi-arithmetic mean based score level fusion for multi-biometric systems
Herbadji Abderrahmane, Guermat Noubeil, Ziet Lahcene, Zahid Akhtar, Dipankar Dasgupta
Keywords: biometrics (access control); sensor fusion; statistical analysis; support vector machines; NIST-BSSR1 Face; WQAM fusion algorithm; score-level fusion; high-security scenarios; mobile user authentication; multibiometric systems; weighted quasi-arithmetic mean based score level fusion; score fusion rules; multi-algorithm systems; NIST-BSSR1 Fingerprint; NIST-BSSR1 Multimodal
Abstract: Biometrics is now principally employed in many daily applications, ranging from border crossing to mobile user authentication. In high-security scenarios, biometric systems must meet stringent accuracy and performance criteria. Towards this aim, multi-biometric systems that fuse evidence from multiple biometric sources have been shown to diminish error rates and alleviate the inherent frailties of individual biometric systems. In this article, a novel scheme for score-level fusion based on the weighted quasi-arithmetic mean (WQAM) is proposed. Specifically, WQAMs are estimated via different trigonometric functions. The proposed fusion scheme encompasses properties of both the weighted mean and the quasi-arithmetic mean. Moreover, it does not require any learning process. Experimental results on three publicly available data sets (i.e. NIST-BSSR1 Multimodal, NIST-BSSR1 Fingerprint, and NIST-BSSR1 Face) for multi-modal, multi-unit, and multi-algorithm systems show that the presented WQAM fusion algorithm outperforms previously proposed score fusion rules based on transformation (e.g. t-norms), classification (e.g. support vector machines), and density estimation (e.g. likelihood ratio) methods.