Amarjot Singh, Devendra Patil, G Meghana Reddy, SN Omkar (2017)
Disguised Face Identification (DFI) with Facial KeyPoints using Spatial Fusion Convolutional Network


This paper introduces a deep learning framework for disguised face identification, where the face may have been altered with physical disguises such as a wig, a changed hairstyle or hair color, eyeglasses, or a removed or grown beard.




  • The framework first uses a convolutional network, termed the Spatial Fusion deep convolutional network, to detect 14 facial key-points that were identified as essential for face identification.
  • The facial key-point detection problem is formulated as a regression problem that can be modeled by the Spatial Fusion Convolutional Network. The CNN takes an image and outputs the pixel coordinates of each key-point.
  • The extracted points are connected to form a star-net structure as shown in the figure below.
  • The orientations of the connected points are used by a classification framework for face identification. In particular, the angles shown in Fig. 4 are used as the features for face classification.
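As a rough illustration of turning connected key-points into angle features, the sketch below computes the orientation of the line joining two points. The specific points and star-net connections here are assumed for illustration; the actual 14 points and their connections are defined in the paper's figures.

```python
import math

def connection_angle(p1, p2):
    """Orientation (in degrees) of the line joining two key-points,
    given as (x, y) pixel coordinates."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

# Hypothetical key-point positions (not from the paper's annotations).
eye_centre = (60.0, 40.0)
nose_tip = (64.0, 70.0)

# One angle feature for the classifier; the full feature vector would
# collect such angles over all star-net connections.
feature = connection_angle(eye_centre, nose_tip)
```

Collecting this angle over every connection in the star-net structure yields the feature vector used for classification.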


Training details

  • The ground truth labels are heat-maps synthesized for each key-point separately by placing a Gaussian with fixed variance at the ground truth key-point position (x, y).
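A minimal NumPy sketch of this heat-map synthesis, placing a 2-D Gaussian at the key-point position. The sigma value here is an assumption; the paper only states that the variance is fixed.

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=1.5):
    """Synthesize a ground-truth heat-map: a 2-D Gaussian with fixed
    variance centred at the key-point position (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

# One heat-map per key-point; the peak (value 1.0) sits at the
# annotated key-point pixel and decays smoothly around it.
hm = gaussian_heatmap(64, 64, cx=30, cy=20)
```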

  • The CNN predicts heat-maps for the key-points, and an L2 loss which penalizes the squared pixel-wise differences between the predicted and the ground truth heat-map is used for training.
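The L2 loss above reduces to a squared pixel-wise difference between the two heat-maps, as in this sketch (shown with NumPy for clarity rather than a specific training framework):

```python
import numpy as np

def l2_heatmap_loss(pred, gt):
    """Mean squared pixel-wise difference between the predicted and
    ground-truth heat-maps (averaged over all pixels/channels)."""
    return float(np.mean((pred - gt) ** 2))
```

During training this loss is summed over the key-point channels and minimized by backpropagation through the network.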

Datasets used

  • This paper proposes two face disguise (FG) datasets of 2000 images each, with (i) simple and (ii) complex backgrounds, containing people in varied disguises under varied illumination. These are shown in Fig. 2.