NtKinect: Kinect V2 C++ Programming with OpenCV on Windows10

How to recognize detailed face information (HDFace) with Kinect V2


2016.08.14: created
2016.11.04: revised
This article is for NtKinect.h version 1.4 or later
This article includes topics for NtKinect.h version 1.8 or later


Recognize Detailed Face Information (HDFace)

Getting detailed face data (HDFace) for facial motion capture.

[Notice] In my environment, compiling with Visual Studio 2017 causes HDFace recognition to fail with high probability. This is probably an optimization-related bug; if you use the HDFace functions, I recommend compiling with Visual Studio 2015.


If you define USE_FACE before including NtKinect.h, the functions and variables for face recognition become available. In NtKinect version 1.4 or later, the functions and variables for detailed face recognition (HDFace) also become available.
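
For example, the top of a source file that uses HDFace looks like this:

#define USE_FACE      // must be defined before the include
#include "NtKinect.h"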

NtKinect

NtKinect's functions for Detailed Face Recognition (HDFace)

type of return value / function name / description

void setHDFace()
    version 1.4 or later.
    After calling the setSkeleton() function, call this function to recognize detailed face information (HDFace).
    Values are set in the following member variables.

    type / variable name / description
    vector<vector<CameraSpacePoint>> hdfaceVertices -- positions of face points
    vector<UINT64> hdfaceTrackingId -- skeleton trackingId corresponding to each face
    vector<pair<int,int>> hdfaceStatus -- pair of FaceModelBuilderCollectionStatus and FaceModelBuilderCaptureStatus

pair<string,string> hdfaceStatusToString(pair<int,int>)
    version 1.4 or later.
    hdfaceStatus[index] is the collection status of the data required to build a face model. Passing it to this function returns the corresponding pair of status strings.

bool setHDFaceModelFlag(bool flag = false)
    version 1.8 or later.
    Sets the internal flag that causes an individual's face model to be generated once enough data for model creation has been collected.
    The default value is false, and no individual face model is generated. If you call this function with "true" and then call setHDFace() repeatedly, an individual face model is generated at an appropriate time. An individual face model is expected to improve the precision of detailed face (HDFace) recognition.
    Since the program may become unstable, this function is treated as experimental.
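
A minimal per-frame sketch of how these calls fit together (the complete program appears later in this article):

NtKinect kinect;
while (1) {
  kinect.setRGB();       // capture the color frame
  kinect.setSkeleton();  // skeleton recognition must come first
  kinect.setHDFace();    // then HDFace data can be collected
  // kinect.hdfaceVertices, kinect.hdfaceTrackingId and
  // kinect.hdfaceStatus are now valid for this frame
  if (cv::waitKey(1) == 'q') break;
}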
NtKinect

NtKinect's member variables for Detailed Face Recognition (HDFace)

type / variable name / description

vector<vector<CameraSpacePoint>> hdfaceVertices
    version 1.4 or later.
    Positions of face points in the CameraSpace coordinate system.
    A vector<CameraSpacePoint> holds the positions of the 1347 points on one person's face.
    To handle multiple people, the type of this variable is vector<vector<CameraSpacePoint>>.

vector<UINT64> hdfaceTrackingId
    version 1.4 or later.
    vector of trackingIds.
    hdfaceTrackingId[index] corresponds to hdfaceVertices[index].

vector<pair<int,int>> hdfaceStatus
    version 1.4 or later.
    State of HDFace recognition.
    The state of HDFace recognition for one person is a pair of FaceModelBuilderCollectionStatus and FaceModelBuilderCaptureStatus, expressed as pair<int,int>. To handle multiple people, the type of this variable is vector<pair<int,int>>.
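
The three vectors are indexed consistently, one entry per recognized face. For example:

for (int i = 0; i < (int)kinect.hdfaceVertices.size(); i++) {
  UINT64 tid = kinect.hdfaceTrackingId[i];                   // owner of this face
  vector<CameraSpacePoint>& pts = kinect.hdfaceVertices[i];  // 1347 points
  pair<int,int> st = kinect.hdfaceStatus[i];                 // (collection, capture)
}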

FaceModelBuilderCollectionStatus

The value is the bitwise OR of the following states.

constant name (prefix: FaceModelBuilderCollectionStatus_)    value
Complete                  0
MoreFramesNeeded          0x1
FrontViewFramesNeeded     0x2
LeftViewsNeeded           0x4
RightViewsNeeded          0x8
TiltedUpViewsNeeded       0x10
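
Since the value is a bitmask, individual states are tested with bitwise AND. A minimal sketch:

int collection = kinect.hdfaceStatus[i].first;
if (collection == FaceModelBuilderCollectionStatus_Complete) {
  // enough data has been collected to build a face model
} else if (collection & FaceModelBuilderCollectionStatus_LeftViewsNeeded) {
  // more views of the left side of the face are needed
}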

FaceModelBuilderCaptureStatus

The value is one of the following.

constant name (prefix: FaceModelBuilderCaptureStatus_)    value
GoodFrameCapture    0
OtherViewsNeeded    1
LostFaceTrack       2
FaceTooFar          3
FaceTooNear         4
MovingTooFast       5
SystemError         6
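
The capture status can be used to guide the person in front of the sensor. A sketch (the message strings are my own):

switch (kinect.hdfaceStatus[i].second) {
case FaceModelBuilderCaptureStatus_FaceTooFar:
  cout << "move closer to the sensor" << endl; break;
case FaceModelBuilderCaptureStatus_FaceTooNear:
  cout << "move away from the sensor" << endl; break;
case FaceModelBuilderCaptureStatus_MovingTooFast:
  cout << "move more slowly" << endl; break;
default:
  break;
}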
(Caution)

In the sample by Microsoft, FaceModelBuilder is used to generate a face model in order to obtain a precise HDFace. But in my environment, calling the ProduceFaceModel() function results in an error (2016/08/14).

CComPtr<IFaceModelData> faceModelData;
CComPtr<IFaceModel> faceModel;
...
faceModelData->ProduceFaceModel(&faceModel);  // error occurred

Since detailed information can be acquired without calling the ProduceFaceModel() function, it is not used in NtKinect.

I would be grateful for any information on this subject. (2016/11/04 deleted)

We added the setHDFaceModelFlag(bool) function in NtKinect version 1.8 so that you can choose whether or not to generate an individual face model. However, since generating an individual face model may cause an exception, setting this flag to "true" should be treated as experimental. (2016/11/04 added)


How to write program

  1. Start with the Visual Studio project KinectV2_face.zip from "How to recognize human face with Kinect V2 in ColorSpace coordinate system".
  2. This project is set up as follows.

  3. Change the contents of "main.cpp" as follows.
  4. main.cpp
    #include <iostream>
    #include <sstream>
    
    #define USE_FACE
    #include "NtKinect.h"
    
    using namespace std;
    
    void putText(cv::Mat& img,string s,cv::Point p) {
      cv::putText(img, s, p, cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255,0,0), 1, CV_AA);
    // in OpenCV 3 and later, use cv::LINE_AA instead of CV_AA
    }
    
    string hexString(int n) { stringstream ss; ss << hex << n; return ss.str(); }
    
    void doJob() {
      NtKinect kinect;
      while (1) {
        kinect.setRGB();
        kinect.setSkeleton();
        for (auto person : kinect.skeleton) {
          for (auto joint : person) {
            if (joint.TrackingState == TrackingState_NotTracked) continue;
            ColorSpacePoint cp;
            kinect.coordinateMapper->MapCameraPointToColorSpace(joint.Position,&cp);
            cv::rectangle(kinect.rgbImage, cv::Rect((int)cp.X-5, (int)cp.Y-5,10,10), cv::Scalar(0,0,255),2);
          }
        }
        putText(kinect.rgbImage, "TrackingId", cv::Point(0, 30));
        putText(kinect.rgbImage, "Collection", cv::Point(0, 60));
        putText(kinect.rgbImage, "Capture", cv::Point(0, 90));
        putText(kinect.rgbImage, "Collection", cv::Point(0, 120));
        putText(kinect.rgbImage, "Capture", cv::Point(0, 150));
        kinect.setHDFace();
        for (int i=0; i<kinect.hdfaceVertices.size(); i++) {
          for (CameraSpacePoint sp : kinect.hdfaceVertices[i]) {
            ColorSpacePoint cp;
            kinect.coordinateMapper->MapCameraPointToColorSpace(sp,&cp);
            cv::rectangle(kinect.rgbImage, cv::Rect((int)cp.X-1, (int)cp.Y-1, 2, 2), cv::Scalar(0,192, 0), 1);
          }
          int x = 200 * i + 150;
          auto status = kinect.hdfaceStatus[i];
          auto statusS = kinect.hdfaceStatusToString(status);
          putText(kinect.rgbImage, hexString(kinect.hdfaceTrackingId[i]), cv::Point(x, 30));
          putText(kinect.rgbImage, hexString(status.first), cv::Point(x, 60));
          putText(kinect.rgbImage, hexString(status.second), cv::Point(x, 90));
          putText(kinect.rgbImage, statusS.first, cv::Point(x, 120));
          putText(kinect.rgbImage, statusS.second, cv::Point(x, 150));
        }
        cv::imshow("hdface", kinect.rgbImage);
        auto key = cv::waitKey(1);
        if (key == 'q') break;
      }
      cv::destroyAllWindows();
    }
    
    int main(int argc, char** argv) {
      try {
        doJob();
      } catch (exception &ex) {
        cout << ex.what() << endl;
        string s;
        cin >> s;
      }
      return 0;
    }
    
  5. When you run the program, RGB images are displayed. Press the 'q' key to exit.
  6. Recognized HDFace information is displayed on the RGB image.

    In this example, the FaceModelBuilder information collection status (CollectionStatus, CaptureStatus) is displayed at the top of the screen for each person. For now, it seems that you do not need to worry about these states (unconfirmed). (2016/11/04 deleted)




  7. Click here for this sample project: KinectV2_hdface.zip
  8. Since the above zip file may not include the latest "NtKinect.h", download the latest version from here and replace the old one with it.


[Experimental] Generate Individual Face Model (2016/11/04 added)

  1. If you use NtKinect version 1.8 or later, add the next line to "main.cpp" (shown in the program below).
  2.   kinect.setHDFaceModelFlag(true);
    

    If you call kinect.setHDFaceModelFlag(true) once, a face model will be generated when enough FaceModelBuilder information has been collected. Generating a model increases the precision of detailed face recognition (HDFace). However, because it makes the program's behavior slightly unstable, it is treated as experimental for now. (2016/11/04 added)

    main.cpp
    #include <iostream>
    #include <sstream>
    
    #define USE_FACE
    #include "NtKinect.h"
    
    using namespace std;
    
    void putText(cv::Mat& img,string s,cv::Point p) {
      cv::putText(img, s, p, cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255,0,0), 1, CV_AA);
    }
    
    string hexString(int n) { stringstream ss; ss << hex << n; return ss.str(); }
    
    void doJob() {
      NtKinect kinect;
      kinect.setHDFaceModelFlag(true);
      while (1) {
        kinect.setRGB();
        kinect.setSkeleton();
        for (auto person : kinect.skeleton) {
          for (auto joint : person) {
            if (joint.TrackingState == TrackingState_NotTracked) continue;
            ColorSpacePoint cp;
            kinect.coordinateMapper->MapCameraPointToColorSpace(joint.Position,&cp);
            cv::rectangle(kinect.rgbImage, cv::Rect((int)cp.X-5, (int)cp.Y-5,10,10), cv::Scalar(0,0,255),2);
          }
        }
        putText(kinect.rgbImage, "TrackingId", cv::Point(0, 30));
        putText(kinect.rgbImage, "Collection", cv::Point(0, 60));
        putText(kinect.rgbImage, "Capture", cv::Point(0, 90));
        putText(kinect.rgbImage, "Collection", cv::Point(0, 120));
        putText(kinect.rgbImage, "Capture", cv::Point(0, 150));
        kinect.setHDFace();
        for (int i=0; i<kinect.hdfaceVertices.size(); i++) {
          for (CameraSpacePoint sp : kinect.hdfaceVertices[i]) {
            ColorSpacePoint cp;
            kinect.coordinateMapper->MapCameraPointToColorSpace(sp,&cp);
            cv::rectangle(kinect.rgbImage, cv::Rect((int)cp.X-1, (int)cp.Y-1, 2, 2), cv::Scalar(0,192, 0), 1);
          }
          int x = 200 * i + 150;
          auto status = kinect.hdfaceStatus[i];
          auto statusS = kinect.hdfaceStatusToString(status);
          putText(kinect.rgbImage, hexString(kinect.hdfaceTrackingId[i]), cv::Point(x, 30));
          putText(kinect.rgbImage, hexString(status.first), cv::Point(x, 60));
          putText(kinect.rgbImage, hexString(status.second), cv::Point(x, 90));
          putText(kinect.rgbImage, statusS.first, cv::Point(x, 120));
          putText(kinect.rgbImage, statusS.second, cv::Point(x, 150));
        }
        cv::imshow("hdface", kinect.rgbImage);
        auto key = cv::waitKey(1);
        if (key == 'q') break;
      }
      cv::destroyAllWindows();
    }
    
    int main(int argc, char** argv) {
      try {
        doJob();
      } catch (exception &ex) {
        cout << ex.what() << endl;
        string s;
        cin >> s;
      }
      return 0;
    }
    
  3. When you run the program, RGB images are displayed. Press the 'q' key to exit.
  4. When the skeleton and detailed face information (HDFace) are recognized, face information is displayed on the RGB image.

    While your face is being recognized, move your face slowly up and down and left and right. Face information is gradually accumulated, and a model of the face is created when sufficient information has been gathered.

    Information needed to generate the face model is displayed in the "Status" area at the upper left of the screen (Right, Left, Tilt, etc.).

    Sometimes the program exits with an error while generating a face model. Do not call setHDFaceModelFlag(true) when you need stable operation.

  5. Click here for this sample project: KinectV2_hdface2.zip
  6. Since the above zip file may not include the latest "NtKinect.h", download the latest version from here and replace the old one with it.


NtKinect

Important Data in HDFace

Of the 1347 points recognized in HDFace, some important points are given names by the HighDetailFacePoints enumeration. The relevant enumerations are quoted below.

Quoted from Kinect.Face.h of Kinect for Windows SDK 2.0
typedef enum _HighDetailFacePoints HighDetailFacePoints;
enum _HighDetailFacePoints {
  HighDetailFacePoints_LefteyeInnercorner= 210,
  HighDetailFacePoints_LefteyeOutercorner= 469,
  HighDetailFacePoints_LefteyeMidtop= 241,
  HighDetailFacePoints_LefteyeMidbottom= 1104,
  HighDetailFacePoints_RighteyeInnercorner= 843,
  HighDetailFacePoints_RighteyeOutercorner= 1117,
  HighDetailFacePoints_RighteyeMidtop= 731,
  HighDetailFacePoints_RighteyeMidbottom= 1090,
  HighDetailFacePoints_LefteyebrowInner= 346,
  HighDetailFacePoints_LefteyebrowOuter= 140,
  HighDetailFacePoints_LefteyebrowCenter= 222,
  HighDetailFacePoints_RighteyebrowInner= 803,
  HighDetailFacePoints_RighteyebrowOuter= 758,
  HighDetailFacePoints_RighteyebrowCenter= 849,
  HighDetailFacePoints_MouthLeftcorner= 91,
  HighDetailFacePoints_MouthRightcorner= 687,
  HighDetailFacePoints_MouthUpperlipMidtop= 19,
  HighDetailFacePoints_MouthUpperlipMidbottom= 1072,
  HighDetailFacePoints_MouthLowerlipMidtop= 10,
  HighDetailFacePoints_MouthLowerlipMidbottom= 8,
  HighDetailFacePoints_NoseTip= 18,
  HighDetailFacePoints_NoseBottom= 14,
  HighDetailFacePoints_NoseBottomleft= 156,
  HighDetailFacePoints_NoseBottomright= 783,
  HighDetailFacePoints_NoseTop= 24,
  HighDetailFacePoints_NoseTopleft= 151,
  HighDetailFacePoints_NoseTopright= 772,
  HighDetailFacePoints_ForeheadCenter= 28,
  HighDetailFacePoints_LeftcheekCenter= 412,
  HighDetailFacePoints_RightcheekCenter= 933,
  HighDetailFacePoints_Leftcheekbone= 458,
  HighDetailFacePoints_Rightcheekbone= 674,
  HighDetailFacePoints_ChinCenter= 4,
  HighDetailFacePoints_LowerjawLeftend= 1307,
  HighDetailFacePoints_LowerjawRightend= 1327
};

typedef enum _FaceShapeAnimations FaceShapeAnimations;
enum _FaceShapeAnimations {
  FaceShapeAnimations_JawOpen= 0,
  FaceShapeAnimations_LipPucker= 1,
  FaceShapeAnimations_JawSlideRight= 2,
  FaceShapeAnimations_LipStretcherRight= 3,
  FaceShapeAnimations_LipStretcherLeft= 4,
  FaceShapeAnimations_LipCornerPullerLeft= 5,
  FaceShapeAnimations_LipCornerPullerRight= 6,
  FaceShapeAnimations_LipCornerDepressorLeft= 7,
  FaceShapeAnimations_LipCornerDepressorRight= 8,
  FaceShapeAnimations_LeftcheekPuff= 9,
  FaceShapeAnimations_RightcheekPuff= 10,
  FaceShapeAnimations_LefteyeClosed= 11,
  FaceShapeAnimations_RighteyeClosed= 12,
  FaceShapeAnimations_RighteyebrowLowerer= 13,
  FaceShapeAnimations_LefteyebrowLowerer= 14,
  FaceShapeAnimations_LowerlipDepressorLeft= 15,
  FaceShapeAnimations_LowerlipDepressorRight= 16,
  FaceShapeAnimations_Count= ( FaceShapeAnimations_LowerlipDepressorRight + 1 ) 
};

typedef enum _FaceShapeDeformations FaceShapeDeformations;
enum _FaceShapeDeformations {
  FaceShapeDeformations_PCA01= 0,
  FaceShapeDeformations_PCA02= 1,
  FaceShapeDeformations_PCA03= 2,
  FaceShapeDeformations_PCA04= 3,
  FaceShapeDeformations_PCA05= 4,
  FaceShapeDeformations_PCA06= 5,
  FaceShapeDeformations_PCA07= 6,
  FaceShapeDeformations_PCA08= 7,
  FaceShapeDeformations_PCA09= 8,
  FaceShapeDeformations_PCA10= 9,
  FaceShapeDeformations_Chin03= 10,
  FaceShapeDeformations_Forehead00= 11,
  FaceShapeDeformations_Cheeks02= 12,
  FaceShapeDeformations_Cheeks01= 13,
  FaceShapeDeformations_MouthBag01= 14,
  FaceShapeDeformations_MouthBag02= 15,
  FaceShapeDeformations_Eyes02= 16,
  FaceShapeDeformations_MouthBag03= 17,
  FaceShapeDeformations_Forehead04= 18,
  FaceShapeDeformations_Nose00= 19,
  FaceShapeDeformations_Nose01= 20,
  FaceShapeDeformations_Nose02= 21,
  FaceShapeDeformations_MouthBag06= 22,
  FaceShapeDeformations_MouthBag05= 23,
  FaceShapeDeformations_Cheeks00= 24,
  FaceShapeDeformations_Mask03= 25,
  FaceShapeDeformations_Eyes03= 26,
  FaceShapeDeformations_Nose03= 27,
  FaceShapeDeformations_Eyes08= 28,
  FaceShapeDeformations_MouthBag07= 29,
  FaceShapeDeformations_Eyes00= 30,
  FaceShapeDeformations_Nose04= 31,
  FaceShapeDeformations_Mask04= 32,
  FaceShapeDeformations_Chin04= 33,
  FaceShapeDeformations_Forehead05= 34,
  FaceShapeDeformations_Eyes06= 35,
  FaceShapeDeformations_Eyes11= 36,
  FaceShapeDeformations_Nose05= 37,
  FaceShapeDeformations_Mouth07= 38,
  FaceShapeDeformations_Cheeks08= 39,
  FaceShapeDeformations_Eyes09= 40,
  FaceShapeDeformations_Mask10= 41,
  FaceShapeDeformations_Mouth09= 42,
  FaceShapeDeformations_Nose07= 43,
  FaceShapeDeformations_Nose08= 44,
  FaceShapeDeformations_Cheeks07= 45,
  FaceShapeDeformations_Mask07= 46,
  FaceShapeDeformations_MouthBag09= 47,
  FaceShapeDeformations_Nose06= 48,
  FaceShapeDeformations_Chin02= 49,
  FaceShapeDeformations_Eyes07= 50,
  FaceShapeDeformations_Cheeks10= 51,
  FaceShapeDeformations_Rim20= 52,
  FaceShapeDeformations_Mask22= 53,
  FaceShapeDeformations_MouthBag15= 54,
  FaceShapeDeformations_Chin01= 55,
  FaceShapeDeformations_Cheeks04= 56,
  FaceShapeDeformations_Eyes17= 57,
  FaceShapeDeformations_Cheeks13= 58,
  FaceShapeDeformations_Mouth02= 59,
  FaceShapeDeformations_MouthBag12= 60,
  FaceShapeDeformations_Mask19= 61,
  FaceShapeDeformations_Mask20= 62,
  FaceShapeDeformations_Forehead06= 63,
  FaceShapeDeformations_Mouth13= 64,
  FaceShapeDeformations_Mask25= 65,
  FaceShapeDeformations_Chin05= 66,
  FaceShapeDeformations_Cheeks20= 67,
  FaceShapeDeformations_Nose09= 68,
  FaceShapeDeformations_Nose10= 69,
  FaceShapeDeformations_MouthBag27= 70,
  FaceShapeDeformations_Mouth11= 71,
  FaceShapeDeformations_Cheeks14= 72,
  FaceShapeDeformations_Eyes16= 73,
  FaceShapeDeformations_Mask29= 74,
  FaceShapeDeformations_Nose15= 75,
  FaceShapeDeformations_Cheeks11= 76,
  FaceShapeDeformations_Mouth16= 77,
  FaceShapeDeformations_Eyes19= 78,
  FaceShapeDeformations_Mouth17= 79,
  FaceShapeDeformations_MouthBag36= 80,
  FaceShapeDeformations_Mouth15= 81,
  FaceShapeDeformations_Cheeks25= 82,
  FaceShapeDeformations_Cheeks16= 83,
  FaceShapeDeformations_Cheeks18= 84,
  FaceShapeDeformations_Rim07= 85,
  FaceShapeDeformations_Nose13= 86,
  FaceShapeDeformations_Mouth18= 87,
  FaceShapeDeformations_Cheeks19= 88,
  FaceShapeDeformations_Rim21= 89,
  FaceShapeDeformations_Mouth22= 90,
  FaceShapeDeformations_Nose18= 91,
  FaceShapeDeformations_Nose16= 92,
  FaceShapeDeformations_Rim22= 93,
  FaceShapeDeformations_Count= ( FaceShapeDeformations_Rim22 + 1 ) 
};

typedef enum _FaceAlignmentQuality FaceAlignmentQuality;
enum _FaceAlignmentQuality {
  FaceAlignmentQuality_High= 0,
  FaceAlignmentQuality_Low= 1
};

typedef enum _FaceModelBuilderCollectionStatus FaceModelBuilderCollectionStatus;
enum _FaceModelBuilderCollectionStatus {
  FaceModelBuilderCollectionStatus_Complete= 0,
  FaceModelBuilderCollectionStatus_MoreFramesNeeded= 0x1,
  FaceModelBuilderCollectionStatus_FrontViewFramesNeeded= 0x2,
  FaceModelBuilderCollectionStatus_LeftViewsNeeded= 0x4,
  FaceModelBuilderCollectionStatus_RightViewsNeeded= 0x8,
  FaceModelBuilderCollectionStatus_TiltedUpViewsNeeded= 0x10
};

typedef enum _FaceModelBuilderCaptureStatus FaceModelBuilderCaptureStatus;
enum _FaceModelBuilderCaptureStatus {
  FaceModelBuilderCaptureStatus_GoodFrameCapture= 0,
  FaceModelBuilderCaptureStatus_OtherViewsNeeded= 1,
  FaceModelBuilderCaptureStatus_LostFaceTrack= 2,
  FaceModelBuilderCaptureStatus_FaceTooFar= 3,
  FaceModelBuilderCaptureStatus_FaceTooNear= 4,
  FaceModelBuilderCaptureStatus_MovingTooFast= 5,
  FaceModelBuilderCaptureStatus_SystemError= 6
};

typedef enum _FaceModelBuilderAttributes FaceModelBuilderAttributes;
enum _FaceModelBuilderAttributes {
  FaceModelBuilderAttributes_None= 0,
  FaceModelBuilderAttributes_SkinColor= 0x1,
  FaceModelBuilderAttributes_HairColor= 0x2
};
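
For example, an individual feature point can be looked up in kinect.hdfaceVertices with these indices. A minimal sketch (assuming setRGB(), setSkeleton() and setHDFace() have been called as in main.cpp above):

for (int i = 0; i < (int)kinect.hdfaceVertices.size(); i++) {
  // nose tip of person i, in CameraSpace coordinates
  CameraSpacePoint nose = kinect.hdfaceVertices[i][HighDetailFacePoints_NoseTip];
  // map it to ColorSpace to draw it on the RGB image
  ColorSpacePoint cp;
  kinect.coordinateMapper->MapCameraPointToColorSpace(nose, &cp);
  cv::circle(kinect.rgbImage, cv::Point((int)cp.X, (int)cp.Y), 5, cv::Scalar(0,0,255), 2);
}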


http://nw.tsuda.ac.jp/