This specification describes support for accessing a 3D camera for face tracking and recognition on the Web.
This document was published by the Crosswalk Project as an API Draft. If you wish to make comments regarding this document, please send them to crosswalk-dev@lists.crosswalk-project.org. All comments are welcome.
The APIs described in this document are exposed through the realsense.Face module.
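For instance, a page can feature-detect the module before using it. This is a minimal sketch; the realsense global is only present on platforms that ship this extension:

if (typeof realsense !== 'undefined' && realsense.Face) {
  console.log('realsense.Face module is available.');
} else {
  console.log('realsense.Face module is not available on this platform.');
}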
FaceModule
The FaceModule
interface provides methods to
track and recognize faces for augmented reality applications.
The MediaStream (described in [[!GETUSERMEDIA]]) passed to the constructor
must have at least one video track; otherwise an exception will be thrown.
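A minimal sketch of constructing the module from a getUserMedia stream; per the requirement above, the constructor is expected to throw when the stream carries no video track:

navigator.mediaDevices.getUserMedia({video: true})
  .then(function(stream) {
    try {
      var ft = new realsense.Face.FaceModule(stream);
      console.log('FaceModule created.');
    } catch (e) {
      // Thrown, for example, when the stream has no video track.
      console.log('FaceModule construction failed: ' + e.message);
    }
  });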
Start running the face module on the previewStream
with the current configuration.
This method returns a promise.
The promise will be fulfilled if there are no errors.
The promise will be rejected with the DOMException
object if there is a failure.
Note: Call this method only after the ready event has fired;
otherwise an ErrorEvent will be dispatched.
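For example, a sketch where ft is a FaceModule instance:

ft.onready = function() {
  // Start only after the ready event; starting earlier produces an ErrorEvent.
  ft.start().then(function() {
    console.log('Face module started.');
  }, function(e) {
    console.log('start() failed: ' + e.message);
  });
};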
Stop the face module and reset the face configuration to its defaults.
This method returns a promise.
The promise will be fulfilled if there are no errors.
The promise will be rejected with the DOMException
object if there is a failure.
Get a processed sample, including the resulting face data along with the processed color/depth images (optional).
This method returns a promise.
The promise will be fulfilled with a ProcessedSample
combining the processed color/depth images (only if requested and available)
and the face module tracking/recognition output data if there are no errors.
The promise will be rejected with the DOMException
object if there is a failure.
The flag indicating whether to acquire the color image data. The default value is false.
The flag indicating whether to acquire the depth image data. The default value is false.
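For example, to request the processed color image along with the face data (a sketch; the color field name on ProcessedSample is an assumption here):

ft.getProcessedSample(true, false).then(function(sample) {
  console.log('Faces detected: ' + sample.faces.length);
  if (sample.color) {
    // Assumed field: the processed color image as an Image structure.
    console.log('Color image: ' + sample.color.width + 'x' + sample.color.height);
  }
}, function(e) {
  console.log('getProcessedSample failed: ' + e.message);
});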
The interface to configure FaceModule.
The interface to access the face recognition feature.
The MediaStream instance passed to the constructor.
A property used to set the EventHandler (described in [[!HTML]])
for the Event
that is dispatched
to FaceModule
to indicate that it's ready to start
because the previewStream
has been started.
A property used to set the EventHandler (described in [[!HTML]])
for the Event
that is dispatched
to FaceModule
to indicate that the previewStream
has ended
and FaceModule
has already detached from it completely.
A property used to set the EventHandler (described in [[!HTML]])
for the ErrorEvent
that is dispatched
to FaceModule
when there is an error.
A property used to set the EventHandler (described in [[!HTML]])
for the Event
that is dispatched
to FaceModule
when a new processed sample is ready.
A property used to set the EventHandler (described in [[!HTML]])
for the AlertEvent
that is dispatched
to FaceModule
when an alert occurs.
AlertEvent interface
The label of the alert event.
The timestamp when the event occurred, in units of 100ns.
The identifier of the relevant face, if relevant and known.
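A sketch of an alert handler; the attribute names typeLabel, timeStamp and faceId mirror the descriptions above but are assumptions:

ft.onalert = function(e) {
  // e.typeLabel is assumed to hold one of the AlertType values.
  console.log('Alert ' + e.typeLabel + ' at ' + e.timeStamp +
      ' (100ns units), face id: ' + e.faceId);
};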
Recognition interface
The Recognition
interface provides methods to
access the face recognition feature.
Register a detected face in the recognition database.
This method returns a promise.
The promise will be fulfilled with the user identifier
registered in the recognition database if there are no errors.
The promise will be rejected with the DOMException
object if there is a failure.
The face ID, which can be obtained from the detected face data (FaceData).
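For example (a sketch; face stands for one entry of processedSample.faces, its faceId field name is an assumption, and the Recognition instance is assumed to be reachable as ft.recognition by analogy with ft.configuration):

ft.recognition.registerUserByFaceID(face.faceId).then(function(userId) {
  console.log('Face registered as user ' + userId);
}, function(e) {
  console.log('registerUserByFaceID failed: ' + e.message);
});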
Unregister a user from the recognition database.
This method returns a promise.
The promise will be fulfilled if there are no errors.
The promise will be rejected with the DOMException
object if there is a failure.
The user identifier in the recognition database; it can be obtained
from the face recognition data (RecognitionData)
or from the return value of registerUserByFaceID.
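And the reverse operation (a sketch; the method name unregisterUserByID is an assumption based on this description):

// userId as previously returned by registerUserByFaceID.
ft.recognition.unregisterUserByID(userId).then(function() {
  console.log('User ' + userId + ' unregistered.');
}, function(e) {
  console.log('unregisterUserByID failed: ' + e.message);
});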
FaceConfiguration interface
The FaceConfiguration interface provides methods to configure FaceModule.
Set configuration values.
This method returns a promise.
The promise will be fulfilled if there are no errors.
The promise will be rejected with the DOMException
object if there is a failure.
The face configuration to be applied.
Note: Some configuration items, such as TrackingModeType,
won't take effect while the face module is running.
To change them, stop the face module first.
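For instance, to change the tracking mode (a sketch; the mode field name and the 'color-depth' value are assumptions based on the TrackingModeType enum below):

ft.stop().then(function() {
  // TrackingModeType changes only take effect while the module is stopped.
  return ft.configuration.set({mode: 'color-depth'});
}).then(function() {
  console.log('Tracking mode set; call start() to resume.');
}, function(e) {
  console.log('Failed to change tracking mode: ' + e.message);
});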
Get the default configuration values.
This method returns a promise.
The promise will be fulfilled with the default face configuration
if there are no errors.
The promise will be rejected with the DOMException
object if there is a failure.
Get the current effective configuration values.
This method returns a promise.
The promise will be fulfilled with current effective face configuration
if there are no errors.
The promise will be rejected with the DOMException
object if there is a failure.
Image
Rect
Point3DFloat
Point2DFloat
AlertConfiguration
DetectionConfiguration
LandmarksConfiguration
RecognitionConfiguration
FaceConfigurationData
The structure describing the alert enable/disable status.
The structure describing the face detection configuration parameters.
The structure describing the face landmarks configuration parameters.
The structure describing the face recognition configuration parameters.
DetectionData
LandmarkPoint
LandmarksData
RecognitionData
FaceData
ProcessedSample
TrackingModeType enum
Require color data at the module input to run face algorithms.
Require color and depth data at the module input to run face algorithms.
TrackingStrategyType enum
Track faces based on their appearance in the scene.
Track faces from the closest to the furthest.
Track faces from the furthest to the closest.
Track faces from left to right.
Track faces from right to left.
AlertType enum
A new face enters the field of view and its position and bounding rectangle are available.
A tracked face has moved out of the field of view (even slightly).
A tracked face is fully back in the field of view.
A face is occluded by an object or a hand (even slightly).
A face is no longer occluded by any object or hand.
A face could not be detected for too long and will be ignored.
PixelFormat enum
The 32-bit RGB32 color format.
The depth map data in 16-bit unsigned integer.
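A sketch of branching on an image's pixel format; the format field name and the 'rgb32'/'depth' enum strings are assumptions:

function describeImage(image) {
  if (image.format === 'rgb32') {
    // 32-bit RGB32 color data: 4 bytes per pixel.
    console.log('Color image, ' + image.width + 'x' + image.height);
  } else if (image.format === 'depth') {
    // 16-bit unsigned depth map: 2 bytes per pixel.
    console.log('Depth image, ' + image.width + 'x' + image.height);
  }
}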
LandmarkType enum
Unspecified.
The center of the right eye.
The center of the left eye.
The right eye lid top.
The right eye lid bottom.
The right eye lid right.
The right eye lid left.
The left eye lid top.
The left eye lid bottom.
The left eye lid right.
The left eye lid left.
The right eye brow center.
The right eye brow right.
The right eye brow left.
The left eye brow center.
The left eye brow right.
The left eye brow left.
The topmost point of the nose in the Z dimension.
The nose top.
The nose bottom.
The nose right.
The nose left.
The lip right.
The lip left.
The lip center.
The lip upper right.
The lip upper left.
The lip lower center.
The lip lower right.
The lip lower left.
The face border right.
The face border left.
The bottom chin point.
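A sketch of looking up one landmark type on a detected face; the landmarks.points structure, the type field and the 'nose-tip' value are assumptions based on the dictionaries above:

function findLandmark(face, wantedType) {
  if (!face.landmarks || !face.landmarks.points) return null;
  var points = face.landmarks.points;
  for (var i = 0; i < points.length; ++i) {
    if (points[i].type === wantedType) return points[i];
  }
  return null;
}
// face: one entry of processedSample.faces.
var noseTip = findLandmark(face, 'nose-tip');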
var previewStream;
var ft;
// getUserMedia constraints; request a video track for the camera preview.
var constraints = {video: true};
var startButton = document.getElementById('startButton');
var stopButton = document.getElementById('stopButton');
function errorCallback(error) {
console.log('getUserMedia failed: ' + error);
}
// Start the stream first, then start the face module.
startButton.onclick = function(e) {
navigator.mediaDevices.getUserMedia(constraints)
.then(function(stream) {
// Wire the media stream into a <video> element for preview.
previewStream = stream;
var previewVideo = document.querySelector('#previewVideo');
previewVideo.srcObject = stream;
previewVideo.play();
try {
ft = new realsense.Face.FaceModule(stream);
} catch (e) {
console.log('Failed to create face module: ' + e.message);
return;
}
ft.onready = function(e) {
console.log('Face module ready to start');
// The stream is ready; we can start the face module now.
ft.start().then(
function() {
console.log('Face module start succeeds');},
function(e) {
console.log('Face module start failed: ' + e.message);});
};
ft.onprocessedsample = function(e) {
console.log('Got face module processedsample event.');
ft.getProcessedSample(false, false).then(function(processedSample) {
console.log('Got face module processedsample data.');
// Use the processedSample.faces data as needed.
console.log('Detected faces number: ' + processedSample.faces.length);
// You can get all available detection/landmarks/recognition data
// of every face from processedSample.faces.
// Please refer to the FaceData interface.
}, function(e) {
console.log('Failed to get processed sample: ' + e.message);});
};
ft.onerror = function(e) {
console.log('Got face module error event: ' + e.message);
};
ft.onended = function(e) {
console.log('Face module ended without an explicit stop');
};
}, errorCallback);
};
function stopPreviewStream() {
if (previewStream) {
previewStream.getTracks().forEach(function(track) {
track.stop();
});
if (ft) {
// Remove listeners as we don't care about the events.
ft.onerror = null;
ft.onprocessedsample = null;
ft = null;
}
}
previewStream = null;
}
// Stop face module and stream.
stopButton.onclick = function(e) {
if (!ft) return;
ft.stop().then(
function() {
console.log('Face module stop succeeds');
stopPreviewStream();},
function(e) {
console.log('Face module stop failed');
stopPreviewStream();});
};
var setConfButton = document.getElementById('setConfButton');
var getConfButton = document.getElementById('getConfButton');
var getDefaultConfButton = document.getElementById('getDefaultConfButton');
// Set configuration. A simple configuration example is shown below.
// Please refer to the FaceConfigurationData interface for confData details.
var confData = {
// Set face tracking strategy.
strategy: 'right-left',
// Disable landmarks.
landmarks: {
enable: false
},
// Enable recognition.
recognition: {
enable: true
}
};
setConfButton.onclick = function(e) {
ft.configuration.set(confData).then(
function() {
console.log('set configuration succeeds');},
function(e) {
console.log(e.message);});
};
// Get current configuration.
getConfButton.onclick = function(e) {
ft.configuration.get().then(
function(confData) {
// Use the confData values as needed.
console.log('get current configuration succeeds');},
function(e) {
console.log('get configuration failed: ' + e.message);});
};
// Get default configuration.
getDefaultConfButton.onclick = function(e) {
ft.configuration.getDefaults().then(
function(confData) {
// Use the confData values as needed.
console.log('get default configuration succeeds');},
function(e) {
console.log('get default configuration failed: ' + e.message);});
};