Android SDK Configuration Options

Recognizer

// Cloud user password
Settings.userPassword() : String
 
// Cloud user identifier
Settings.userIdentifier() : String
 
// Cloud user directory
Settings.userDirectory() : String
 
// Cloud user directory for similar mode
Settings.userSimilarDirectory() : String
 
// User source to be reported to the cloud
Settings.userSource() : String
 
// User site to be reported to the cloud
Settings.userSite() : String
 
// Cloud environment
Settings.cloudEnvironment() : CloudEnvironment
 
// Whether to enable detection of an identity, which matches against the existing database of people (identities).
faceRecognizerDetectIdentity(): Boolean
 
// Whether to enable the detection of gender information.
faceRecognizerDetectGender(): Boolean
 
// Whether to enable the detection of age information.
faceRecognizerDetectAge(): Boolean
 
// Whether to enable occlusion detection during recognition.
faceRecognizerDetectOcclusion(): Boolean
 
// Whether to enable mask detection during recognition.
faceRecognizerDetectMask(): Boolean
 
// Whether to enable the detection of sentiment information.
faceRecognizerDetectSentiment(): Boolean
 
// The minimum size of faces to detect. This value is applied after searching the image.
faceRecognizerMinimumFaceSize(): Int
 
// The minimum resolution that a recognition candidate image must have in order to allow the insertion of the candidate image into the Cloud database.
faceRecognizerIdentificationMinimumFaceSize(): Int
 
// The amount of time that the no-smile expression should last
faceRecognizerNoSmileActionDuration(): Double
 
// The amount of time that the smile expression should last
faceRecognizerSmileActionDuration(): Double
 
// When smile action recognition is enabled, this property boosts the level of face difference tolerated when confirming identity via a face with a smiling expression.
// Smile action recognition must first recognize the identity at the level of difference defined by recognizer.identity-recognition-threshold + recognizer.identity-proximity-threshold-allowance (e.g. 0.54 + 0.0 = 0.54, which is a confident match).
// The face must be non-smiling (serious) at this beginning stage for the specified recognizer.smile-pre-delay duration.
// After the initial repeated recognition of the non-smiling face for the specified amount of time, recognition is repeated once the face transitions into a smiling expression.
// While in the smiling expression, identity recognition is accepted at the level of face difference recognizer.identity-recognition-threshold + recognizer.identity-proximity-threshold-allowance + recognizer.smile-identity-threshold-boost (e.g. 0.54 + 0.0 + 0.13 = 0.67, which is a close match but not certain).
// Thus, a greater level of face difference is tolerated for the smiling expression. If this value is set to 0.0, the smiling face must be recognized at the same level of strictness as the initial serious-expression face in order for the smile action (mapped to the smileToActivate actionId event) to be activated.
faceRecognizerSmileActionIdentityRecognitionThresholdBoost(): Double
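
For orientation, the arithmetic described above can be sketched as follows. This is an illustration only; the function and parameter names are hypothetical, and only the threshold sums come from the description above.

// Illustrative sketch of the two acceptance thresholds used by smile action recognition.
fun smileActionThresholds(
    recognitionThreshold: Double,   // recognizer.identity-recognition-threshold
    proximityAllowance: Double,     // recognizer.identity-proximity-threshold-allowance
    smileBoost: Double              // recognizer.smile-identity-threshold-boost
): Pair<Double, Double> {
    val seriousStageThreshold = recognitionThreshold + proximityAllowance   // e.g. 0.54 + 0.0 = 0.54
    val smilingStageThreshold = seriousStageThreshold + smileBoost          // e.g. 0.54 + 0.13 = 0.67
    return seriousStageThreshold to smilingStageThreshold
}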
 
// The identity recognition threshold proximity allowance defines the level of difference between faces, in addition to what is specified in recognizer.identity-recognition-threshold, for which matches will be reported.
// Possible value range is from 0.0 to 4.0.
// If set to 0.0, no face difference beyond what is specified in recognizer.identity-recognition-threshold will be reported on.
// This means that only 100% (and higher confidence) matches will be reported.
// When set to a value greater than 0, the system will report matches below 100%, up to the allowance provided.
// For example, if recognizer.identity-recognition-threshold = 0.54 and recognizer.identity-proximity-threshold-allowance = 0.38, the system will report matches with faces ranging in level of difference from 0 to 0.92 (0.54 + 0.38). Thus SAFR will report on all faces that have some similarity (see the table below on how to interpret the face difference level). A 100% similarity score will be given to faces with a difference level of 0.54.
//
// Guidelines on interpreting face difference level are as follows:
// 0.00          - identical face image match
// (0.00, 0.30]  - extremely confident match (e.g. high security area entry under controlled environmental conditions)
// (0.30, 0.45]  - very confident match (e.g. unlock door for un-monitored facility)
// (0.45, 0.54]  - confident match (e.g. unlock door for monitored facility)
// (0.54, 0.67]  - close match but not certain
// (0.67, 0.84]  - possible match with low confidence
// (0.84, 0.92]  - similar face with no confidence of match
// (0.92, 4.00]  - face without significant similarity
//
// Similarity score (% match) is computed based on difference level as follows:
// similarityScore = (2 - SQRT(faceDifferenceLevel)) / (2 - SQRT(recognizer.identity-recognition-threshold))
faceRecognizerIdentityRecognitionThresholdProximityAllowance(): Double
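
The similarity-score formula above translates directly into code. A minimal sketch, assuming a plain helper function that is not part of the SDK API:

import kotlin.math.sqrt

// Illustrative implementation of the similarity-score formula quoted above:
// similarityScore = (2 - sqrt(faceDifferenceLevel)) / (2 - sqrt(identityRecognitionThreshold))
fun similarityScore(faceDifferenceLevel: Double, identityRecognitionThreshold: Double): Double =
    (2 - sqrt(faceDifferenceLevel)) / (2 - sqrt(identityRecognitionThreshold))

// With the example values above, a face difference level of 0.54 against a threshold of 0.54 yields 1.0, i.e. a 100% similarity score.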
 
// Enables the smile threshold values
faceRecognizerSmileActionSmileTransitionThresholdsEnabled(): Boolean
 
// The threshold at which there is no smile
faceRecognizerSmileActionNoSmileThreshold(): Double
 
// The threshold at which there is a smile
faceRecognizerSmileActionSmileThreshold(): Double
 
// The identity recognition threshold represents the maximum level of difference (distance) between two faces for them to be considered a certain identity match (100% match).
// Possible range of values is between 0.0 and 4.0.
// A value of 0.0 means that for two faces to match at 100%, the images representing them would need to be identical in every pixel.
// A value of 4.0 means that any two images are considered a certain match. The default value of 0.54 is calibrated to reflect match certainty appropriate for secure access use cases.
//
// Guidelines on the meaning of the value of this threshold are as follows:
// 0.00          - identical face image match
// (0.00, 0.30]  - extremely confident match (e.g. high security area entry under controlled environmental conditions)
// (0.30, 0.45]  - very confident match (e.g. unlock door for un-monitored facility)
// (0.45, 0.54]  - confident match (e.g. unlock door for monitored facility)
// (0.54, 0.67]  - close match but not certain
// (0.67, 0.84]  - possible match with low confidence
// (0.84, 0.92]  - similar face with no confidence of match
// (0.92, 4.00]  - face without significant similarity
faceRecognizerIdentityRecognitionThresholdCamera(): Float
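
The guideline bands above can be mapped onto labels in code. A minimal sketch; the function is illustrative and not part of the SDK API:

// Illustrative mapping of a face difference level onto the guideline bands above.
fun describeFaceDifference(difference: Double): String = when {
    difference <= 0.0 -> "identical face image match"
    difference <= 0.30 -> "extremely confident match"
    difference <= 0.45 -> "very confident match"
    difference <= 0.54 -> "confident match"
    difference <= 0.67 -> "close match but not certain"
    difference <= 0.84 -> "possible match with low confidence"
    difference <= 0.92 -> "similar face with no confidence of match"
    else -> "face without significant similarity"
}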
 
// The identity recognition threshold for similar mode.
faceRecognizerIdentityRecognitionThresholdSimilar(): Float
 
// Valid values are in the range of 0.0 - 1.0. This is the maximum occlusion value that is allowed when inserting new recognition candidate images into the Cloud database. If the face is occluded with a value greater than this, the face will not be added; if the occlusion is less than or equal to this value, the face will be added.
faceRecognizerMaxOcclusion(): Double
 
// Mask detection confidence at which the system assumes the presence of a mask. Possible value range is between 0.0 and 1.0. Increasing the value decreases the number of false positive mask detections at the expense of fewer true positive mask detections. Decreasing the value increases the number of true positive mask detections at the expense of more false positive mask detections.
faceRecognizerMaskThreshold(): Double
 
// Mask detection model. Valid values are "sensitive", "precise", and "normal".
faceRecognizerMaskModel(): String
 
// The maximum clip ratio on either side that a recognition candidate may have
faceRecognizerClippingTolerance(): Double
 
// The maximum clip ratio on either side that an insertion candidate may have
faceRecognizerIdentificationClippingTolerance(): Double
 
// This offset adjusts recognizer.identity-recognition-threshold as applied to masked faces.
// Thus, the definition of a 100% identity match can be customized for masked faces. Increasing this offset loosens the 100% identity match definition, allowing more true positives under the 100% match criteria at the expense of more false positives. Decreasing this offset makes the 100% match for masked faces more strict, reducing the number of false positives at the expense of fewer true positives.
// For example, if recognizer.identity-recognition-threshold = 0.54 and recognizer.identity-masked-threshold-offset = -0.09, a 100% identity match for faces without masks will be declared at a face difference level of 0.54 (a confident match), while a 100% identity match for masked faces will be declared at a face difference level of 0.45 (a very confident match). The 0.45 face difference level is arrived at by adding recognizer.identity-masked-threshold-offset to recognizer.identity-recognition-threshold; for the above example, 0.54 - 0.09 = 0.45.
// Note that for this offset to be applied, a mask must be detected on the face. That is only possible if mask detection is enabled via the recognizer.detectMask = true property.
faceRecognizerIdentityMaskedThresholdOffset(): Double
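
A worked sketch of the offset arithmetic described above; the function name is hypothetical, and only the addition comes from the text:

// Illustrative sketch: the effective 100%-match threshold for masked faces is the base
// recognition threshold plus the (typically negative) masked-threshold offset.
fun maskedRecognitionThreshold(
    identityRecognitionThreshold: Double,   // recognizer.identity-recognition-threshold
    maskedThresholdOffset: Double           // recognizer.identity-masked-threshold-offset
): Double = identityRecognitionThreshold + maskedThresholdOffset

// Example from the text: 0.54 + (-0.09) = 0.45 (a very confident match).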
 
// The maximum number of similar people to fetch
faceRecognizerSimilarLimit(): Int
 
// The minimum CPQ that a recognition candidate must have in order to allow the insertion of the candidate image into the Cloud database.
faceRecognizerIdentificationMinimumCenterPoseQuality(): Double
 
// The minimum FSQ that a recognition candidate must have in order to allow the insertion of the candidate image into the Cloud database
faceRecognizerIdentificationMinimumFaceSharpnessQuality(): Double
 
// The minimum FCQ that a recognition candidate must have in order to allow the insertion of the candidate image into the Cloud database
faceRecognizerIdentificationMinimumFaceContrastQuality(): Double
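
Taken together, the identification settings (minimum face size, maximum occlusion, clipping tolerance, and the three quality minimums) gate whether a recognition candidate may be inserted into the Cloud database. The following composite check is a hypothetical illustration; the FaceMeasurements type and its field names are assumptions, and only the per-setting comparisons follow from the descriptions above.

// Hypothetical composite insertion gate built from the identification settings above.
data class FaceMeasurements(
    val faceSize: Int,
    val occlusion: Double,
    val clippingRatio: Double,
    val centerPoseQuality: Double,
    val faceSharpnessQuality: Double,
    val faceContrastQuality: Double
)

fun allowsCloudInsertion(
    face: FaceMeasurements,
    minFaceSize: Int,                 // faceRecognizerIdentificationMinimumFaceSize()
    maxOcclusion: Double,             // faceRecognizerMaxOcclusion()
    clippingTolerance: Double,        // faceRecognizerIdentificationClippingTolerance()
    minCenterPoseQuality: Double,     // faceRecognizerIdentificationMinimumCenterPoseQuality()
    minFaceSharpnessQuality: Double,  // faceRecognizerIdentificationMinimumFaceSharpnessQuality()
    minFaceContrastQuality: Double    // faceRecognizerIdentificationMinimumFaceContrastQuality()
): Boolean =
    face.faceSize >= minFaceSize &&
    face.occlusion <= maxOcclusion &&
    face.clippingRatio <= clippingTolerance &&
    face.centerPoseQuality >= minCenterPoseQuality &&
    face.faceSharpnessQuality >= minFaceSharpnessQuality &&
    face.faceContrastQuality >= minFaceContrastQuality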
 
// The minimum center pose quality that a face image must have before we try to recognize the face.
faceRecognizerMinimumCenterPoseQuality(): Double
 
// The minimum face sharpness quality that a face image must have before we try to recognize the face.
faceRecognizerMinimumFaceSharpnessQuality(): Double
 
// The minimum face contrast quality that a face image must have before we try to recognize the face.
faceRecognizerMinimumFaceContrastQuality(): Double
 
// The minimum resolution a recognition candidate must have in order to allow merging
faceRecognizerMergingMinimumFaceSize(): Int
 
// The minimum CPQ that a recognition candidate must have in order to allow merging
faceRecognizerMergingMinimumCenterPoseQuality(): Double
 
// The minimum FSQ that a recognition candidate must have in order to allow merging
faceRecognizerMergingMinimumFaceSharpnessQuality(): Double
 
// The minimum FCQ that a recognition candidate must have in order to allow merging
faceRecognizerMergingMinimumFaceContrastQuality(): Double

Detector and ObjectTracker

// High sensitivity detector input size [0 - Extra small; 1 - Small; 2 - Normal; 3 - Large]. 
// The higher the value, the better the accuracy but at the cost of speed.
Settings.retinaInputSize(): Int
 
// The initial (1 of 3) face candidate threshold that is used during face detection when "detector.detect-faces-service" is set to "normal".
// This parameter is for highly advanced usage of the "normal" face detection service.
// It should be used only when optimizing the face detector for performance on a specialized type of data.
// This parameter has no effect when "detector.detect-faces-service" is set to anything other than "normal" (e.g. "high-sensitivity").
initialCandidateThreshold(): Float
 
// The middle (2 of 3) face candidate threshold that is used during face detection when "detector.detect-faces-service" is set to "normal".
// This parameter is for highly advanced usage of the "normal" face detection service.
// It should be used only when optimizing the face detector for performance on a specialized type of data.
// This parameter has no effect when "detector.detect-faces-service" is set to anything other than "normal" (e.g. "high-sensitivity").
middleCandidateThreshold(): Float
 
// The final (3 of 3) face candidate threshold that is used during face detection when "detector.detect-faces-service" is set to "normal".
// This parameter is for highly advanced usage of the "normal" face detection service.
// It should be used only when optimizing the face detector for performance on a specialized type of data.
// This parameter has no effect when "detector.detect-faces-service" is set to anything other than "normal" (e.g. "high-sensitivity").
finalCandidateThreshold(): Float
 
// Threshold used to filter candidates after NMS (non-maximum suppression)
retinaFilterThreshold(): Float
 
// Whether to enable face detection
faceDetectorEnabled(): Boolean
 
// Whether to generate and send a recognizer hint for detected faces
faceDetectorSendRecognizerHint(): Boolean
 
// Selected face detector model [1 - standard; 2 - high definition]
faceDetectorModel(): Int
 
// The expansion factor for detected face thumbnails
faceDetectorObjectThumbnailSizeExpansionFactor(): Float
 
// The minimum number of consecutive recognition attempts that we must run and produce the same person identity before we lock onto this identity and learn it
// (Insert it into the server database).
objectTrackerMinimumNumberOfIdenticalRecognitionsToLearn(): Int
 
// The time in-between reconfirmation attempts.
// We reconfirm the identity of a tracked face from time to time after it has been recognized as a person X.
objectTrackerReconfirmationTimeInterval(): Microseconds
 
// The object detector is advised to search for objects of at least this size.
// This value is applied while searching the image.
faceDetectorMinimumSearchedFaceSize(): Int
 
// The minimum size of faces to accept from the detector.
// Only faces with at least this size are eligible for recognition.
faceDetectorMinimumRequiredFaceSize(): Int
 
// Determines how many more frames we continue to keep a tracked face around after we have failed to detect it in the most recent frame.
// This makes the tracker resilient against intermittent loss of face.
objectTrackerMaximumLingerFrames(): Int
 
// The minimum number of consecutive recognition attempts that we must run and produce the same person identity before we lock onto this identity.
objectTrackerMinimumNumberOfIdenticalRecognitionsToLock(): Int
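
A minimal sketch of the consecutive-recognition rule shared by the learn and lock settings above; the class and its state handling are assumptions, and only the "same identity N times in a row" rule comes from the descriptions:

// Hypothetical sketch: count consecutive identical recognitions for a tracked face
// and compare the count against the lock/learn minimums.
class TrackedIdentityState(
    private val minRecognitionsToLock: Int,   // objectTrackerMinimumNumberOfIdenticalRecognitionsToLock()
    private val minRecognitionsToLearn: Int   // objectTrackerMinimumNumberOfIdenticalRecognitionsToLearn()
) {
    private var candidateIdentity: String? = null
    private var consecutiveMatches = 0

    fun onRecognition(identity: String) {
        if (identity == candidateIdentity) {
            consecutiveMatches++
        } else {
            candidateIdentity = identity
            consecutiveMatches = 1
        }
    }

    fun isLocked(): Boolean = consecutiveMatches >= minRecognitionsToLock
    fun shouldLearn(): Boolean = consecutiveMatches >= minRecognitionsToLearn
}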
 
// Whether to enable correlation of tracked faces and detected faces by comparing the change in area.
objectTrackerFaceSizeCorrelation(): Boolean
 
// Whether the identity of a tracked face is removed every time recognition fails on a high-quality (merge level) face
objectTrackerRemoveIdentityOnFailedRecognition(): Boolean

Event Reporting

// Whether to enable event reporter
eventsReportEnabled(): Boolean
 
// Whether to report events for recognized people only.
// If this is true, events are only reported for people who are recognized; otherwise, events are reported for all detected people.
eventsReportRecognizedPersonEventsOnlyCamera(): Boolean
 
// If this is true, stranger person events are allowed to be generated;
// otherwise, stranger person events will be converted to unidentified/unrecognized person events, which are generated instead.
eventsReportStrangerPersonEvents(): Boolean
 
// If this is valid, it will be used as the lower bound of the stranger age range.
// This means the person will be classified as a stranger only if their age is above this value
// (or their age is not available/wasn't detected); otherwise they will be classified as an unidentified person.
eventsStrangerMinAge(): Int
 
// If this is valid, it will be used as the upper bound of the stranger age range.
// This means the person will be classified as a stranger only if their age is below this value
// (or their age is not available/wasn't detected); otherwise they will be classified as an unidentified person.
eventsStrangerMaxAge(): Int
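
A sketch of the stranger vs. unidentified decision the two age bounds describe; the function and its return values are illustrative, not SDK API:

// Hypothetical sketch of the stranger age-range rule described above: a person is
// classified as a stranger only if the detected age is above the minimum and below
// the maximum, or if no age is available; otherwise the person is unidentified.
fun classifyUnknownPerson(detectedAge: Int?, strangerMinAge: Int, strangerMaxAge: Int): String =
    if (detectedAge == null || (detectedAge > strangerMinAge && detectedAge < strangerMaxAge))
        "stranger"
    else
        "unidentified"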
 
// Reports events for speculated people. 
// Speculated people are faces that aren't a 100% match but are close.
eventsReportSpeculatedPersonEvents(): Boolean
 
// Enables the inclusion of face thumbnails in event reports.
eventsReportSaveFaceImage(): Boolean
 
// Enables the inclusion of scene images in event reports.
eventsReportSaveSceneImage(): Boolean
 
// Delay the event reporting to the server by this amount in seconds.
eventsReportDelay(): Double
 
// The minimum allowed recognized person event duration in seconds.
// Events shorter than this value will not be reported.
eventsReportMinRecognizedPersonEventDuration(): Double
 
// The minimum allowed unrecognized person event duration in seconds.
// Events shorter than this value will not be reported.
eventsReportMinUnrecognizedPersonEventDuration(): Double
 
// The minimum allowed stranger event duration in seconds.
// Events shorter than this value will not be reported.
eventsReportMinUnrecognizedPersonEventDuration(): Double
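
The three minimum-duration settings above all apply the same filter. A minimal sketch; the function name is illustrative:

// Hypothetical sketch: events shorter than the configured minimum for their type are not reported.
fun shouldReportEvent(eventDurationSeconds: Double, minDurationSecondsForType: Double): Boolean =
    eventDurationSeconds >= minDurationSecondsForType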
 
// Whether the event reporter should monitor for event replies after adding the event
eventsEventReplyEnabled(): Boolean
 
// Whether the event reporter should update the face/scene/etc images for the event on the image server if the images are marked as being changed during an update.
eventsReportUpdateImages(): Boolean
 
// Filter for reported event types [PersonEventType.person; PersonEventType.action].
eventsReportEventTypes(): Set<PersonEventType>?

RGB Liveness Detection

// Whether RGB liveness detector is enabled
livenessDetectorEnabled(): Boolean
 
// The minimum CPQ value required for liveness model to be evaluated
livenessMinCPQ(): Float
 
// The minimum sharpness value required for liveness model to be evaluated
livenessMinSharpness(): Float
 
// The minimum contrast value required for liveness model to be evaluated
livenessMinContrast(): Float
 
// The minimum face size required for the liveness model to be evaluated
livenessMinFaceSize(): Int
 
// The minimum face context size (normalized %) for liveness detection
livenessMinFaceContextSize(): Float
 
// Liveness detection threshold - face with liveness value >= this threshold is considered alive
livenessDetectionThreshold(): Float
 
// Fake detection threshold - face with liveness value < this threshold is considered fake
livenessFakeDetectionThreshold(): Float
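
A sketch of the per-face decision the two thresholds above describe; the enum and function are illustrative, not SDK API:

// Hypothetical sketch: at or above the detection threshold the face is considered live,
// below the fake threshold it is considered fake, and anything in between is undecided.
enum class LivenessResult { LIVE, FAKE, UNDECIDED }

fun classifyLiveness(
    livenessValue: Float,
    detectionThreshold: Float,      // livenessDetectionThreshold()
    fakeDetectionThreshold: Float   // livenessFakeDetectionThreshold()
): LivenessResult = when {
    livenessValue >= detectionThreshold -> LivenessResult.LIVE
    livenessValue < fakeDetectionThreshold -> LivenessResult.FAKE
    else -> LivenessResult.UNDECIDED
}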
 
// Initial threshold used to short-circuit stage 2 liveness
livenessInitialDetectionThreshold(): Float
 
// Liveness detector scheme [0 - Texture Unimodal; 1 - Context Unimodal; 2 - Strict Multimodal; 3 - Normal Multimodal; 4 - Tolerant Multimodal]
livenessDetectorScheme(): Int
 
// Number of frames to evaluate live detections
livenessEvaluationFrameCount(): Int
 
// Number of frames to evaluate fake detections
livenessFakeEvaluationFrameCount(): Int
 
// Minimum confirmations threshold to conclude liveness
livenessConfirmationsThreshold(): Float

See Also