The Android SDK offers two camera implementations: VisionCameraSource and CameraXSource. (We recommend that you use CameraXSource.) A preview (CameraXSourcePreview or VisionCameraSourcePreview) should be added to the view hierarchy and passed to the camera source's constructor.
Relevant sample code can be found in the createCameraSource method within the sample app.
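For illustration, a minimal setup could look like the sketch below; the layout id and any constructor arguments beyond the preview are assumptions here, so refer to createCameraSource for the actual signature.
CameraXSourcePreview preview = findViewById(R.id.camera_preview); // Assumed id of the preview placed in the layout
CameraXSource cameraSource = new CameraXSource(preview /*, settings, ... */); // Hypothetical extra arguments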
This can happen on lower-end devices, in which case you can try the following:
Reduce the camera frame resolution. Loop through the camera profiles returned in createCameraSource and select a lower-resolution profile, as sketched below.
final CameraProfile cp = cameraProfiles.get(facing).get(0); // Select a lower-resolution profile instead of the first one
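As a sketch, a simple loop could pick the lowest-resolution profile instead; the getWidth()/getHeight() accessors are assumed names and may differ in the SDK.
List<CameraProfile> profiles = cameraProfiles.get(facing);
CameraProfile cp = profiles.get(0);
for (CameraProfile candidate : profiles) {
    if (candidate.getWidth() * candidate.getHeight() < cp.getWidth() * cp.getHeight()) {
        cp = candidate; // Keep the lowest-resolution profile seen so far
    }
}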
If you're using a High Sensitivity detector, try reducing the detector input size.
settings.setRetinaInputSize(...) // 0 - Extra small; 1 - Small; 2 - Normal; 3 - Large
Switch to the Standard detector. (Note that this will adversely impact the detector's accuracy.)
//faceDetectorServiceConfig.detectorModel = FDRuntimeOptions.FD_STANDARD; // Uncomment to enable
DetectedObject is used to obtain a face bitmap, specifically via the DetectedObject.getThumbnail() method. The method returns a downscaled bitmap whose size depends on the face's aspect ratio (the smaller side will be 240px).
If a different size is needed, the DetectedObject also exposes the pixel bounds needed to extract the face from the original frame.
The sample code can be found in ShareManger.share(), where the face thumbnail is saved to the local cache storage and shared via an Android share intent.
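As a rough sketch, both options could look like this; getThumbnail() is described above, while the bounds accessor (getBounds() here) and the source frame bitmap are assumptions.
Bitmap thumbnail = detectedObject.getThumbnail(); // Downscaled; the smaller side is 240px
// If another size is needed, crop the face from the original frame using the reported pixel bounds
Rect bounds = detectedObject.getBounds(); // Hypothetical accessor name
Bitmap face = Bitmap.createBitmap(frameBitmap, bounds.left, bounds.top, bounds.width(), bounds.height());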
Initialize the livenessActionRecognizer.
livenessActionRecognizer = new LivenessActionRecognizer(settings);
livenessActionRecognizer.setDelegate(this);
Pass tracking results to the liveness recognizer.
if (livenessActionRecognizer != null) {
    livenessActionRecognizer.update(trackingResult);
}
Read liveness info once received.
final Boolean isLivenessConfirmed = trackedObject.isLivenessConfirmed();
if (isLivenessConfirmed) {
    // trackedObject.getLivenessConfidence(); // Optional: read the confidence of the liveness decision
}
Pass the liveness object action to the object event log.
@Override
public void didRecognizeObjectAction(ObjectActionRecognizer objectActionRecognizer, ObjectAction objectAction, TrackedObject trackedObject) {
    if (objectActionRecognizer == livenessActionRecognizer) {
        // Update the overlay on the UI thread
        DispatchQueue.Main.async(() -> overlay.onFaceLivenessConfirmed(trackedObject));
        // Propagate the objectAction to the objectEventLog in order to report it
        if (objectEventLog != null) {
            objectEventLog.update(trackedObject, objectAction);
        }
    }
}
Use AglVideoView to play rtsp:// streams.
The sample code can be found in VidiaFragment.java. Search for aglView references.
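As a minimal sketch (the AglVideoView method names below are assumptions, not the documented API; use the aglView calls in VidiaFragment.java as the reference):
AglVideoView aglView = findViewById(R.id.agl_view); // Assumed layout id
aglView.setSource("rtsp://<host>:<port>/stream");   // Assumed setter for the stream URL
aglView.start();                                     // Assumed playback call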
Set the decoder type option to AglOptions.DECODER_MEDIACODEC.
The sample code can be found in VidiaFragment.java. Search for aglView references.
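Only the constant AglOptions.DECODER_MEDIACODEC is given above; how it is applied below (an options object with a setter) is an assumption, so check the aglView setup in VidiaFragment.java.
AglOptions options = new AglOptions();                 // Assumed options object
options.setDecoderType(AglOptions.DECODER_MEDIACODEC); // Assumed setter name
aglView.setOptions(options);                           // Assumed way to apply the options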
Initialize ObjectEventLog and PeopleIndexer and pass the TrackingResult to ObjectEventLog.update().
The sample code can be found in VidiaFragment.java. Search for objectEventLog references.
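A sketch of the wiring, assuming both constructors take the detection settings (only ObjectEventLog.update() is named above); the actual initialization is in VidiaFragment.java.
objectEventLog = new ObjectEventLog(settings); // Assumed constructor arguments
peopleIndexer = new PeopleIndexer(settings);   // Assumed constructor arguments
// On every tracking callback, forward the result so events are logged and indexed
objectEventLog.update(trackingResult);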
The demo app provides an example service that plays rtsp:// streams and feeds frames into the ObjectTracker for face detection. A notification is posted when a person from the stream is recognized.
Usually, the service should be started and stopped from the Activity's onPause() and onResume() callbacks.
This isn't SDK-related, but there is sample code to schedule the start/stop of the service, located in VidiaFragment.java. Search for CameraProcessorService references.
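A minimal sketch using standard Android APIs, following the order implied above (start the service when the Activity is paused, stop it when it resumes); CameraProcessorService is the demo app's service class.
@Override
protected void onPause() {
    super.onPause();
    startService(new Intent(this, CameraProcessorService.class));
}

@Override
protected void onResume() {
    super.onResume();
    stopService(new Intent(this, CameraProcessorService.class));
}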