This is the base class of all ArgusKit objects. The C functions provided by ARKObject can be used with all ArgusKit object types. ARKObject provides APIs for memory management and for generating a debug description of an object that is printed to the console.
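The document does not list the C spellings of these APIs, so the names in the following sketch (ARKObjectRetain, ARKObjectRelease, ARKObjectPrintDebugDescription, and the umbrella header) are assumptions for illustration only:

```c
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

void inspect(ARKObjectRef object)
{
    /* Hypothetical: take a strong reference so the object outlives this scope. */
    ARKObjectRetain(object);

    /* Hypothetical: print a human-readable debug description to the console. */
    ARKObjectPrintDebugDescription(object);

    /* Hypothetical: balance the retain above; the object may be freed here. */
    ARKObjectRelease(object);
}
```

Because every ArgusKit type derives from ARKObject, the same calls would work on trackers, tracking results, video frames, and so on.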
The ARKEnvironment object encapsulates the cloud service to which ArgusKit should connect in order to execute cloud-based functions such as face recognition.
An object tracker configuration object stores the configuration information for an object tracker, the object detector, and the face recognizer. The configuration object is initialized with default values allowing the tracker to detect and recognize faces. You may also instantiate a configuration object based on one of a number of predefined configuration presets. A preset encapsulates all the configuration information needed to use the object tracker for a specific type of task, for example, to recognize faces or to recognize and learn faces.
After you have instantiated a configuration object, set the cloud account, the cloud environment, and the face recognizer directory name. This is the minimum information you need to provide for the object tracker to successfully detect and recognize faces.
If you want to detect faces only, without recognizing them, you can turn off the face recognizer stage by setting the Recognizer.Enabled property to false.
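A minimal configuration sketch under stated assumptions: every ARK* identifier below is a guessed C spelling of the steps described above, and "main" is a placeholder directory name, not a documented default:

```c
#include <stdbool.h>
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

/* All ARK* names in this sketch are assumptions for illustration. */
ARKObjectTrackerConfigurationRef
make_configuration(ARKCloudAccountRef account, ARKEnvironmentRef environment)
{
    /* Created with defaults that detect and recognize faces; a preset-based
       creation call may also be available. */
    ARKObjectTrackerConfigurationRef config = ARKObjectTrackerConfigurationCreate();

    /* Minimum required information: */
    ARKObjectTrackerConfigurationSetCloudAccount(config, account);
    ARKObjectTrackerConfigurationSetEnvironment(config, environment);
    ARKObjectTrackerConfigurationSetRecognizerDirectory(config, "main");

    /* Detection without recognition: the C analogue of
       Recognizer.Enabled = false. */
    ARKObjectTrackerConfigurationSetRecognizerEnabled(config, false);

    return config;
}
```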
The object tracker is the heart of ArgusKit. It receives a stream of video frames that it analyzes to find objects. It tracks found objects for as long as they remain visible in the video stream. Object detection, recognition, and tracking are executed in real time. An object tracker is connected to a delegate, defined by a set of C callback functions you pass to the object tracker at creation time. The object tracker informs its delegate at every frame about the current state of the objects it is tracking. The delegate can then use the tracked object APIs to learn what kind of objects the tracker found and what their current spatial location, size, and state are.
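The document does not spell out the delegate structure, so the following sketch assumes a struct of C function pointers and a creation call named ARKObjectTrackerCreate; all of these names are illustrative guesses:

```c
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

/* Hypothetical callback: invoked once per analyzed frame. */
static void on_tracking_result(void *context, ARKTrackingResultRef result)
{
    (void)context;
    (void)result; /* inspect the result here; see the tracking result sketch below */
}

ARKObjectTrackerRef make_tracker(ARKObjectTrackerConfigurationRef config)
{
    /* Hypothetical delegate: the set of C callback functions handed to the
       tracker at creation time. */
    ARKObjectTrackerDelegate delegate = {
        .context          = NULL,
        .onTrackingResult = on_tracking_result,
    };
    return ARKObjectTrackerCreate(config, &delegate);
}
```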
A tracking result object contains a snapshot of the current state of the object tracker. The object tracker delivers a new tracking result at every frame boundary. The tracking result contains lists of the tracked objects that have appeared in the current frame, that have disappeared, and that have changed state in the current frame. For example, if a tracked object was detected in a previous frame and the object tracker has now been able to recognize it as a specific identity, the tracked object is included in the list of updated tracked objects.
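A sketch of walking those three lists inside the delegate callback; the accessor names are assumptions for illustration, not documented API:

```c
#include <stddef.h>
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

static void on_tracking_result(void *context, ARKTrackingResultRef result)
{
    (void)context;

    /* Objects that entered the scene in this frame. */
    size_t n = ARKTrackingResultGetAppearedObjectCount(result);
    for (size_t i = 0; i < n; i++) {
        ARKTrackedObjectRef object = ARKTrackingResultGetAppearedObjectAt(result, i);
        /* ARKObject APIs apply to every ArgusKit type. */
        ARKObjectPrintDebugDescription((ARKObjectRef)object);
    }

    /* Objects whose state changed, e.g. a face that was just recognized. */
    n = ARKTrackingResultGetUpdatedObjectCount(result);
    for (size_t i = 0; i < n; i++) {
        ARKTrackedObjectRef object = ARKTrackingResultGetUpdatedObjectAt(result, i);
        ARKObjectPrintDebugDescription((ARKObjectRef)object);
    }

    /* Objects that left the scene in this frame. */
    n = ARKTrackingResultGetDisappearedObjectCount(result);
    for (size_t i = 0; i < n; i++) {
        ARKTrackedObjectRef object = ARKTrackingResultGetDisappearedObjectAt(result, i);
        ARKObjectPrintDebugDescription((ARKObjectRef)object);
    }
}
```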
iOS: In the sample application, this is wrapped by the Swift object TrackingResult. The CameraViewController.swift file contains a function called handleTrackingResult that illustrates how the tracking result can be used to get the bounding box and other properties for the face.
A tracked object represents a single, unique instance of an object that the object tracker has detected in the input video stream and is actively tracking. A tracked object has a type indicating whether it is a badge or the face of a person. A tracked object also has an axis-aligned bounding box that tells you where in the input video frame the tracked object can be found and what its size is. This bounding box can be used with the original frame passed to the object tracker to extract a thumbnail image from the video frame.
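For example, the type check and bounding box lookup might look like the sketch below; the accessor names and the ARKRect type are assumptions, and the coordinate convention (pixels versus normalized) may differ in the real API:

```c
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

/* Returns the face's bounding box, or an empty rect for non-face objects.
   All ARK* names here are illustrative guesses. */
ARKRect face_bounds(ARKTrackedObjectRef object)
{
    /* A tracked object is either a badge or a face. */
    if (ARKTrackedObjectGetType(object) == ARKTrackedObjectTypeFace) {
        /* Axis-aligned box in input-frame coordinates; apply it to the
           original frame to crop a thumbnail image. */
        return ARKTrackedObjectGetBounds(object);
    }
    return (ARKRect){0, 0, 0, 0};
}
```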
A tracked badge is a tracked object that encodes an integer number in the form of a special pattern.
A tracked face represents a human face and may be linked to a person. If the object tracker is able to recognize the face as belonging to a specific person, the tracked face provides a reference to the corresponding person object. The tracked face also contains other detected attributes of the face, if they are enabled, such as center pose quality, yaw, roll, and pitch.
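A sketch of reading those attributes; all accessor names are assumptions for illustration:

```c
#include <stdio.h>
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

void describe_face(ARKTrackedFaceRef face)
{
    /* Hypothetical: NULL until the recognizer links the face to an identity. */
    ARKPersonRef person = ARKTrackedFaceGetPerson(face);
    if (person != NULL) {
        printf("recognized: %s\n", ARKPersonGetName(person));
    }

    /* Pose attributes are only meaningful if enabled in the configuration. */
    printf("cpq=%.2f yaw=%.1f roll=%.1f pitch=%.1f\n",
           ARKTrackedFaceGetCenterPoseQuality(face),
           ARKTrackedFaceGetYaw(face),
           ARKTrackedFaceGetRoll(face),
           ARKTrackedFaceGetPitch(face));
}
```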
A person object provides information about a person who has been registered with the ArgusKit face recognition service. Each person has a unique identifier. You may assign a name and a set of tags to a person with the help of an ARKPersonChange object and the ARKObjectTrackerApplyPersonChange() object tracker function.
A person change object stores attribute changes to apply to the person record on file in the ArgusKit face recognition service. This object allows you to change a person's name, tags, age, or gender information.
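ARKObjectTrackerApplyPersonChange() is named above; the creation call and setters in this sketch are assumptions for illustration:

```c
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

void rename_person(ARKObjectTrackerRef tracker, ARKPersonRef person)
{
    /* Hypothetical: a change object targeting this person's record. */
    ARKPersonChangeRef change = ARKPersonChangeCreate(person);

    ARKPersonChangeSetName(change, "Jane Doe"); /* hypothetical setter */
    ARKPersonChangeAddTag(change, "employee");  /* hypothetical setter */

    /* Documented entry point: pushes the change to the recognition service. */
    ARKObjectTrackerApplyPersonChange(tracker, change);

    /* Hypothetical: release per the ARKObject memory management APIs. */
    ARKObjectRelease(change);
}
```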
A VideoPlayer allows you to play back a video file or an HTTP/RTSP video stream.
Note: The video player supports video streams only; it does not support audio streams.
Create an instance of a video player and register an event handler with the DidDecode event property to receive the decoded video frames. Next, start playback by setting the Paused property to false. You can pause playback at any time by setting the Paused property to true.
You can stop playback altogether in preparation for disposing of the video player by calling Stop(). You can stop playback and dispose of the player at the same time by calling Dispose().
The video player enforces the video clock if the URL points to a video file, but it does not enforce the video clock if the URL points to an HTTP or RTSP video stream. In that case, the video player decodes video frames as fast as they arrive from the camera. It does this to minimize video playback latency.
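The DidDecode, Paused, Stop(), and Dispose() names above read like a managed binding; a C-style sketch of the same lifecycle, with every ARK* spelling an assumption, might look like this:

```c
#include <stdbool.h>
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

/* Handler for the DidDecode event: receives each decoded video frame. */
static void did_decode(void *context, ARKVideoFrameRef frame)
{
    (void)context;
    (void)frame; /* forward the frame to an object tracker, render it, etc. */
}

void play(const char *url)
{
    /* Hypothetical creation and registration calls. */
    ARKVideoPlayerRef player = ARKVideoPlayerCreateWithURL(url);
    ARKVideoPlayerSetDidDecodeHandler(player, did_decode, NULL);

    ARKVideoPlayerSetPaused(player, false); /* start playback */
    /* ... */
    ARKVideoPlayerSetPaused(player, true);  /* pause at any time */

    ARKVideoPlayerStop(player);    /* stop in preparation for disposal */
    ARKVideoPlayerDispose(player); /* stop (if still playing) and free the player */
}
```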
A video frame cache is a pool of cached video frames that allows you to create new video frames efficiently.
A video frame object encapsulates a single decoded video frame that is passed to an object tracker instance.
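A sketch tying the two together, reusing a pooled frame and handing it to a tracker; all names are assumptions for illustration, including the ownership model:

```c
#include <stddef.h>
#include <ArgusKit/ArgusKit.h> /* hypothetical umbrella header */

void submit_frame(ARKVideoFrameCacheRef cache,
                  ARKObjectTrackerRef tracker,
                  const void *pixels, size_t byteCount)
{
    /* Hypothetical: reuse a pooled frame instead of allocating a new one. */
    ARKVideoFrameRef frame = ARKVideoFrameCacheAcquireFrame(cache);
    ARKVideoFrameSetPixelData(frame, pixels, byteCount);

    /* Hand the decoded frame to the object tracker for analysis. */
    ARKObjectTrackerSubmitVideoFrame(tracker, frame);

    /* Hypothetical: releasing returns the frame to the cache's pool. */
    ARKObjectRelease(frame);
}
```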