Adapting cameras and face trackers
Camera capture and face tracking are two important parts of the DeepAffex solution, and there are a number of third-party camera and face-tracker components available on the market. To let you use them in your own app, Anura Core SDK provides adapter interfaces.
Anura Core SDK also provides default implementations of the camera and face-tracker adapters. We recommend using these default implementations, as demonstrated in the included Sample Apps.
Camera Adapter
Core SDK for Android provides the CameraAdapter interface. You can implement it and pass your implementation to VideoSource.createCameraSource(), as shown below:
public interface VideoSource extends Module {
    /**
     * Creates a camera source instance, which is used to capture video frames
     * and pass them to the downstream pipeline.
     *
     * @param name          the name of this camera source
     * @param core          {@link Core} the core instance
     * @param format        {@link VideoFormat} the format of this video source
     * @param cameraAdapter {@link CameraAdapter} the implementation of camera adapter
     * @return a camera source instance
     */
    static VideoSource createCameraSource(
            @NonNull String name,
            @NonNull Core core,
            @NonNull VideoFormat format,
            @NonNull CameraAdapter cameraAdapter) {
        return new CameraSourceImpl(name, core, format, cameraAdapter);
    }
}
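For example, a custom camera adapter can be wired into the pipeline roughly as follows. This is a minimal sketch: MyCameraAdapter is a hypothetical CameraAdapter implementation built on a third-party camera component, and core and videoFormat are assumed to have been created earlier in your app.

// MyCameraAdapter is a hypothetical class implementing CameraAdapter
// on top of a third-party (or platform) camera API.
CameraAdapter cameraAdapter = new MyCameraAdapter();

// Create the camera source that captures frames and feeds them downstream.
VideoSource cameraSource = VideoSource.createCameraSource(
        "cameraSource",   // the name of this camera source
        core,             // the Core instance created earlier
        videoFormat,      // the desired VideoFormat
        cameraAdapter);   // your CameraAdapter implementation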
Face Tracker
The Core SDKs for Android and iOS both provide a face-tracker interface.
Android provides FaceTrackerAdapter:
public class UserFaceTracker implements FaceTrackerAdapter {
    // ...
}
public interface VideoPipe extends VideoSource {
    /**
     * Creates a face tracker pipe instance.
     *
     * @param name        the name of this pipe
     * @param core        {@link Core} the core instance
     * @param format      {@link VideoFormat} the format of this video pipe
     * @param faceTracker the implementation of {@link FaceTrackerAdapter}
     * @return a face tracker pipe instance
     */
    static VideoPipe createFaceTrackerPipe(
            @NonNull String name,
            @NonNull Core core,
            @NonNull VideoFormat format,
            @NonNull FaceTrackerAdapter faceTracker) {
        return new FaceTrackerPipeImpl(name, core, format, faceTracker);
    }

    // ...
}
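The face tracker pipe is created in the same way. The following is a minimal sketch, assuming core and videoFormat were created earlier in your app and that UserFaceTracker (your FaceTrackerAdapter implementation) exposes a no-argument constructor.

// UserFaceTracker is your own implementation of FaceTrackerAdapter,
// assumed here to have a no-argument constructor.
FaceTrackerAdapter faceTracker = new UserFaceTracker();

// Create a face tracker pipe that runs the custom tracker on frames
// received from the upstream video source.
VideoPipe faceTrackerPipe = VideoPipe.createFaceTrackerPipe(
        "faceTrackerPipe",   // the name of this pipe
        core,                // the Core instance created earlier
        videoFormat,         // the VideoFormat of this pipe
        faceTracker);        // your FaceTrackerAdapter implementation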
iOS provides FaceTrackerProtocol:
public protocol FaceTrackerProtocol {
    init(quality: FaceTrackerQuality)

    func trackFace(from videoFrame: VideoFrame, completion: ((TrackedFace) -> Void)!)

    func lock()
    func unlock()
    func reset()

    var trackingBounds: CGRect { get set }
    var quality: FaceTrackerQuality { get set }
    var delegate: FaceTrackerDelegate? { get set }
}
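As a rough sketch, a custom tracker can conform to this protocol along the following lines. MyFaceTracker is hypothetical, the SDK module import is omitted, and the method bodies are placeholders for calls into your third-party face tracker.

import CoreGraphics
// (also import the Anura Core SDK module that declares FaceTrackerProtocol)

public class MyFaceTracker: FaceTrackerProtocol {
    public var trackingBounds: CGRect = .zero
    public var quality: FaceTrackerQuality
    public var delegate: FaceTrackerDelegate?

    public required init(quality: FaceTrackerQuality) {
        self.quality = quality
    }

    public func trackFace(from videoFrame: VideoFrame, completion: ((TrackedFace) -> Void)!) {
        // Run the third-party tracker on `videoFrame` here and invoke
        // `completion` with the resulting TrackedFace.
    }

    // Placeholder implementations; fill in with your tracker's own behaviour.
    public func lock() {}
    public func unlock() {}
    public func reset() {}
}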