The Anura Web Core SDK npm package exports three modules: Measurement, Camera Controller, and Video Controller.
Measurement is the main module, which allows you to build your own web applications for Affective AI. To prepare the Measurement module, you will need the following:

- a token and refreshToken pair for authentication
- a Study ID, which sets the list of DeepAffex points
- a mediaElement, an instance of HTMLDivElement in the DOM

The SDK uses the input MediaStream to make a measurement and displays the results in the mediaElement.
import { Measurement, faceAttributeValue, faceTrackerState, type Settings } from "@nuralogix.ai/anura-web-core-sdk";

const mediaElement = document.getElementById("media-element");
if (mediaElement && mediaElement instanceof HTMLDivElement) {
  const settings: Settings = {
    mediaElement,
    // apiUrl: "api.deepaffex.ai", [Optional]
    assetFolder: "./assets",
    metrics: false,
  };
  const measurement = await Measurement.init(settings);
  // token, refreshToken and studyId are obtained from your backend
  await measurement.prepare(token, refreshToken, studyId);
}
Note:

- If the optional apiUrl is not set in the SDK, it is determined automatically from the token's region. This effectively ties the frontend region to the token's region, which should be suitable for most use cases. If apiUrl is explicitly set, the frontend will communicate with that URL regardless of the token's region; in that case, it is the implementor's responsibility to ensure compatibility and prevent potential issues.
- Results are always stored in the token's region.
- Data processing occurs in the frontend's region, as determined by your implementation.
- If the backend only registers a license and returns a device token to perform an anonymous measurement, the token's region will match the region specified in the backend's API_URL.
- However, if the backend first registers a license to obtain a device token and then uses that token to log in a user and return the new tokens to the frontend, the region is determined by the user token, not the device token. For example, if the user was originally created in eu-central but the device token is in na-east, the region will be eu-central.
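For illustration, the frontend would typically receive its token pair from your own backend rather than handling DeepAffex credentials directly. A minimal sketch, assuming a hypothetical /auth/tokens route on your backend:

// Fetch a token pair from your own backend (route and response shape are assumptions)
const response = await fetch("/auth/tokens");
const { token, refreshToken } = (await response.json()) as {
  token: string;
  refreshToken: string;
};
await measurement.prepare(token, refreshToken, studyId);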
The Measurement module is designed to be used with an HTMLDivElement (referred to as mediaElement), into which it injects an HTMLVideoElement and an SVG-based mask. The module emits Drawables via the facialLandmarksUpdated event. The SVG mask overlays the video, and the drawables can be used to update the mask.
The prebuilt mask classes are not minified, allowing developers to use them as a reference for creating their own mask implementations. You can copy a mask from the node_modules/@nuralogix.ai/anura-web-core-sdk/lib/masks folder and customize it. Having multiple masks allows runtime mask swapping. Alternatively, you can use any of the prebuilt masks without modification.
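For instance, after copying a mask into your project, you can import the customized copy in place of the prebuilt one. A minimal sketch, assuming the copy lives at ./masks/anura:

// Import a local, customized copy of a prebuilt mask (the path is hypothetical)
import { AnuraMask } from "./masks/anura";
const customMask = new AnuraMask();

To use the prebuilt Anura mask with its optional settings: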
import { AnuraMask, type AnuraMaskSettings } from "@nuralogix.ai/anura-web-core-sdk/lib/masks/anura";

// Optional Anura Mask Settings
const anuraMaskSettings: AnuraMaskSettings = {
  starFillColor: '#39cb3a',
  starBorderColor: '#d1d1d1',
  pulseRateColor: 'red',
  pulseRateLabelColor: '#ffffff',
  backgroundColor: '#ffffff',
  countDownLabelColor: '#000000',
  faceNotCenteredColor: '#fc6a0f',
  /** must be > 0 and <= 1 */
  diameter: 0.8,
  /** must be > 0 and <= 1 */
  topMargin: 0.06,
  /** must be > 0 and <= 1 */
  bottomMargin: 0.02,
  shouldFlipHorizontally: true,
};
const mask = new AnuraMask(anuraMaskSettings);
or
import { TeleMask } from "@nuralogix.ai/anura-web-core-sdk/lib/masks/tele";
const mask = new TeleMask();
There are two steps to loading a mask: set the object fit to match the mask, then load its SVG. Once a MediaStream has been set, you can start tracking:

await measurement.setMediaStream(mediaStream);
const success = measurement.setObjectFit(mask.objectFit);
if (success) {
  measurement.loadMask(mask.getSvg());
  await measurement.startTracking();
}
Note that when you initialize the mask, its loading state is set to true. When the face tracker is ready to track frames, you need to set the loading state back to false:
measurement.on.faceTrackerStateChanged = async (state) => {
  if (state === faceTrackerState.READY) {
    mask.setLoadingState(false);
  }
};
measurement.on.facialLandmarksUpdated = (drawables: Drawables) => {
  if (drawables.face.detected) {
    mask.draw(drawables);
    if (drawables.percentCompleted >= 100) {
      // When the measurement is complete and the app is waiting for
      // the final results, you can set the mask back to loading state.
      mask.setLoadingState(true);
    }
  } else {
    console.log("No face detected");
  }
};
setMaskVisibility controls the visibility of the mask:
mask.setMaskVisibility(false); // Hides the mask
mask.setMaskVisibility(true); // Shows the mask
setIntermediateResults updates the mask with intermediate measurement results:
measurement.on.resultsReceived = (results: Results) => {
  const { points, resultsOrder, finalChunkNumber } = results;
  // Intermediate results
  if (resultsOrder < finalChunkNumber) {
    mask.setIntermediateResults(points);
  }
};
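Correspondingly, the final results arrive when resultsOrder reaches finalChunkNumber; how you present the final points is up to your application. A minimal sketch that extends the handler above:

measurement.on.resultsReceived = (results: Results) => {
  const { points, resultsOrder, finalChunkNumber } = results;
  if (resultsOrder < finalChunkNumber) {
    mask.setIntermediateResults(points);
  } else {
    // Final results: hide the mask and render the points in your own UI
    mask.setMaskVisibility(false);
    console.log('Final results', points);
  }
};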
Use resize to keep the mask in sync when the mediaElement is resized:
measurement.on.mediaElementResize = (event: MediaElementResizeEvent) => {
  mask.resize(event.detail);
};
Suppose you have the following HTML select element that allows you to choose between two masks.
<select title="mask-select" id="masks-list">
  <option value="anura">Anura Mask</option>
  <option value="tele">Tele Mask</option>
</select>
The following script is just an example of how to change the mask at runtime. This is useful when you want to display different masks in different runtime environments, or switch to another mask when a condition is met. Note that changing a mask at runtime triggers the MediaElementResizeEvent twice.
import { AnuraMask } from "@nuralogix.ai/anura-web-core-sdk/lib/masks/anura";
import { TeleMask } from "@nuralogix.ai/anura-web-core-sdk/lib/masks/tele";

const masksList = document.getElementById('masks-list');
let mask: AnuraMask | TeleMask = new AnuraMask();
masksList?.addEventListener('change', (e) => {
  const { value } = e.target as HTMLSelectElement;
  mask = value === 'tele'
    ? new TeleMask()
    : new AnuraMask();
  const success = measurement.setObjectFit(mask.objectFit);
  if (success) measurement.loadMask(mask.getSvg());
});
The Anura Web Core SDK comes with two optional helper modules, Camera Controller and Video Controller. They help you obtain a MediaStream, either from an attached webcam or from a video file, and pass it to the Measurement module. Using them is NOT mandatory; you can use your own method to obtain a MediaStream.
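For example, a plain getUserMedia call is enough to obtain a camera MediaStream without the helpers (the constraints below are illustrative):

// Obtain a camera stream directly from the browser
const mediaStream = await navigator.mediaDevices.getUserMedia({
  video: { width: { ideal: 1280 }, height: { ideal: 720 } },
  audio: false,
});
await measurement.setMediaStream(mediaStream);

To use the Camera Controller helper instead: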
import helpers, {
  CameraControllerEvents,
  type CameraStatusChanged,
  type SelectedCameraChanged,
  type MediaDeviceListChanged
} from "@nuralogix.ai/anura-web-core-sdk/lib/helpers";

const { CameraController } = helpers;
const {
  CAMERA_STATUS,
  SELECTED_DEVICE_CHANGED,
  MEDIA_DEVICE_LIST_CHANGED
} = CameraControllerEvents;

const camera = CameraController.init();

const onSelectedDeviceChanged = async (e: SelectedCameraChanged) => {
  const { deviceId } = e.detail;
  console.log('Selected deviceId', deviceId);
};
const onMediaDeviceListChanged = async (e: MediaDeviceListChanged) => {
  const { mediaDevices } = e.detail;
  console.log('Media devices changed', mediaDevices);
};
const onCameraStatus = async (e: CameraStatusChanged) => {
  const { isOpen } = e.detail;
  if (isOpen) {
    const { capabilities } = e.detail;
    console.log({ capabilities });
  }
};

camera.addEventListener(
  SELECTED_DEVICE_CHANGED,
  (onSelectedDeviceChanged as unknown as EventListener)
);
camera.addEventListener(
  MEDIA_DEVICE_LIST_CHANGED,
  (onMediaDeviceListChanged as unknown as EventListener)
);
camera.addEventListener(
  CAMERA_STATUS,
  (onCameraStatus as unknown as EventListener)
);
To use the Video Controller helper with a video file:

import helpers, { type VideoControllerSettings } from "@nuralogix.ai/anura-web-core-sdk/lib/helpers";

const { VideoController } = helpers;

const videoControllerSettings: VideoControllerSettings = {
  mimeCodec: 'video/mp4',
  // callback for when video is loaded
  videoLoadedCallback: async () => {
    videoStream.videoElement.addEventListener("playing", async () => {
      // the video is playing; its stream can now be passed to the Measurement module
    });
    await videoStream.videoElement.play();
  },
  // callback for when video ends
  videoEndedCallback: async () => {
    // e.g. stop tracking or clean up
  }
};

// Initialize video controller
const videoStream = VideoController.init(videoControllerSettings);