Basic Usage
Here we discuss basic usage of the DeepAffex Library for blood-flow extraction. We use Python here for clarity, but the API is the same in all supported languages.
DFX Factory
A DFX Factory is the primary entry point into the library. A Factory object is constructed by calling a parameterless constructor.
import libdfx as dfxsdk  # Package name is an assumption based on the Python SDK distribution; adjust for your installation
factory = dfxsdk.Factory()
The next step is to use the SDK ID to obtain study configuration data from a POST call to the Cloud API's Studies.Sdkconfig endpoint and to use that data to initialize the Factory. (This assumes that you have registered, logged in, obtained a token and selected a study as discussed in the authentication chapter.)
We pass the SDK ID, the Study ID and the hash of the study configuration data we currently have on hand. The first time we call this endpoint, the hash is an empty string. If updated study configuration data is available, we will get a 200 response whose body contains the base64-encoded study configuration data and its hash. If the hash we sent is up to date, we will get a 304 response. Please cache the study configuration data and its hash for future use.
sdk_id = factory.getSdkId()
study_cfg_bytes = ...  # Obtain via the Cloud API call with sdk_id, study_id and current_hash, then base64-decode
if not factory.initializeStudy(study_cfg_bytes):
    print(f"DFX study initialization failed: {factory.getLastErrorMessage()}")
If the initialization had no errors, our next step is to create a DFX Collector.
DFX Collector
A Collector collects Frame(s) containing Face(s) and produces chunks of data containing blood flow information (ChunkData). To create a collector, we call the createCollector method of an initialized Factory object.
collector = factory.createCollector()
The collector uses frame timestamp information to determine when a chunk of data is ready to be sent to the DeepAffex Cloud for processing. Thus, before we can start using it, we have to set some important properties: the anticipated framerate, the duration of each chunk of data, and the duration of the measurement (set indirectly by setting the number of chunks to be sent).
collector.setTargetFPS(fps)
collector.setChunkDurationSeconds(chunk_duration_s)
collector.setNumberChunks(number_chunks) # measurement duration = chunk_duration_s * number_chunks
Next, we create a measurement on the DeepAffex Cloud using a POST call on the Measurements.Create endpoint.
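A rough sketch of that call follows, continuing with the same assumptions (requests, base URL, field names) as the earlier Studies.Sdkconfig sketch; the endpoint path and the response key are also assumptions.
response = requests.post("https://api.deepaffex.ai/measurements",
                         headers={"Authorization": f"Bearer {token}"},
                         json={"StudyID": study_id})
measurement_id = response.json()["ID"]  # Assumed response key; needed when sending chunks later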
At this stage, we are ready to start collecting blood flow data. However, a few points should be noted.
- The minimum chunk duration is 5 seconds. Intermediate results will be available at this interval.
- The chunk duration may not evenly divide the total duration of a measurement, e.g., in the case of a video of predetermined length. In that case, we set the number of chunks to be one more than the quotient and, for the last chunk, once we have received the last frame, we use the forceComplete method of the collector (see the sketch after this list).
- In a live camera measurement, we could delay the start of blood flow data collection until the person's face is in the frame and certain conditions like lighting have been met. This is the DFX Constraints system, which is discussed in more detail in the next section.
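A minimal sketch of the fixed-length-video case from the first note, assuming the video's total duration is known and that forceComplete takes no arguments:
# Use one more chunk than the quotient when chunk_duration_s does not
# evenly divide total_duration_s
quotient, remainder = divmod(total_duration_s, chunk_duration_s)
number_chunks = int(quotient) + (1 if remainder else 0)
collector.setNumberChunks(number_chunks)

# ... later, once the last frame of the video has been added to the collector:
collector.forceComplete()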
Assuming we have an image source producing frames and a face tracker that can track each frame and produce facial landmark information, we proceed by calling startCollection on the collector object and adding frames to it.
collector.startCollection()
Adding frames
To add frames to the collector:
- We first wrap our native image format into a DFX VideoFrame object (which tells the collector things like the timestamp of the frame and the channel order of the frame).
dfx_video_frame = dfxsdk.VideoFrame(image,
                                    frame_number,        # Relative frame number
                                    frame_timestamp_ns,  # Relative frame timestamp in nanoseconds
                                    dfxsdk.ChannelOrder.CHANNEL_ORDER_BGR)
- Then, we create a DFX Frame object by passing our VideoFrame to the collector.
dfx_frame = collector.createFrame(dfx_video_frame)
- Then, we add the DFX Face objects containing face tracking information to the Frame.
for dfx_face in dfx_faces:
    dfx_frame.addFace(dfx_face)
- Finally, we create regions (from which the facial blood flow information will be extracted) and we extract the information. We also check to see if a chunk of data is ready and whether the measurement has ended.
collector.defineRegions(dfx_frame)
result = collector.extractChannels(dfx_frame)
if result == dfxsdk.CollectorState.CHUNKREADY or \
   result == dfxsdk.CollectorState.COMPLETED:
    chunk_data = collector.getChunkData()
    if chunk_data is not None:
        chunk = chunk_data.getChunkPayload()
        # Send the chunk to the DeepAffex Cloud
    if result == dfxsdk.CollectorState.COMPLETED:
        break  # Exit our image collection loop
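The "send the chunk" comment above hides another Cloud API call, sketched roughly below against the Measurements.Data endpoint. The chunk payload attribute names and the JSON field names are assumptions based on other DeepAffex client examples and may not match your SDK or API version, so check the Cloud API reference before relying on them.
# Hypothetical sketch of sending a chunk for processing
meta = chunk.metadata.decode("utf-8") if isinstance(chunk.metadata, (bytes, bytearray)) else chunk.metadata
requests.post(f"https://api.deepaffex.ai/measurements/{measurement_id}/data",
              headers={"Authorization": f"Bearer {token}"},
              json={"ChunkOrder": chunk.chunk_number,  # Assumed attribute and field names
                    "Action": chunk.action,            # e.g. "CHUNK::PROCESS" or "LAST::PROCESS"
                    "StartTime": chunk.start_time_s,
                    "EndTime": chunk.end_time_s,
                    "Duration": chunk.duration_s,
                    "Meta": meta,
                    "Payload": base64.b64encode(chunk.payload_data).decode("ascii")})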
Note: getChunkData may return None (or a nullptr in C++). This will happen if the library wasn't able to extract enough blood flow information from the frames that were passed in.
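Putting the steps above together, a frame loop could look roughly like the sketch below. It assumes OpenCV as the image source, a hypothetical face tracker object (my_face_tracker) that returns DFX Face objects for an image, and a hypothetical input file name; substitute your own tracker and timestamp source.
import cv2  # Assumption: OpenCV used as the image source

video = cv2.VideoCapture("video.mp4")  # Hypothetical input video
frame_number = 0
collector.startCollection()
while True:
    read_ok, image = video.read()
    if not read_ok:
        break
    frame_timestamp_ns = int(video.get(cv2.CAP_PROP_POS_MSEC) * 1_000_000)
    dfx_video_frame = dfxsdk.VideoFrame(image, frame_number, frame_timestamp_ns,
                                        dfxsdk.ChannelOrder.CHANNEL_ORDER_BGR)
    dfx_frame = collector.createFrame(dfx_video_frame)
    for dfx_face in my_face_tracker.trackFrame(image):  # Hypothetical face tracker
        dfx_frame.addFace(dfx_face)
    collector.defineRegions(dfx_frame)
    result = collector.extractChannels(dfx_frame)
    if result in (dfxsdk.CollectorState.CHUNKREADY, dfxsdk.CollectorState.COMPLETED):
        chunk_data = collector.getChunkData()
        if chunk_data is not None:
            chunk = chunk_data.getChunkPayload()
            # Send the chunk to the DeepAffex Cloud (see the sketch above)
        if result == dfxsdk.CollectorState.COMPLETED:
            break
    frame_number += 1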
Decoding results
Real-time results that are returned via the Measurements.Subscribe endpoint are in JSON format.
For older applications, the Collector also has a deprecated decodeMeasurementResult function that can decode the Protobuf-encoded binary results that are received on the WebSocket-only Measurements.Subscribe endpoint.
decoded_result = collector.decodeMeasurementResult(payload)
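Once decoded, values can be read from the result object, roughly as sketched below; the method names (getMeasurementDataKeys, getMeasurementData, getData) are assumptions based on older SDK examples and may differ in your version.
# Hypothetical sketch of reading a decoded result; method names are assumptions
for key in decoded_result.getMeasurementDataKeys():
    values = decoded_result.getMeasurementData(key).getData()
    print(key, values)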
This method will be removed in future versions of the library and you should rely on JSON results instead.
Some details about DFX Face objects follow in the next section.