A Novel Four-Dimensional Facial Assessment System
Existing automated objective grading systems either fail to capture the complex three-dimensional morphology of the face or fall short on usability and feasibility. Meanwhile, today's smartphone-integrated three-dimensional hardware and consumer red-green-blue depth (RGB-D) sensors can capture detailed four-dimensional facial data inexpensively and in real time, yet they have not been integrated into a practical system for routine clinical use.
A study by George A. Petrides, B.Med., M.D.(Hons.), et al., published in Plastic and Reconstructive Surgery (September 2022), evaluated the usability and feasibility of a proof-of-concept automated four-dimensional facial assessment system built around an RGB-D sensor (OpenFAS).
Details of the Study
The study used the Intel (Santa Clara, Calif.) RealSense SR300 depth sensor connected to a laptop running the OpenFAS application. Participants mimicked a sequence of facial expressions shown on the screen; each captured frame was landmarked, and anthropometric measurements were calculated automatically. To estimate landmark accuracy, the authors compared the positions of manually annotated ground-truth landmarks with the automatically placed landmarks.
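The study's actual pipeline is not published here, but the accuracy comparison it describes, taking the Euclidean distance between each automatically placed landmark and its manually annotated ground-truth position and then computing the fraction that fall within a tolerance, can be sketched as follows. The function name and the default threshold are illustrative (the 1.575 mm value simply mirrors the figure the study reports), not part of OpenFAS itself.

```python
import numpy as np

def landmark_accuracy(auto_pts, manual_pts, threshold_mm=1.575):
    """Compare automatic landmarks to manual ground truth.

    auto_pts, manual_pts: (N, 3) arrays of x/y/z coordinates in mm,
    one row per landmark, in matching order.

    Returns the per-landmark Euclidean error and the fraction of
    landmarks whose error is within threshold_mm.
    """
    auto_pts = np.asarray(auto_pts, dtype=float)
    manual_pts = np.asarray(manual_pts, dtype=float)
    # Euclidean distance between corresponding 3-D points.
    errors = np.linalg.norm(auto_pts - manual_pts, axis=1)
    fraction_within = float(np.mean(errors <= threshold_mm))
    return errors, fraction_within

# Toy example: three landmarks, two of which land within tolerance.
auto = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0], [3.0, 3.0, 3.0]])
manual = np.array([[0.5, 0.0, 0.0], [10.0, 5.0, 4.5], [3.0, 4.0, 3.0]])
errors, fraction = landmark_accuracy(auto, manual)
# errors -> [0.5, 2.5, 1.0]; fraction -> 2/3
```

In a real evaluation the landmark arrays would come from the sensor's automatic annotation and a clinician's manual annotation of the same frames; the sketch only shows the distance-and-threshold arithmetic.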
The study included 18 participants: nine healthy adults and nine patients with facial nerve palsy. Sessions lasted approximately 106 seconds each, and the system automatically annotated 61.8% of landmarks within roughly 1.575 mm of their ground-truth locations.
The authors concluded that OpenFAS is a usable and feasible automated four-dimensional facial assessment system for routine settings. While further studies are needed, this work lays the critical groundwork for a facial assessment system that addresses the limitations of existing devices.