Facial Recognition Technology in Medicine: A Use-Based Ethical Framework

Facial recognition technology is everywhere. Pew Research found that more than half of adults trust law enforcement to use facial recognition, while fewer trust tech companies, advertisers, and landlords with it. The data suggest not only that the user matters but that the use matters: tracking facial reactions to public ads and displays was the least popular use Pew cited, with only 15 percent approval. Technology has made society over-protective. Small- and large-scale surveillance are now common. The go-out-and-play generation of children has been replaced by a generation tracked by parents, findable on Snap Map and Find My iPhone. Privacy, even in the wilderness, is difficult to achieve. In the healthcare setting, though, the surveillance model is especially fraught with unsettling ethical dilemmas.

Safety or Surveillance?

In medicine, uses of facial recognition technology vary considerably, from check-in at an appointment to genetic analysis. App developers and proponents argue facial recognition is a patient safety tool. That argument rests on an alarming premise: are there enough patient mix-ups to justify such a need? I would assert that hospital bracelets, bar codes, and check-in processes such as a signature are effective. If a hospital is mixing up patients regularly, there are likely underlying causes and many more pressing ethical problems. Presumably, such a hospital would be sanctioned, shut down, or unable to afford the high-tech solution.

Photo by Anh Nguyen on Unsplash

Facial Recognition Technology in Surveillance of Emotion

Doctors could use facial recognition technology to better gauge the mood and emotional state of patients, parents, or caregivers. “Another case is to use facial recognition to evaluate facial cues to interpret the emotional state of patients including emotions such as anger, fear, disgust or sadness to aid in providing appropriate assistance.” To me, such a use would further tip an already uneven playing field. The hospital or doctor would have data beyond what they could discern just by looking (perceptive people sense anger and disgust without facial recognition technology). The person seeking or refusing care, receiving a diagnosis, or that person’s caregivers would not be equally equipped: they could assess the doctor’s mood or emotional state only with their senses and social skills, the old-fashioned way.
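This kind of emotion inference is not speculative; it is available off the shelf. The sketch below shows roughly how it might look, using the open-source DeepFace library. The image path, the distress categories, and the flagging rule are my own illustrative assumptions, not a description of any deployed hospital system.

```python
# A minimal sketch of off-the-shelf emotion inference using the
# open-source DeepFace library (pip install deepface).
# "waiting_room.jpg" and the flagging rule below are hypothetical.
from deepface import DeepFace

results = DeepFace.analyze(
    img_path="waiting_room.jpg",  # hypothetical image path
    actions=["emotion"],          # request emotion analysis only
    enforce_detection=False,      # do not error out if no face is found
)

# Recent DeepFace versions return one dict per detected face.
for face in results:
    emotion = face["dominant_emotion"]
    print(f"Detected dominant emotion: {emotion}")
    # A hospital system could silently flag "distress" like this,
    # which is exactly the assessment the text argues requires consent.
    if emotion in {"angry", "fear", "sad", "disgust"}:
        print("Flagged as potentially distressed")
```

That a few lines of freely available code can do this is the point: the asymmetry described above requires no special infrastructure.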

I consider mood and reactions to medical information private. Doctors tend to want to deliver results in a responsible way. In bioethics, a prevailing theme and open dilemma is the paternalistic notion that physicians should worry about patient reactions and protect patients by releasing less information or withholding some incidental genetic results. In bioethics literature and many discussions, a common thread of “what would they do with the information?” confirms the old-school view that doctors steeped in knowledge must be gatekeepers of information relevant to another person’s body. One article asserts that “normative rationality” and a lack of “genetic literacy” should counteract the “soft-paternalism” requiring disclosure of certain incidental findings. If facial recognition were added to assess the person as a receiver of information, doctors might release a more complete picture to someone confirmed to be objective, happy, and complacent than to someone who is inquisitive, angry, or sad.

It is unclear how any organization would gather consent for facial recognition technology used in this way. Without consent, it seems deceitful. Hospitals could assess a waiting room full of people, determine that some are more anxious, sad, or angry, and then use the data to prep the doctor delivering a diagnosis or surgical recap. I suspect no caregivers would consent to that. As a nonmedical use, non-consensual facial recognition to assess anyone’s mood seems to violate ethics and privacy. And if what is essentially a security camera is used to assess emotion, HIPAA is arguably inapplicable.

Photo by Mitchell Luo on Unsplash

Similarly, facial recognition technology could identify healthcare worker burnout, fatigue, or depression, but such workers may want privacy and may want to shield themselves from overbearing employers. It could work to their disadvantage if an employer forces a leave of absence, a mental health assessment, or a change in position.

In the medical realm, HIPAA covers pictures and images, so it essentially covers facial recognition technology to a degree. It is not clear whether HIPAA provides enough protection or whether there is a safe enough way to store personally identifiable images or templates. But in the arena of ascertaining mood or emotional state, HIPAA would generally not govern. HIPAA protects the patient seeking care and diagnosis; it was not designed for circumstances of unexpected surveillance.

Facial Recognition Technology and Patient Surveillance

Using facial recognition technology in apps to monitor patient medication compliance or safe habits at home, in nursing homes, and in hospitals expands the surveillance state and must be approached cautiously. While some people living alone might welcome such surveillance, many would not if they were aware of how much information is collected and how. Consent is crucial to invasive uses.

Photo 112100107 / Facial Recognition Technology © Kaspars Grinvalds | Dreamstime.com

How Medical Data Mixes with Wellness, Lifespan, and Cosmetics

Wellness programs within medicine may want to identify longevity markers in the broad population and in people seeking care. Facial recognition technology can assess weight, BMI, and blood pressure, but so can a scale, some math, and a blood pressure cuff. Facial averageness, facial adiposity, and skin condition contribute health information as well as arguably detrimental psychological information. For example, “attractiveness” is linked to facial adiposity and skin condition. Yet adiposity is also a marker for certain health outcomes, like severe or frequent colds and flu, blood pressure, longevity, BMI, and women’s physical and psychological health. Facial recognition technology could increase demand for cosmetic surgery if studies continue to link societal standards of attractiveness to data collected to predict health or diagnose disease.
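Research in this area typically maps a numeric representation of the face to a health measure. The sketch below is a minimal illustration of that pipeline, assuming the open-source face_recognition library for embeddings and a hypothetical training set of photo-and-BMI pairs; it is not the method of any product mentioned here.

```python
# Minimal sketch: estimating BMI from facial embeddings.
# Assumes `pip install face_recognition scikit-learn`; the training
# photos and BMI labels below are hypothetical placeholders.
import face_recognition
import numpy as np
from sklearn.linear_model import Ridge

def embed(path: str) -> np.ndarray:
    """Return a 128-dimensional embedding for the first face in the photo."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        raise ValueError(f"No face found in {path}")
    return encodings[0]

# Hypothetical labeled data: file paths paired with measured BMI values.
train_paths = ["p1.jpg", "p2.jpg", "p3.jpg"]
train_bmi = np.array([22.4, 27.9, 31.2])

X = np.stack([embed(p) for p in train_paths])
model = Ridge(alpha=1.0).fit(X, train_bmi)

# The same single photo a wellness program collects at check-in could
# then yield a BMI estimate with no scale and no consent conversation.
estimate = model.predict(embed("new_patient.jpg").reshape(1, -1))
print(f"Estimated BMI: {estimate[0]:.1f}")
```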

Improved Genetic Diagnosis

Algorithms that detect the minor facial features associated with a genetic disorder are more effective than the trained eye of a physician. Face2Gene is an app that allows doctors to identify genetic differences. Genetic syndromes like Cushing’s syndrome, Cornelia de Lange syndrome, Mowat-Wilson syndrome, and many others could be diagnosed without a human interaction. When facial recognition is used to diagnose genetic differences, syndromes, or diseases, people would have the opportunity to consent. The Genetic Information Nondiscrimination Act (GINA) does not cover genetic data derived from facial recognition, and it is unclear whether the Americans with Disabilities Act would protect people diagnosed by facial recognition technology. Privacy would become even more important to prevent employment discrimination.
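The systems in this space are proprietary, but the general shape is a standard image classifier over syndrome labels. The sketch below is a hypothetical stand-in using PyTorch and torchvision; the syndrome list, the randomly initialized weights, and the input photo are all placeholders, since a real screening tool would require clinically curated training data.

```python
# Hypothetical sketch of a syndrome-screening classifier.
# Assumes `pip install torch torchvision pillow`. The model here is
# randomly initialized; a real tool would load clinically trained weights.
import torch
from PIL import Image
from torchvision import models, transforms

SYNDROMES = ["Cornelia de Lange", "Mowat-Wilson", "Cushing's", "None detected"]

# Standard ImageNet-style preprocessing for a face photo.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)  # placeholder backbone, untrained
model.fc = torch.nn.Linear(model.fc.in_features, len(SYNDROMES))
model.eval()

image = Image.open("patient_face.jpg").convert("RGB")  # hypothetical photo
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

# Ranked probabilities: the kind of output a clinician would review.
for name, p in sorted(zip(SYNDROMES, probs.tolist()), key=lambda t: -t[1]):
    print(f"{name}: {p:.1%}")
```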

Photo by Online Marketing on Unsplash

In the future, people may just check a “smart mirror” or look into a screen for broad diagnostics, and a doctor, if one were assigned at all, would play a smaller personal role. The algorithms behind facial recognition technology outperform clinicians. To some people (probably me), the ability to see a machine instead of a doctor would be a welcome development. To many others, machine diagnosis would feel impersonal. In the genetics realm, where informed consent is easily achieved, better diagnosis, especially when a cure is available, is a welcome addition to the tools at the disposal of both doctor and patient.

How to Create a Use Test—What Ethical Parameters Matter?

Diagnosis and Treatment

Not every use of facial recognition technology fits a simple category, and within categories, some uses will be better than others. One test of a use’s moral validity must be whether it would diagnose and help cure a disease more efficiently than, or at least as efficiently as, a current diagnostic tool. That is, alleviating human suffering is a clearly better use than encouraging cosmetic change or adding to the widespread surveillance state already in place. People must have the opportunity to control how the image or template is stored, as a face cannot be deidentified.

Within this category of use test, doctors must consider alternative ways to obtain the information. Facial recognition technology is less bodily intrusive than blood tests or physical examinations. It could be costly to implement, but it could save money if it eventually reduces the need for clinicians and diagnosticians. Future-of-work arguments will apply, but for now doctors use the technology, and biometrics, bioinformatics, and jobs that mix medicine and tech seem to be increasing.

For wellness, people should have the opportunity to consent if they would like their doctor to analyze their longevity potential or the many crossovers between health and beauty. If a person agrees to the use of facial recognition technology, they must be able to control what data is collected.
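Taken together, the criteria in this section amount to a checklist that can, at least in outline, be written down. The toy sketch below encodes them in Python; the field names are my own framing of the article’s test, not an established standard.

```python
# Toy encoding of the use test proposed above. The fields mirror the
# article's ethical parameters; their names are illustrative only.
from dataclasses import dataclass

@dataclass
class ProposedUse:
    alleviates_suffering: bool        # diagnosis/treatment, not cosmetic or surveillance
    at_least_as_efficient: bool       # matches or beats the current diagnostic tool
    informed_consent_obtained: bool   # express consent from the person imaged
    subject_controls_storage: bool    # a face cannot be deidentified, so storage control matters
    subject_controls_data: bool       # the person decides what is collected

def passes_use_test(use: ProposedUse) -> bool:
    """Every ethical parameter must hold; any failure rejects the use."""
    return all([
        use.alleviates_suffering,
        use.at_least_as_efficient,
        use.informed_consent_obtained,
        use.subject_controls_storage,
        use.subject_controls_data,
    ])

# Example: consented genetic screening that outperforms current tools.
screening = ProposedUse(True, True, True, True, True)
print(passes_use_test(screening))  # True

# Example: silent waiting-room emotion analysis fails on consent alone.
waiting_room = ProposedUse(False, False, False, False, False)
print(passes_use_test(waiting_room))  # False
```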

The Big Picture: Facial Recognition Technology and Surveillance

As a society, we must push back against widespread surveillance. In the healthcare setting, if a waiting room or a doctor’s office has facial recognition technology, people seeking care, parents, and caregivers must be alerted that their images could be collected, especially for an unwanted analysis of emotion. Law enforcement, corporations, and hospitals have different priorities, yet each can be complicit in the others’ motives. Hospitals that want emotional information, or that track patients’ every expression in the name of safety, may find themselves subpoenaed or forced to aid law enforcement. Monitoring people is serious business. In the healthcare realm, monitoring for anything outside the scope of medicine without the express consent of the person surveilled would undermine trust and respect.

Feature Image ID 207892406 © BiancoBlue | Dreamstime.com
