# How Myers-Briggs and AI are being misused

Say you’re a job-seeker who’s got a pretty good idea of what employers want to hear. Like many companies these days, your potential new workplace will give you a personality test as part of the hiring process. You plan to give answers that show you’re enthusiastic, a hard worker and a real people person.

Then they put you on camera while you take the test verbally, and you frown slightly during one of your answers, and their facial-analysis program decides you’re “difficult.”

Sorry, next please!

This is just one of many problems with the increasing use of artificial intelligence in hiring, contends the new documentary “Persona: The Dark Truth Behind Personality Tests,” premiering Thursday on HBO Max.

The film, from director Tim Travers Hawkins, begins with the origins of the Myers-Briggs Type Indicator personality test. The mid-20th century brainchild of a mother-daughter team, it sorts people based on four factors: introversion/extraversion, sensing/intuition, thinking/feeling and judging/perceiving. The quiz, which has an astrology-like cult following for its 16 four-lettered “types,” has evolved into a hiring tool used throughout corporate America, along with successors such as the “Big Five,” which measures five major personality traits: openness, conscientiousness, extraversion, agreeableness and neuroticism.
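
For readers curious about the arithmetic behind those "16 four-lettered types," here is a minimal sketch (not from the film, and not the MBTI's actual scoring) showing how four binary dichotomies combine into 16 labels:

```python
# Illustrative only: four binary dichotomies yield 2**4 = 16 four-letter types.
from itertools import product

DICHOTOMIES = [
    ("E", "I"),  # extraversion / introversion
    ("S", "N"),  # sensing / intuition
    ("T", "F"),  # thinking / feeling
    ("J", "P"),  # judging / perceiving
]

types = ["".join(combo) for combo in product(*DICHOTOMIES)]
print(len(types))   # 16
print(types[:4])    # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']
```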

“Persona” argues that the written test has certain baked-in prejudices; it can, for example, discriminate against applicants who aren’t familiar with the type of language or scenarios the test uses.

And according to the film, incorporating artificial intelligence into the process makes things even more problematic.

The technology scans written applications for red-flag words and, when an on-camera interview is involved, screens applicants for facial expressions that might contradict responses. 
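
As a rough, hypothetical illustration of the "red-flag words" screening described above (the word list and logic here are invented for this sketch and do not reflect any actual vendor's system):

```python
# Hypothetical sketch only: a naive keyword scan of a written application.
# The flag list is invented for illustration; no real vendor's list is implied.
RED_FLAG_WORDS = {"fired", "lawsuit", "conflict"}

def scan_application(text: str) -> list[str]:
    """Return any red-flag words found in the application text."""
    tokens = {word.strip(".,;:!?\"'").lower() for word in text.split()}
    return sorted(tokens & RED_FLAG_WORDS)

print(scan_application("I left my last role after a conflict with management."))
# ['conflict']
```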

Four generations of Briggs Myers women. (HBO Max)

“[It] operates on 19th-century pseudo-scientific reasoning that emotions and character can be standardized from facial expressions,” Ifeoma Ajunwa, associate professor of law and director of the AI Decision-Making Research Program at the University of North Carolina School of Law, told The Post via email.

Ajunwa, who appears in the film, says the potential for bias is huge. “Given that the automated systems are usually trained on white male faces and voices, the facial expressions or vocal tones of women and racial minorities may be misjudged. In addition, there is the privacy concern arising from the collection of biometric data.”

One widely used recruiting company, HireVue, analyzed applicants’ “facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated ‘employability’ score,” the Washington Post reported. The company announced just last month that it has stopped the practice.

Although HireVue said that “visual analysis no longer significantly added value to the assessments,” the move followed an outcry over the technology’s potentially damaging effects.

Cathy O’Neil is a data science consultant, author of “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” and one of the experts interviewed in “Persona.” Her company, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), provided an audit of practices at HireVue following their announcement.

“No technology is inherently harmful; it is just a tool,” she told The Post via email. “But just as a sharp knife can be used to cut bread or kill a man, facial recognition could be used to harm individuals or communities . . . This is particularly true because people often assume that technology is objective and even perfect. If we have blind faith in something deeply complex and profoundly opaque, that is always a mistake.”

A typical question from the Myers-Briggs personality test. (HBO Max)

There has been a spate of legislative action around the use of facial-analysis algorithms in recent years, but New York City is the first to introduce a bill that would specifically regulate their use in the hiring process. It would compel companies to disclose to applicants that they are using the technology and to conduct an annual audit for bias.

Ajunwa, however, thinks that doesn’t go far enough. The bill is “a necessary first step to preserving the civil liberties of workers,” she said, but “what we need are federal regulations that attach to federal anti-discrimination laws and which would apply in all states, not just New York City.”

To those who knew Isabel Briggs Myers, seeing the test used, hand in hand with AI, to ruthlessly determine whether people are “hirable” seems a far cry from her original intention: to help users find their true callings.

As one of Briggs Myers’ granddaughters says in the film, “I think there are ways it’s being used that she would want to correct.”
