
Why you should be very wary of AI that ‘processes’ college video applications

To the graduating class of 2021, I have but one piece of advice for you: watch out for snake oil.

Artificial intelligence is a backbone technology that’s as important as the internet or electricity. But it’s also a field so rife with scams that even institutions of higher learning are getting duped by smooth-talking marketing teams and BS AI.

The long and short of it is that colleges and universities are implementing AI systems to process admissions videos.

Here’s a quote from an executive at one of these companies taken from an article in The Hill today:

Hopeful students applying to institutions that partner with Kira undergo a video interview process in which they will not encounter another live person. Instead, video- and text-based prompts lead applicants through a series of questions. Their answers are then used to evaluate things like leadership potential, verbal and written communication skills, comprehension of key concepts, drives and motivations, and professionalism.

The company claims it’s trying to put the “humanity” back into applications. It says its AI systems can speed up the process by helping overwhelmed administrators out.

Here’s more:

When schools express interest in it, they are presented with an AI-based tool that takes video data, and analyzes personality traits and behaviors. We take the very same footage that you view as an admissions person to get a sense of the applicant, and we have them run it through a series of algorithms. Schools are then able to run the algorithms, which give them AI-based data to then compare to what their human reviewers said.

The idea behind the technology is to help the human reviewer ask questions of themselves: Did I see these traits or qualities? Am I missing something? So the emphasis is not on using AI to replace the human aspect of the process. Our whole focus is on helping the human be a better evaluator of other humans.

Snake oil 2.0

Right up front, here’s the big problem: AI can’t do half the crap it’s purported to do, and it can’t do any of what the aforementioned company appears to be peddling. A good rule to remember is that AI cannot do anything a person couldn’t do given enough time.

AI’s really good at doing things like sifting through a million images and figuring out which ones are cats. Humans are much, much better at these kinds of tasks but it takes us a really long time. The benefit to using AI for these tasks is that it speeds things up.

This is called automation and, usually, it’s a good thing.
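
To make that concrete, here’s a minimal sketch of benign automation, assuming Python with torchvision and Pillow installed; the pretrained ResNet-50, the photos/ folder, and the cat class indices are illustrative choices of mine, not anything a vendor ships. The key property is that the task is verifiable: a human could check every single answer, the machine is just faster.

```python
# A sketch of "good" automation: sort a pile of photos by whether a
# pretrained ImageNet classifier thinks they contain a cat.
# Assumes torchvision + Pillow are installed; the folder name is made up.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

# ImageNet class indices 281-285 are the domestic cat categories.
CAT_CLASSES = set(range(281, 286))

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def is_probably_a_cat(path: Path) -> bool:
    """True if the model's top prediction is one of the cat classes."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        top_class = model(batch).argmax(dim=1).item()
    return top_class in CAT_CLASSES

cat_photos = [p for p in Path("photos").glob("*.jpg") if is_probably_a_cat(p)]
print(f"Found {len(cat_photos)} likely cat photos")
```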

The problem is when people fudge the lines about what humans and AI can and can’t do. Not only do people get discriminated against, which can cause real harm, but the practice absolutely muddies up the market with AI snake oil.

And most often, this comes in the form of predictive AI or emotion recognition.

Predictive AI is the most abused, lied-about, and dangerous AI paradigm there is, bar none. You can use predictions for good: for example, predicting what a customer might purchase in order to keep the right stock in your warehouse. The downside here is that the AI could be wrong and you might overstock.

But any time predictive AI or emotion recognition is used in a situation where humans could be adversely affected or groups of humans could be disproportionately affected, it’s impossible to implement and use ethically.

Here’s why

Humans cannot predict the future, nor can we tell what another human is thinking by looking at their face. There is no mathematical or scientific method by which “company fit” or “sincerity” can be determined empirically. The only way you can be sure whether someone will be a good fit is to hire them and wait around until they either don’t fit or an arbitrary amount of time has passed.

This means the algorithms aren’t using science and math to determine anything; they’re doing what I like to refer to as “counting the aliens in the lemons.”

That’s an analogy for how AI is trained, and it goes like this:

I can predict with 100% accuracy how many lemons in a lemon tree have aliens from another planet in them.

Because I’m the only person who can see the aliens in the lemons, I’m what you call a “database.” If you wanted to train an AI to see the aliens in the lemons, you’d need to give your AI access to me.

I could stand there, next to your AI, and point at all the lemons that have aliens in them. The AI would take notes, beep out the AI equivalent of “mm hmm, mm hmm” and start figuring out what it is about the lemons I’m pointing at that makes me think there’s aliens in them.

Eventually the AI would look at a new lemon tree and try to guess which lemons I would think have aliens in them. If it were 70% accurate at guessing which lemons I think have aliens in them, it would still be 0% accurate at determining which lemons have aliens in them.

Because lemons don’t have aliens in them.
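
If you’d rather see the analogy in code, here’s a hedged sketch using entirely synthetic data; scikit-learn and the made-up “lemon features” are my own choices, not anyone’s product. The score the model reports is agreement with the labeler’s arbitrary opinions, and nothing more.

```python
# "Counting the aliens in the lemons": a model trained on one labeler's
# arbitrary judgments can only ever learn to reproduce those judgments.
# All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Measurable "lemon features": size, colour, number of blemishes.
lemons = rng.normal(size=(1_000, 3))

# The labeler's "this one has an alien in it" calls: an arbitrary rule
# plus noise. There is no underlying fact being captured here.
labels = (lemons[:, 0] + 0.5 * lemons[:, 1]
          + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    lemons, labels, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
agreement = model.score(X_test, y_test)

print(f"Agreement with the labeler: {agreement:.0%}")
print("Accuracy at finding actual aliens in lemons: 0% (there are none)")
```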

The point is: we can’t determine how “professional” a college student is or whether they have “leadership” skills by watching a video of them, and neither can an AI. Professionalism is subjective.

As an example, I have tattoos on my hands and neck. I also struggle with eye contact, hate touching people to shake hands, and despise small talk. Yet, I also have years of experience leading teams in high stress environments such as the Iraq war and US Navy counter-narcotics operations. Do you think any ten people would rate me the same when it comes to professionalism or company/school fit?

How do these systems perform on minority groups? Are they trained on databases filled with videos submitted by autistic individuals? Can the AI account for culturally-specific aphorisms, analogies, and experiences? Do students with diseases that affect speech patterns, motor control, or facial muscles have to declare their illnesses ahead of time and share their personal information in order to be treated fairly, or have the developers also trained their AI to compensate for these conditions?

Have the developers trained the AI on databases containing as many Black persons, trans persons, and religious persons wearing face coverings as they do white people?

The answer, in each case, is no, they haven’t. Even when these systems are built by a diverse team of developers and trained on large datasets purported to be filled with diverse subjects, they’re full of bias and they’re entirely subjective. No human can tell if another human is actually being sincere or not: all we can do is guess.
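
The honest way to check claims like these is to stop reporting one headline number and break the evaluation down by group. Here’s a rough sketch of that kind of disaggregated check; the group names and records are invented for illustration, but the pattern it shows (a respectable overall score hiding a much worse one for a specific group) is exactly the kind of thing an audit is meant to surface.

```python
# Disaggregated evaluation: an overall score can look fine while a
# specific group fares far worse. Groups and records are invented.
from collections import defaultdict

# (group, model_prediction, reviewer_label) -- synthetic examples only.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, prediction, label in records:
    totals[group] += 1
    hits[group] += int(prediction == label)

overall = sum(hits.values()) / sum(totals.values())
print(f"Overall agreement: {overall:.0%}")  # looks passable in aggregate
for group in sorted(totals):
    # ...until you look at each group separately
    print(f"  {group}: {hits[group] / totals[group]:.0%}")
```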

Higher learning

When you build an AI system to do the guessing for you, the system is a snake oil scam. Just like “rose water” and cocaine can’t cure diphtheria, AI absolutely cannot determine which college applicants will be a good fit based on sentiment analysis.

These systems are meant to pass the buck. The companies that build them use fine print in the form of “human in the loop” jargon to make it seem like these are “tools” meant to aid humans in doing their jobs. And that makes no sense.

If humans are going to investigate every application that’s flagged “not worthy,” then what’s the point of the AI? To keep a human in the loop, you need to investigate every instance where the AI might be denying a human any privilege.

Worse, humans tend to trust machines more than themselves. This is especially true when they’ve paid money for that machine and been told it can magically tell what people were thinking and feeling.

[Study: People trust the algorithm more than each other]

The bottom line here is that AI that judges humans on anything that cannot be expressly represented in mathematics (unlike, say, our height or running speed) is snake oil.

It’s bad enough that there are humans out there who believe things such as “a strong handshake” are good indicators of an individual’s potential as a worker; now we’re automating subjective discrimination?

It’s embarrassing to know many prestigious institutions of higher learning are using this crap. They’re either in on the snake oil scam, or they’re victims of it too. Either way, it’s something students might want to consider before spending the time, energy, and money it takes to apply to the universities that use these systems.

