# How personalized AI could turn into a ‘frenemy’

Google reportedly plans to create an AI-based “life coach” to offer users advice on a range of life’s challenges — from navigating personal dilemmas to exploring new hobbies to planning meals.

Given that people already search the web for such advice, this may seem a natural extension of the main service Google already provides. But take it from an artificial intelligence researcher: The combination of generative AI and personalization that such an app represents is new and potent, and its placement in a position of intimate trust is troubling.

Yes, anxiety has greeted many recent developments in AI. Since the release of ChatGPT, many have worried about runaway rogue AIs. In March, more than 1,000 technology professionals, many AI pioneers among them, signed an open letter warning the public about this danger.

But most discussions about the risks of AI imagine a future in which hyper-capable AIs outdo humans at skills we think of as our forte. The rise of AI coaches, therapists and friends points to a different possibility. What if the most immediate risk from AI systems is not that they learn to outperform us, but that they become the greatest “frenemies” we have ever had?

For better or worse, AI systems are far from mastering many tasks that humans perform well. Building reliable self-driving cars has been much harder than computer scientists anticipated. ChatGPT can string together fluent paragraphs, but it isn’t close to crafting high-quality magazine articles or short stories.

On the other hand, long before ChatGPT arrived, we had behind-the-scenes AI algorithms that excelled at hooking us on the next viral video or keeping us scrolling just a little longer. Over the last two decades, these algorithms have given us ways to entertain ourselves endlessly and changed the face of our culture.

Personalized versions of ChatGPT-like AIs, embedded within a wide range of apps, will have the capabilities of these algorithms on steroids. Your Netflix movie recommender can only see what you do on Netflix; these AI-charged apps will read your emails and texts and even listen in on your private conversations. Combining this data with ChatGPT-scale artificial neural networks, they will often be able to predict your wants and needs better than your closest real-life friends. And unlike your human friends, they will always be just one click away, 24 hours a day.

But here’s the “frenemy” part: Just like earlier recommendation systems, these AI confidantes will ultimately be designed to generate revenue for their developers. This means they will have incentives to keep you clicking on ads, or to make sure you never cancel that subscription.

The ability of these systems to continually generate new content will worsen their harmful impact. These AIs will be able to use images and words created just for you to soothe, amuse and agitate your brain’s reward and stress systems. The dopamine circuits in our brains, shaped by millions of years of evolution, were never designed to resist an onslaught of continual stimulation tailored to our most intimate hopes and fears.

Add to this generative AI’s well-known struggles with truth. ChatGPT is notorious for lying, and your AI frenemies will be similarly unreliable narrators. At the same time, your perceived intimacy with them could make you less likely to question their authority.

Like friendships with humans who manipulate and lie, our relationships with our AI frenemies will often end in tears. Many of us could be controlled by these “tools” as the line between what we genuinely want and what the AI thinks we want grows ever blurrier. Some of us will be lost in a digital amusement park, disengaged from society or parroting AI-generated falsehoods. Meanwhile, as the AI race heats up, technology companies will be tempted to ignore the risks of their products. (Reportedly, Google’s AI safety team raised concerns about the AI life coach, but the project went ahead anyway.)

We are at an inflection point as personalized generative AI begins to take off, and it is imperative that we confront these challenges directly. The Biden administration’s AI bill of rights emphasizes the right to opt out of automated systems and the need for consent in data collection. But humans manipulated by powerful AI systems may not be able to opt out or meaningfully consent, and lawmakers need to recognize this fact.

Designing policies that limit the harms of our AI frenemies without hurting broader AI innovation requires careful discussion. But one thing is certain: cognitive agency — the ability to act on our genuine free will — is a fundamental aspect of being human. It is essential to both our pursuit of happiness and our citizenship in a democracy. We need to make sure that we don’t lose this right because of carelessly deployed technology.

Swarat Chaudhuri is a professor of Computer Science and the director of the Trustworthy Intelligent Systems Laboratory at the University of Texas at Austin. He is a member of the 2023 cohort of the OpEd Project Public Voices Fellowship. Follow him at @swarat.
