Meta plans to automate many of its product risk assessments

An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90% of updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR.

NPR says a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of any potential updates. Until now, those reviews have been largely conducted by human evaluators.

Under the new system, Meta reportedly said, product teams will be asked to fill out a questionnaire about their work and will then usually receive an “instant decision” with AI-identified risks, along with requirements that an update or feature must meet before it launches.

This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR it also creates “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”

In a statement, Meta seemed to confirm that it’s changing its review system, but it insisted that only “low-risk decisions” will be automated, while “human expertise” will still be used to examine “novel and complex issues.”
