Facebook and Instagram's parent company, Meta Platforms (META, Financial), has rolled out a new plan to tackle the rising problem of celebrity investment scams using deepfake adverts. Starting this month, Meta will test facial recognition on a sample of 50,000 celebrities and other public figures around the world. Public figures can opt out at any time, and the system is designed to speed up the identification and removal of fake advertisements.
The facial recognition check is triggered when Meta's AI systems flag a suspected scam ad: the images in the ad are compared against the public figure's profile photos on Facebook and Instagram. If a match is found and the ad is determined to be a scam, it is blocked immediately. This marks a change of tune from earlier this year, when Meta had scaled back its use of facial recognition over privacy concerns. Meta's director of global threat disruption, David Agranovich, said that facial data will be deleted once the comparison is complete, stressing the privacy protections applied to the data involved.
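Meta has not published implementation details, but the flow it describes (flag, compare, enforce, then discard facial data) can be sketched in outline. The following is a minimal illustration only, not Meta's actual code; the `embed_face` helper, the similarity threshold, and the data structures are all assumptions made for the example.

```python
import numpy as np

MATCH_THRESHOLD = 0.85  # assumed similarity cutoff, not a published Meta value


def embed_face(image_bytes: bytes) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model.

    Would return a unit-length feature vector for the most
    prominent face in the image.
    """
    raise NotImplementedError("placeholder for a real face-recognition model")


def review_flagged_ad(ad_image: bytes, profile_photos: list[bytes]) -> bool:
    """Compare the face in a suspected scam ad against a public
    figure's profile photos; return True if the ad should be blocked.

    Per Meta's stated policy, facial data is discarded once the
    one-off comparison finishes, whatever the outcome.
    """
    ad_vec = embed_face(ad_image)
    try:
        for photo in profile_photos:
            ref_vec = embed_face(photo)
            # Cosine similarity: both vectors are assumed unit-length.
            similarity = float(np.dot(ad_vec, ref_vec))
            if similarity >= MATCH_THRESHOLD:
                return True  # likeness match: treat the flagged ad as a scam
        return False  # no match: leave the ad for other review paths
    finally:
        del ad_vec  # embeddings are not retained after the check
```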
Meta is acting under growing social, political, and regulatory pressure to rein in scams that misuse the likenesses of well-known people to lure victims into fraudulent investments. High-profile figures impersonated in these scams include Martin Lewis and Gina Rinehart. The scams have caused significant financial losses and prompted legal action against Meta, notably by mining magnate Andrew Forrest.
Agranovich concedes that the technology may still miss some scams and that keeping pace with scammers' evolving tactics will take time. He adds that Meta continues to build new fraud-prevention tools and policies intended to protect both users and the public figures being impersonated.