Meta’s A.I. Chatbot Raises Privacy Concerns as It Expands Worldwide
Last month, Meta made headlines when it announced plans to expand its artificial intelligence services globally. The company informed users in Europe that, starting June 26, their public information would be used to train its A.I. services, including its popular chatbot.
The move sparked backlash among users, who questioned where the policy change would be rolled out next. While users in the United States are already accustomed to Meta using their public posts to train its A.I. models, the lack of specifics about how the data would be used raised red flags for privacy watchdogs.
Meta defended the move, saying that it complies with privacy laws and that the data gathered will enhance user experiences in specific regions. The company’s chatbot, powered by its Llama 3 model, is designed to respond to a wide range of prompts, much like other popular A.I. assistants such as Siri and Alexa.
Despite the backlash, Meta did not specify exactly how the public information would be used, deepening concerns about data privacy. In the U.S., users have no way to opt out of having their public social media posts used by Meta A.I., as no federal privacy law governs the practice.
For users in Europe, Meta provided instructions for opting out of data sharing through the Meta Privacy Center. Even so, watchdog groups such as NOYB have filed complaints in multiple European countries, citing concerns about the breadth of data use by Meta’s A.I. services.
As the debate over data privacy and the use of A.I. continues, users are left wondering about the implications of sharing their information with Meta’s expanding A.I. services.