Snap’s My AI has been under scrutiny from governments worldwide for quite some time. Now the Information Commissioner’s Office (ICO), the UK’s privacy regulator, has issued a warning over Snap’s AI chatbot, raising questions about the risks it poses to children’s privacy.

According to the report, the ICO’s concerns stem from Snap’s failure to adequately assess the privacy risks of My AI before its launch. While the notice does not mean Snap has breached data protection law, the regulator is concerned about the personal data of children aged 13 to 17, who fall under the Children’s Design Code, a set of data protection rules introduced in 2021. If Snap fails to address these concerns, the ICO could move to block the chatbot in the UK.

Why is Snap’s My AI a big privacy concern for the UK regulator?

Snap first introduced ‘My AI’ in February for Premium subscribers, giving them a virtual friend that offers advice and answers questions. After initially testing the feature with those subscribers, however, the company rolled it out to all users, including underage children.

Despite the company’s claims that the chatbot has strict moderation and safeguarding features, there have been multiple instances of the AI giving inappropriate advice. In one case, the chatbot advised a 15-year-old user on how to hide the smell of alcohol; in another, it offered advice about sexual experiences to a 13-year-old user.

Snap’s response

In response to the ICO’s concerns, Snap emphasized its commitment to user privacy and stated, “We are closely reviewing the ICO’s provisional decision. My AI underwent a thorough legal and privacy review process before becoming publicly available, consistent with our standard product development approach. We will continue to collaborate with the ICO to ensure they are comfortable with our risk assessment procedures.”
