Facebook's lead privacy regulator in Europe has now questioned the company, demanding detailed information about the voice-to-text feature in its Messenger app.

Bloomberg reported that Facebook has been using human contractors to transcribe app users' audio messages. Facebook is now under pressure to prove its compliance with EU privacy law. Facebook's privacy policy makes no explicit mention that actual people listen to the recordings. Facebook's help center page includes a "note" stating that "Voice to Text uses machine learning," but it does not disclose that people working for Facebook listen to the audio to power the feature.

A representative of the Irish Data Protection Commission confirmed ongoing engagement with Apple, Google, and Microsoft related to the processing of personal data in the form of manual transcription of audio recordings. He also noted that Facebook maintains its data processing is compliant with its GDPR obligations. The ever-increasing hype around AI voice assistants glosses over a far less high-tech backend: lashings of machine-learning marketing guff have been used to cloak the manual human work required for the technology to live up to its claims.

Following Bloomberg's report, Facebook confirmed it suspended human transcriptions earlier this month, joining Apple and Google in halting manual reviews of audio snippets for their respective voice AIs. Facebook was asked whether it had informed users that human contractors might transcribe their voice chats or audio messages in the Messenger app. The scrutiny is heightened by questions over whether Messenger users' consent was obtained for this form of data processing. The privacy regulators also confirmed that human reviews have been suspended.

The company has not yet responded to the questions in detail, but a spokesperson stated that, much like Apple and Google, Facebook paused human review of audio more than a week ago. Facebook also said that the audio snippets sent to contractors were masked and de-identified, and that audio was collected only when users had opted in to transcription on Messenger. In addition, Facebook said the data were used only to improve the AI's transcription performance, and reiterated that it never listens to people's microphones without device permission and explicit activation by users.
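For illustration only, the workflow Facebook describes, collecting clips solely from opted-in users and stripping identifiers before contractors see them, might look something like the minimal Python sketch below. The function and field names are hypothetical and are not drawn from any actual Facebook system or API.

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class AudioSnippet:
    user_id: str        # account identifier attached to the recording
    audio_bytes: bytes  # raw voice clip captured by the Messenger client

def prepare_for_review(snippet: AudioSnippet, opted_in: bool) -> Optional[dict]:
    """Gate a clip on the user's opt-in and strip identifiers before human review."""
    if not opted_in:
        return None  # users who never enabled voice-to-text are excluded entirely
    return {
        # replace the account id with a random, non-reversible token
        "review_id": uuid.uuid4().hex,
        "audio": snippet.audio_bytes,
        # note: no user_id, device id, or conversation metadata is forwarded
    }

# Example: an opted-out user's clip is never queued for transcription review
print(prepare_for_review(AudioSnippet("user-123", b"\x00\x01"), opted_in=False))
```

Even under such a scheme, the regulatory questions remain: whether users understood that humans would hear the clips, and whether the opt-in itself constitutes valid consent under the GDPR.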

Facebook recently devised a manipulative consent flow to nudge European users into switching on facial recognition technology, rolling back its earlier stance, adopted in response to previous regulatory intervention, of switching the tech off across the bloc. Bundling consent into a product's general terms and conditions is also unacceptable under EU privacy law: the bloc's General Data Protection Regulation requires explicit consent to be purpose-limited, as well as freely given and fully informed.

If Facebook is simply relying on legitimate interests to process Messenger users' audio snippets to enhance its AI's performance, it needs to balance its own interests against any risk to users' privacy. Voice AIs are especially problematic in this respect, as audio recordings can capture the personal data of non-users as well whenever other people are near a device. Multiple reports have already claimed that contractors employed by tech giants routinely overhear intimate details captured through a range of products. These risks include the ability to record audio and stream users' private data to the cloud for processing. The entire industry is now waiting to see how Facebook will deal with the privacy risks of audio transcription.
