
Zoom AI Training: Should Zoom Use Your Data Without Your Consent?

Zoom found itself facing heated backlash earlier this year after updating its terms of service to allow the use of customer content to train its AI models. The intense social media outrage led Zoom to quickly walk back the controversial policy change. But can Zoom be trusted to keep its word given its shaky track record on privacy? Let's closely examine Zoom's AI training controversy and what users need to know to protect their data.

Zoom's Rocky Relationship with Privacy

In March 2023, Zoom stunned its user base of over 300 million meeting participants by granting itself a "perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license" to use something it termed "customer content" to train its AI services and products. This included video, audio, transcripts, chat messages, whiteboard content – essentially anything generated during Zoom meetings.

Understandably, when the change came to light in mid-2023, privacy advocates and consumers alike reacted with outrage at the idea of Zoom profiting off users' private data, including sensitive business meetings, without offering compensation. The backlash across social media platforms like Twitter was swift and unrelenting, and Zoom completely reversed the controversial policy change in September 2023.

So, problem solved, right? Not so fast. Zoom's long history of questionable privacy practices provides plenty of reasons to doubt its latest promises.

In 2021, Zoom paid a hefty $85 million to settle a class action lawsuit alleging it had blatantly lied about offering end-to-end encryption for meetings. A Federal Trade Commission investigation also found that Zoom shared user data with third parties like Facebook between 2016 and 2021 without proper consent.

According to the FTC, Zoom accessed users' LinkedIn data to discover personal information and enabled thousands of unnecessary third-party apps to access data like users' OS types, devices, and network carriers.

If Zoom already deceived consumers about selling data in the past, it seems plausible it could go back on pledges not to use data for AI training. A close look at the updated privacy policy shows Zoom still claims the right to leverage customer content for product development and improvements. That appears to contradict its statements ruling out AI training uses.

Ultimately, Zoom's reversal only occurred in response to vocal public pressure, making it doubtful the policy would have shifted otherwise. Users have little reason to trust Zoom to self-regulate its use of private user data given its history of privacy violations.

Zoom's Expanding AI Capabilities

Zoom IQ, recently rebranded as Zoom AI Companion, currently offers fairly limited AI functionality, such as post-meeting summaries: meeting hosts can view an AI-generated recap of the discussion after the meeting concludes.

But Zoom has much bigger plans in store for its AI offering. The company aims to evolve the AI Companion into a sophisticated virtual assistant by 2024.

This upgraded AI assistant promises abilities like automatically pulling up information from past meetings, integrating with third-party apps, and more, as described by Zoom. It's intended to function like a robust AI chatbot integrated directly into the Zoom platform.

If Zoom keeps its promise not to leverage customer content, where will it get the data necessary to train such an advanced AI system? Third-party datasets? Synthetic data? For now, Zoom has not provided details, adding uncertainty around how its AI will develop.

No Opt-Out Options for Customers

Currently, Zoom customers have no ability to opt out of the company collecting data that could potentially be used for AI training purposes. Users must either consent to the terms of service or stop using Zoom entirely.

Meeting moderators do have the option to disable the existing AI Companion capabilities for specific meetings if desired. However, it remains unclear whether users will retain control over AI usage once the assistant functionality expands in 2024 as planned.

Many privacy advocates argue that opt-in consent, rather than merely the ability to opt out, should be required when customer data is used for AI training.

You're Training AI Every Day

Zoom is far from the sole tech giant leveraging user data to advance AI algorithms and products. Internet giants like Facebook, Google, Apple, Microsoft, and Amazon routinely gather vast troves of user data to improve their AI technologies.

According to Statista, Meta's Facebook alone boasted over 2.93 billion monthly active users as of Q4 2022. TikTok surpassed 1 billion monthly active users in 2021. All of these users constantly generate valuable behavioral data for AI – often without fully informed consent.

Even end-to-end encrypted messaging platforms like WhatsApp (owned by Meta) and Apple's iMessage collect metadata about user chats that could be mined for AI. Alternatives like Signal collect far less and offer more privacy protection.

In fact, our data is continuously harvested to feed the voracious appetite of Big Tech's AI empires. And with no easy opt-outs available, users have limited recourse beyond avoiding services completely or reducing use.

How Can You Better Protect Your Data from AI?

Until regulators address the mass harvesting of private data for AI training, individuals must take privacy into their own hands. Here are 5 expert tips to avoid feeding the AI beast:

1. Use Private Browsers

Switch from traditional browsers like Chrome or Safari to privacy-focused options like Brave or the Tor Browser. These browsers help block trackers and data mining, cutting down on your digital footprint.

2. Choose Encrypted Messaging Apps

Instead of Messenger or iMessage, use end-to-end encrypted messaging apps like Signal (or Telegram's opt-in Secret Chats), whose contents can't be read by the companies that run them. Keep private chats more private.

3. Limit Social Media App Use

Delete social media apps from your phone and only access accounts through web browsers. The apps collect far more personal data than browsers.

4. Never Share Sensitive Info with AI

Be wary of what you share with AI chatbots. They don't need your personal details to function, so provide as little identifying information as possible.
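One concrete habit is to scrub obvious identifiers from text before pasting it into a chatbot. Here is a minimal sketch in Python; the regex patterns and placeholder labels are illustrative, not a complete PII filter:

```python
import re

# Illustrative patterns only -- they catch common email and phone
# formats, not every kind of identifying information.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Contact me at jane.doe@example.com or +1 555-123-4567."))
# → Contact me at [email removed] or [phone removed].
```

A simple pass like this won't catch names, addresses, or account numbers, so treat it as a backstop, not a substitute for thinking before you paste.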

5. Use a VPN

A trusted VPN encrypts your traffic, masks your IP address, and in many cases blocks ad trackers. This makes it much harder for tech companies to profile you and access your data.

In summary, advancing AI on private user data without consent is concerning. Until companies like Zoom prove themselves ethical stewards of customer information, smart internet users will remain rightfully wary. Stay vigilant and use every tool available to guard your privacy.


Streamr Go

StreamrGo is all about privacy: protecting your privacy online through stronger security and better standard privacy practices.