Don't want Gmail to sift through your emails? Too late, their AI is going to do it.

Google's Gemini AI is coming soon to Gmail where it will be able to craft AI generated responses, summarize conversations, and more.

A close up of an iPhone with Gemini app widgets on the screen.
If you ever dreamt of having a robot read, summarize, and draft your emails, your dreams (or nightmares) are about to come true. Google is planning to blend Gemini AI into their Google Workspace platform, which means their Large Language Model (LLM) will get full access to your emails and more. Ready or not, here comes Gemini.

It looks like the AI hype bubble continues to inflate, with more and more tech companies looking for problems they believe their AI might solve. Google is no different. Following OpenAI's trail, Google, Microsoft, and Amazon are all scrambling to get their AI products into the hands of customers. One major push in this trend is to build LLMs into existing products like productivity apps and office suites, regardless of user interest. Even some privacy-respecting companies like DuckDuckGo are getting in on the action by introducing DuckDuckGo AI Chat. While DDG makes it clear that they do not save your chats with their robot and that your data is not used for training, Big Tech companies are not so transparent about how your data is being used.

Whether or not you actually wanted these AI features, they are being served up regardless. What threats does the LLM wave pose to your privacy, and is it really a good idea to grant these models access to all your emails? Let's talk about it!

What is Google Gemini?

Originally named Bard, Google's Gemini is an AI chatbot created in response to the explosive popularity of OpenAI's ChatGPT. Google's frantic rush to develop a competing product even prompted an emergency meeting with co-founders Sergey Brin and Larry Page, despite both having stepped back from their executive roles in 2019. The chatbot is built on Google's large language model, also named Gemini, which has a somewhat questionable development history: it was trained on transcripts of YouTube videos, leading to concerns about copyright violations. And even that was not enough: to gather sufficient training data, Google looked to their own massive global trove of human communications. We shouldn't be asking "how do I use Google's Gemini AI?" but rather "how is Gemini using me and my data?"

Updates to their privacy policy show that Google is using publicly available information to train the LLM writing bot:

"For example, we use publicly available information to help train Google's AI models and build products and features like Google Translate, Bard and Cloud AI capabilities."

This raises the question of whether your private information also played a role in training Gemini/Bard. The chatbot itself claimed to have been trained on Google Search and Gmail data, although Google was quick to dispute this digital Freudian slip.

These suspicions only deepen following Google's announcement of new AI features for their Google Workspace products, which will let users have Gemini draft emails, summarize conversations, and more. With all of these features freely available in your Google account, what actually happens to your data if you grant the AI access to your personal information?

LLMs and your email

In their press release, Google showed off a slideshow featuring a prominent big blue button nestled into the top corner of an open Gmail account. With the click of a button, Google's chatbot can search through your entire email history to create conversation summaries with certain contacts, draft automated replies, and even generate pricing comparisons. The feature is called "Gemini Pro in Workspace Labs," and if you think it will be used by private individuals alone, think again. The business use of such a system seems enticing at first glance, but there are legitimate privacy concerns about it.

Personally, I don't want my proctologist's office granting Google's AI platform access to my personal medical information. This seems like an all-around shit idea. There is little available information about how medical data could be protected from this kind of tool, and that question must be answered before LLMs find their way into the medical industry. Concerns have also been raised about how much of customers' personally identifiable information will be exposed to the AI if businesses, looking to cut the cost of support or administrative staff, start implementing the chatbot widely.

Consequences for your privacy in the Google ecosystem

I don't need to keep beating a dead horse: your privacy is not a priority for Google. Targeted ads (even in Gmail), cross-site tracking across the internet, and the unknown use of your data to train AI models are just the tip of the Google privacy iceberg.

Adding generative artificial intelligence to this mixture doesn't paint a better picture.