AI is, for some reason, being forced on us in almost every facet of life, from phones and apps to search engines and even drive-throughs. The fact that your web browser may now come with its own built-in AI assistant chatbot shows just how differently some people search for and consume information on the internet compared with only a few years ago.
Yet AI tools are increasingly asking for sweeping access to your personal data under the guise of needing it to work. This kind of access is not normal, and it shouldn't be normalized.
Not so long ago, you would have been right to question why a seemingly harmless free “flashlight” or “calculator” app in the app store was requesting access to your contacts, photos, and even real-time location data. These apps may not need that data to function, but they will request it if they think they can make a buck or two by monetizing yours.
These days, AI isn’t that different.
Take, for example, Comet, Perplexity's new AI-powered web browser. Comet lets users find answers with its built-in AI search engine and automate routine tasks, like summarizing emails and calendar events.
In a recent hands-on with the browser, TechCrunch found that when Perplexity requests access to a user's Google Calendar, the browser asks for a broad swath of permissions to the user's Google account, including the ability to manage drafts and send emails, download your contacts, view and edit events on all of your calendars, and even take a copy of your company's entire employee directory.
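For readers curious what a consent request like that looks like under the hood, here is a minimal sketch in Python of the kind of Google OAuth flow an app could use to obtain permissions matching that description. The specific scopes and the client_secret.json filename are illustrative assumptions, not a confirmed list of what Comet requests.

```python
# Minimal sketch (not Perplexity's actual code) of a Google OAuth consent
# request matching the permissions described above. Requires:
#   pip install google-auth-oauthlib
from google_auth_oauthlib.flow import InstalledAppFlow

# Standard Google OAuth scopes; the exact set Comet requests is an
# assumption here, mapped from TechCrunch's description.
SCOPES = [
    "https://www.googleapis.com/auth/gmail.compose",       # manage drafts, send email
    "https://www.googleapis.com/auth/contacts.readonly",   # download contacts
    "https://www.googleapis.com/auth/calendar",            # view/edit events on all calendars
    "https://www.googleapis.com/auth/directory.readonly",  # read the org's employee directory
]

# "client_secret.json" is a placeholder for the app's OAuth client credentials.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
creds = flow.run_local_server(port=0)  # opens the consent screen in the user's browser
print("Scopes the user just granted:", creds.scopes)
```

The crux of the concern is that a single tap on “Allow” grants every one of these scopes at once.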

Perplexity says much of this data is stored locally on your device, but you're still granting the company the right to access and use your personal information, including to improve its AI models for everyone else.
Perplexity isn't alone in asking for access to your data. There's a trend of AI apps that promise to save you time by transcribing your calls or work meetings, for example, but that require an AI assistant to access your private real-time conversations, your calendars, contacts, and more. Meta, too, has been testing the limits of what its AI apps can ask for access to, including tapping into photos saved in a user's camera roll that haven't been uploaded yet.
Signal president Meredith Whittaker recently likened using AI agents and assistants to “putting your brain in a jar.” Whittaker explained how some AI products promise to handle all kinds of mundane tasks, like reserving a table at a restaurant or booking a ticket to a concert. But to do that, the AI will say it needs permission to open your browser to load the website (which can give the AI access to your saved passwords, bookmarks, and browsing history), a credit card to make the reservation, and your calendar to mark the date.
There are serious security and privacy risks associated with using AI assistants that rely on your data. In allowing access, you're instantly and irreversibly handing over an entire snapshot of your most personal information as of that moment in time: your inbox, your messages, your calendar entries dating back years, and more. All of this to carry out a task that ostensibly saves you time, or, to Whittaker's point, saves you from having to actively think about it.
You're also granting the AI agent permission to act autonomously on your behalf, which requires you to place an enormous amount of trust in a technology that is already prone to getting things wrong or flatly making things up. Using AI also means trusting the profit-seeking companies developing these products, which rely on your data to try to make their AI models perform better. And when things go wrong (and they do, a lot), it's common practice for humans at AI companies to look over your private prompts to figure out why things didn't work.
From a security and privacy point of view, a simple cost-benefit analysis of connecting AI to your most personal data just isn't worth giving up access to your most private information. Any AI app asking for these levels of permissions should set off alarm bells, just like the flashlight app that wants to know your location at any moment in time.
Given the reams of data you're handing over to AI companies, ask yourself whether what you get in return is really worth it.