Two Chairs, One Strategy: AI vs. Privacy


Gartner reports that 73% of enterprises experienced at least one AI-related security incident in the past 12 months, at an average cost of $4.8 million per breach. I believe this happens largely because most AI developers either don’t publish a proper privacy policy or build their products around aggressive data collection, including browsing history, location, and personal documents. Unsurprisingly, this creates serious privacy and security risks.

So, how do we manage this dilemma — the growing tension between AI adoption and data privacy? Let’s take a closer look.

What’s going on?

I believe that privacy is a fundamental human right, and I can’t help but notice how data-related scandals around AI usage are snowballing.

For example, in December 2024, Italy’s data protection agency fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after concluding an investigation into the generative AI application’s use of personal data.

Academic researchers from UC Davis, University College London, and others published a systematic audit of generative AI browser extensions, revealing that many (including Harpa, MaxAI, Merlin, Sider, Monica, ChatGPT for Google, Wiseone, TinaMind, Copilot, and Perplexity) collect and transmit detailed browsing histories and personal data — even in “private” mode or on sensitive websites.

What’s more, Perplexity’s CEO has openly said that its new browser, Comet, will track everything users do online in order to sell ‘hyper-personalized’ ads.

This, sadly, brings us to a critical crossroads, where AI companies seem determined to speedrun every privacy mistake of the last two decades. Remember when “Don’t Be Evil” was more than just a cute slogan to abandon when ad revenue beckoned? I do.
It seems Silicon Valley collectively deleted the “Cambridge Analytica” and “Google’s $5 billion Incognito Mode lawsuit” folders from its memory banks. We’re watching AI companies sprint enthusiastically into the same privacy disasters that gave us congressional hearings, record fines, and a worldwide trust crisis.

What does this mean for your company?

First of all, it revives the threat of corporate data leaks. Here’s the uncomfortable truth: even if your company doesn’t officially use AI in its operations, chances are your employees are using it on corporate or personal devices.

You probably remember the now-infamous case where Samsung engineers uploaded confidential source code to ChatGPT. Can you be 100% sure that someone on your sales team isn’t pasting the details of a sensitive deal into a generative AI tool right now, just to speed up a client presentation? I’m not.

Even if you trust your team implicitly, there’s still a risk that a digital twin of an employee is built from their AI search patterns — one that could be used for hyper-targeted phishing or behavioral hacking to access internal systems.

And let’s not forget about regulatory risks. The EU AI Act, which came into force earlier this year, sets strict requirements for data tracking. Fines can reach up to 7% of global annual revenue or €35 million — whichever is higher. That’s more than the penalties under GDPR.

And since “don’t use AI” is not really an option (see above), the question becomes — what now?

How do we minimize the risks?

Here’s the good news: it is possible to build your own private AI assistant on top of generative models — one that ensures true anonymity by preventing users’ queries from being ‘fingerprinted’. We did exactly that at Aloha Browser last year. Because all queries are mixed together and stripped of external metadata, such as time zone, operating system, and IP address, it becomes impossible to identify individual users or harvest their information to build a digital twin for fraudulent purposes. Yes, this requires serious R&D investment — but it’s achievable. If we could do it, you probably can too.
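To make the idea concrete, here is a minimal sketch of that kind of query anonymization in Python. This is not Aloha’s actual implementation — the field names, the `Query` type, and the `anonymize`/`mix` helpers are all illustrative — but it shows the two steps described above: stripping identifying metadata from each query, then mixing a batch so individual sessions can’t be reconstructed.

```python
import random
from dataclasses import dataclass, field

# Hypothetical list of metadata fields that commonly fingerprint a user.
IDENTIFYING_FIELDS = {"ip_address", "time_zone", "operating_system",
                      "user_agent", "locale", "device_id"}

@dataclass
class Query:
    text: str
    metadata: dict = field(default_factory=dict)

def anonymize(query: Query) -> Query:
    """Drop every metadata field that could identify the user."""
    clean = {k: v for k, v in query.metadata.items()
             if k not in IDENTIFYING_FIELDS}
    return Query(text=query.text, metadata=clean)

def mix(queries: list[Query]) -> list[Query]:
    """Anonymize a batch and shuffle it, so queries reaching the model
    can no longer be linked to a single session or ordered timeline."""
    batch = [anonymize(q) for q in queries]
    random.shuffle(batch)
    return batch
```

In a real deployment the stripping would happen before queries ever leave the device, and the mixing on a relay that never sees the originating connection — the point is simply that each query arrives at the model carrying nothing that ties it back to a person.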

But if building your own stack isn’t feasible, opt for AI platforms that don’t use your data to train foundational models. Also look for tools that guarantee data privacy by design.

Equally important is training your employees so they understand the risks of AI apps and the consequences of creating digital twins of themselves without realizing it.

Finally, I believe it’s crucial for business leaders to work with regulators around the world — not only to help shape legislation like the EU AI Act and GDPR, but also to ensure these laws can be properly enforced in the face of rapidly evolving AI technologies. Because privacy isn’t the enemy of innovation — it’s its future.

About the author: Andrew Frost Moroz

Andrew Frost Moroz is the founder of Aloha Browser. With more than 20 years of experience managing mobile projects, he is a committed privacy advocate and an ambassador for user-centric design, which has helped Aloha win market share. His passion lies in making the seemingly convoluted appear simple, ensuring that technology empowers and enriches lives. Andrew knows the mobile market inside and out, having worked on ZigBee, 3G, and various VPN standards throughout his career and launched consumer and enterprise applications for companies such as Conde Nast and Allianz.

LinkedIn : www.linkedin.com/in/frostaloha/
