Two Chairs, One Strategy: AI vs. Privacy

Gartner has said that 73% of enterprises experienced at least one AI-related security incident in the past 12 months, with an average cost of $4.8 million per breach. I believe this happens largely because most AI developers either don’t publish a proper privacy policy or build their products around aggressive data collection—including browsing history, location, and personal documents. Unsurprisingly, this leads to serious privacy and security risks.
So, how do we manage this dilemma — the growing tension between AI adoption and data privacy? Let’s take a closer look.
What’s going on?
I believe that privacy is a fundamental human right, and I can’t help but notice how data-related scandals around AI usage are snowballing fast.
In December 2024, for example, Italy’s data protection agency fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after closing an investigation into the generative AI application’s use of personal data.
Academic researchers from UC Davis, University College London, and others published a systematic audit of generative AI browser extensions, revealing that many (including Harpa, MaxAI, Merlin, Sider, Monica, ChatGPT for Google, Wiseone, TinaMind, Copilot, and Perplexity) collect and transmit detailed browsing histories and personal data — even in “private” mode or on sensitive websites.
What’s more, Perplexity’s CEO openly said that its new browser, Comet, will track everything users do online to sell ‘hyper personalized’ ads.
This, sadly, brings us to a critical crossroads where AI companies seem determined to speedrun through every privacy mistake of the last two decades. Remember when “Don’t Be Evil” was more than just a cute slogan to abandon when ad revenue beckoned? I do.
It seems like Silicon Valley collectively deleted the “Cambridge Analytica” or “Google’s $5 billion Incognito Mode lawsuit” folders from their memory banks. We’re watching AI companies sprint enthusiastically into the same privacy disaster that gave us congressional hearings, record fines, and worldwide trust issues.
What does this mean for your company?
First of all, it raises the risk of corporate data leaks – again. Here’s the uncomfortable truth: even if your company doesn’t officially use AI in its operations, chances are your employees are using it on corporate or personal devices.
You probably remember the now-infamous case where Samsung engineers uploaded confidential source code to ChatGPT. Can you be 100% sure that someone on your sales team isn’t pasting the details of a sensitive deal into a generative AI tool right now, just to speed up a client presentation? I’m not.
Even if you trust your team implicitly, there’s still a risk of their digital twin being created based on AI search patterns — which could be used for hyper-targeted phishing or behavioral hacking to access internal systems.
And let’s not forget about regulatory risks. The EU AI Act, which came into force earlier this year, sets strict requirements for data tracking. Fines can reach up to 7% of global annual revenue or €35 million — whichever is higher. That’s more than the penalties under GDPR.
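To put that ceiling in perspective, here is a rough illustration. The function name and the sample revenue figure are my own; only the 7% / €35 million thresholds come from the Act’s penalty regime for the most serious violations.

```python
# Rough illustration of the EU AI Act's penalty ceiling for the most serious
# violations: 7% of global annual revenue or EUR 35 million, whichever is higher.
def ai_act_max_fine(global_annual_revenue_eur: float) -> float:
    return max(0.07 * global_annual_revenue_eur, 35_000_000)

# Example: a company with EUR 1 billion in worldwide revenue faces a cap of EUR 70 million.
print(ai_act_max_fine(1_000_000_000))  # 70000000.0
```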
And since “don’t use AI” is not really an option (see above), the question becomes — what now?
How do we minimize the risks?
Here’s the good news: it is possible to build your own private AI assistant on top of generative models, one that ensures true anonymity by preventing users’ queries from being ‘fingerprinted’. We did exactly that at Aloha Browser last year. Because all queries are mixed together and stripped of external data such as time zone, operating system, and IP address, it becomes impossible to identify individual users or to collect the information needed to build a digital twin for fraudulent purposes. Yes, this requires serious R&D investment, but it’s possible. If we can do it, you probably can too.
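For illustration only, here is a minimal sketch of that anonymizing-relay idea. It is not Aloha’s actual implementation; the endpoint URL, field names, and helper functions are hypothetical, and a production system would also need authentication, rate limiting, and far more careful batching.

```python
# A minimal sketch of an anonymizing relay: strip client metadata, mix queries
# from many users, and forward only the prompt text to the model endpoint.
# The endpoint and payload shape below are assumptions for illustration.
import json
import secrets
import urllib.request

UPSTREAM_URL = "https://llm.example.com/v1/completions"  # hypothetical endpoint

# Metadata fields that could fingerprint a user; the relay drops them all.
FINGERPRINT_FIELDS = {"ip", "user_agent", "timezone", "os", "locale", "device_id"}

def sanitize(query: dict) -> dict:
    """Keep only the query text; discard anything that identifies the client."""
    return {"prompt": query["prompt"], "request_id": secrets.token_hex(8)}

def relay_batch(queries: list[dict]) -> dict:
    """Mix queries from many users into one shuffled batch before forwarding,
    so the upstream model sees neither origin metadata nor per-user ordering."""
    batch = [sanitize(q) for q in queries]
    secrets.SystemRandom().shuffle(batch)
    req = urllib.request.Request(
        UPSTREAM_URL,
        data=json.dumps({"batch": batch}).encode(),
        headers={"Content-Type": "application/json"},  # no cookies, nothing tied to a user
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The key design choice is that nothing derived from the client (IP address, headers, device identifiers) ever leaves the relay; the upstream model only sees anonymized prompt text mixed with other users’ queries.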
But if building your own stack isn’t feasible, opt for AI platforms that don’t use your data to train foundational models. Also look for tools that guarantee data privacy by design.
Equally important is training your employees so they understand the risks of AI apps and the consequences of creating digital twins of themselves without realizing it.
Finally, I believe it’s crucial for business leaders to work with regulators around the world — not only to help shape legislation like the EU AI Act and GDPR, but also to ensure these laws can be properly enforced in the face of rapidly evolving AI technologies. Because privacy isn’t the enemy of innovation — it’s its future.
About the author: Andrew Frost Moroz
Andrew Frost Moroz is the founder of Aloha Browser. With more than 20 years of experience managing mobile projects, he is a committed privacy advocate and an ambassador for user-centric design, which has helped Aloha win market share. His passion lies in making the seemingly convoluted appear simple, ensuring that technology empowers and enriches lives. Andrew knows the mobile market inside and out, having worked throughout his career on ZigBee, 3G, and various VPN standards and launched consumer and enterprise applications for companies such as Conde Nast, Allianz, and others.
LinkedIn: www.linkedin.com/in/frostaloha/
