

How generative AI is leaking companies’ secrets

Beneath the surface of GenAI’s outputs lies a massive, mostly unregulated engine powered by data – your data. And whether it’s through innocent prompts or habitual oversharing, users are feeding these machines with information that, in the wrong hands, becomes a security time bomb.

A recent Harmonic report found that 8.5% of employee prompts to generative AI tools like ChatGPT and Copilot included sensitive data – most notably customer billing and authentication information – raising serious security, compliance, and privacy risks.

Since ChatGPT’s 2022 debut, generative AI has exploded in popularity and value – surpassing $25 billion in 2024 – but its rapid rise brings risks many users and organisations still overlook.

“One of the privacy risks when using AI platforms is unintentional data leakage,” warns Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa. “Many people don’t realise just how much sensitive information they’re inputting.”

Your data is the new prompt

It’s not just names or email addresses that get hoovered up. When an employee asks a GenAI assistant to “rewrite this proposal for client X” or “suggest improvements to our internal performance plan,” they may be sharing proprietary data, customer records, or even internal forecasts. If done via platforms with vague privacy policies or poor security controls, that data may be stored, processed, or – worst-case scenario – exposed.

And the risk doesn’t end there. “Because GenAI feels casual and friendly, people let their guard down,” says Collard. “They might reveal far more than they would in a traditional work setting – interests, frustrations, company tools, even team dynamics.”

In aggregate, these seemingly benign details can be stitched into detailed profiles by cybercriminals or data brokers – fuelling targeted phishing, identity theft, and sophisticated social engineering.

A surge of niche platforms, a host of new risks

Adding fuel to the fire is the rapid proliferation of niche AI platforms. Tools for generating product mock-ups, social posts, songs, resumes, or legalese are sprouting up at speed – many of them developed by small teams using open-source foundation models. While these platforms may be brilliant at what they do, they may not offer the hardened security architecture of enterprise-grade tools. “Smaller apps are less likely to have been tested for edge-case privacy violations or undergone rigorous penetration tests and security audits,” says Collard. “And many have opaque or permissive data usage policies.”

Even if an app’s creators have no malicious intent, weak oversight can lead to major leaks. Collard warns that user data could end up in:

  • Third-party data broker databases
  • AI training sets without consent
  • Cybercriminal marketplaces following a breach

In some cases, the apps might themselves be fronts for data-harvesting operations.

From individual oversights to corporate exposure

The consequences of oversharing aren’t limited to the person typing the prompt. “When employees feed confidential information into public GenAI tools, they can inadvertently expose their entire company,” explains Collard. “That includes client data, internal operations, product strategies – things that competitors, attackers, or regulators would care deeply about.”

Unauthorised shadow AI remains a major concern, but the rise of semi-shadow AI – paid tools adopted by business units without IT oversight – is increasingly risky. According to the Harmonic report, free-tier generative AI apps like ChatGPT were responsible for 54% of sensitive data leaks, owing to permissive licensing terms and a lack of enterprise controls.

So, what’s the solution?

Responsible adoption starts with understanding the risk – and reining in the hype. “Businesses must train their employees on which tools are OK to use, and what’s safe to input and what isn’t,” says Collard. “And they should implement real safeguards – not just policies on paper.

“Cyber hygiene now includes AI hygiene.”

“This should include restricting access to generative AI tools without oversight or only allowing those approved by the company.”

“Organisations need to adopt a privacy-by-design approach when it comes to AI adoption,” she says. “This includes only using AI platforms with enterprise-level data controls and deploying browser extensions that detect and block sensitive data from being entered.”
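The kind of safeguard Collard describes – tooling that detects and blocks sensitive data before it reaches a GenAI prompt – can be illustrated with a minimal sketch. The patterns and function names below are hypothetical examples for illustration only; real data-loss-prevention products use far richer detection rules (context analysis, checksums, machine learning) than a few regular expressions.

```python
import re

# Hypothetical patterns illustrating the kinds of sensitive data a
# prompt filter might look for before text is sent to a GenAI tool.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

def guard_prompt(prompt: str) -> str:
    """Pass a prompt through unchanged, or block it if it appears
    to contain sensitive data."""
    hits = scan_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked: possible {', '.join(hits)} detected")
    return prompt
```

In practice a browser extension or proxy would run a check like this on every outbound prompt, warning the user or blocking the request entirely – the same detect-before-send principle, just with enterprise-grade detection behind it.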

As a further safeguard, she believes internal compliance programmes should align AI use with both data protection laws and ethical standards. “I would strongly recommend companies adopt ISO/IEC 42001, an international standard that specifies requirements for establishing, implementing, maintaining and continually improving an Artificial Intelligence Management System (AIMS),” she urges.

Ultimately, companies that balance productivity gains against data privacy and customer trust can adopt AI responsibly.

As businesses race to adopt these tools to drive productivity, that balance – between ‘wow’ and ‘whoa’ – has never been more crucial.
