
When cyber attackers are using AI, your defence needs to do the same

Cyber threats have become increasingly sophisticated thanks to the use of Artificial Intelligence (AI), and attacks can now be executed rapidly and scaled beyond anything a human is capable of. Add in Machine Learning (ML), and attacks can adapt and evolve in real time, becoming more sophisticated and stealthier than ever. Traditional security measures are simply no longer effective; we need to counter offensive AI with defensive AI. More than that, however, we need to understand that humans remain the weakest link in any security chain, and awareness of threats and security measures is a critical component of any robust and resilient cyber defence strategy.

The human element

When it comes to social engineering, AI has changed the game for bad actors.

Attackers leverage AI and ML tools to analyse social media profiles, online activity, and other publicly available information to create increasingly tailored and convincing phishing messages. This vastly increases the likelihood of success.

AI can be used in several ways to counter this, from fully automated firewalls and policy management to segmentation, firmware updates, and more, but it is not a foolproof solution. Humans remain essential links in the security chain.

Education and awareness are critical. We need to be mindful of how we share personal information and what information we place online in the public domain to safeguard our own privacy. Regular training and awareness can help educate people on cyberattack techniques and best practices for adopting a security-driven culture.

In addition, the human element remains essential in verifying what AI tools are doing; while AI can speed processes and automate manual tasks, people provide the contextual understanding of nuance that AI struggles with.

People are also critical in ensuring that ethical considerations are taken into account when building AI models and when using and processing data. To counter today's threats, it has become vital to create 'human in the loop' defence models, where AI works in tandem with human analysts to respond to threats.
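A 'human in the loop' model can be sketched in a few lines: an automated scorer acts on high-confidence detections, while uncertain cases are routed to a human analyst rather than auto-blocked. This is an illustrative sketch only; the class names, thresholds, and fields are assumptions, not any vendor's API.

```python
# Illustrative "human in the loop" triage flow. An AI model is assumed to
# assign each event a risk score; only high-confidence detections are acted
# on automatically, while uncertain cases go to a human analyst queue.
from dataclasses import dataclass, field


@dataclass
class Event:
    source_ip: str
    risk_score: float  # 0.0 (benign) to 1.0 (malicious), from an AI model


@dataclass
class Triage:
    auto_block_threshold: float = 0.95    # high confidence: act automatically
    human_review_threshold: float = 0.60  # uncertain: defer to an analyst
    blocked: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def handle(self, event: Event) -> str:
        if event.risk_score >= self.auto_block_threshold:
            self.blocked.append(event)
            return "auto-blocked"
        if event.risk_score >= self.human_review_threshold:
            # The analyst supplies the contextual judgement the model lacks.
            self.review_queue.append(event)
            return "queued for human review"
        return "allowed"


triage = Triage()
print(triage.handle(Event("203.0.113.7", 0.98)))   # auto-blocked
print(triage.handle(Event("198.51.100.2", 0.72)))  # queued for human review
print(triage.handle(Event("192.0.2.10", 0.10)))    # allowed
```

The key design choice is the gap between the two thresholds: everything the model is unsure about lands with a person, keeping the human as the ultimate decision-maker for ambiguous cases.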

Collaboration is critical

Managed Security Service Providers (MSSPs) can be an invaluable asset for businesses in providing guidance on best practices and industry standards related to AI. This can help organisations understand the ethical implications of AI and security and develop appropriate strategies to address ethical risk. This includes assessing fairness and transparency in the design of both algorithms and processes. MSSPs can also assist with providing education and training, documenting and communicating processes, and implementing and managing solutions. This includes how algorithms are selected, trained, and deployed, as well as how data is collected, processed, and used for analysis.

MSSPs working in collaboration with regulatory bodies can help organisations align security objectives, ensure compliance, and implement ethical practices effectively. Working together is the key to successfully implementing AI, especially as part of a cyber defence strategy. It is essential to build trust between humans and AI, invest in robust defence systems, and monitor for emerging threats, and this requires human oversight as the ultimate decision-maker. By working together, organisations, MSSPs, and regulatory bodies can create collaborative ecosystems that foster solutions built on trust, enhancing security posture and mitigating cyber risk effectively.
