
    #CuratingtheFuture: Finding Cybersecurity’s Place in the World

    Johannesburg, 17 February 2025: We’ve become accustomed to passwords, fingerprint readers, and antivirus software. We use one-time pins to authenticate transactions and activate VPN software to stay safe on public networks. We are well aware of threats like identity theft, ransomware, and phishing emails fuelling a cybercrime scourge that cost the world just shy of $10 trillion in 2024.

    Yet cybersecurity remains marginalised as a technology topic. That is starting to change, and three trends show that cybersecurity is becoming more of a social and business cause.

    Risk, not response, will lead security

    Cyber threats continue to dominate risk registers and publications such as the Allianz Risk Barometer report and IRMSA’s Risk Report 2024. Companies and their leaders are very aware of and even proactive about cyber threats.

    However, there is an increasingly costly arms race between cybercrime and cybersecurity, made worse by a tendency to buy and deploy a new security solution in response to every security problem. It is a race the customer eventually loses to cost or complexity.

    Instead, experts are advocating risk as the departure point: determine an organisation’s most mission-critical assets, build security around those first, and expand outward from there. Analyst firm Gartner has coined the term continuous threat exposure management (CTEM) for this approach.

    While this approach seems obvious, it is a radical departure from how cybersecurity tactics and sales models currently operate. It works very well, though, and we’ll see more support for risk-prioritised cybersecurity in 2025.

    AI will amplify data governance issues

    After many years as a technical or fringe business practice, artificial intelligence (AI) has become the focus of most business leaders (whether they understand it or not). Data has quickly become synonymous with AI, which needs that information to function.

    This marriage has amplified data problems. Companies could previously skirt the data issue: as long as they adhered to regulations and were satisfied with the data they actually used, they didn’t bother much with managing, cleaning, and securing the rest of it.

    But AI has tipped that cart over. For example, data leakage used to be about disgruntled employees stealing customer lists or someone irresponsibly sending financial statements via a private email address. Now, companies worry that someone will feed their year-end spreadsheet into ChatGPT. The catch is that the person does this to improve productivity – they want to do a better job faster – and who would discourage that?

    Better data management and integration are rising from technology black sheep to top executive priorities, a trend highlighted by numerous surveys, including research from the MIT Technology Review. This trend is inseparable from data governance and security. During 2025, businesses will increasingly prioritise data security and governance as a competitive strategy rather than just a compliance exercise.

    Deepfakes will make users more sceptical and aware

    AI’s impact won’t be confined to data. I believe it will affect user behaviours and attitudes as well. Humans are the weakest link in security and information: our prejudices and distractions make it easier for criminals to goad us into harmful actions. The classic examples are clicking a dangerous link in a phishing email or falling for a romance scam on social media.

    Until recently, we trusted our own critical thinking, regardless of how accurate it actually is. We assume we’re above average at judging threats and opportunities, and we readily believe that the bad things that happen to others won’t happen to us. This attitude has been a gift to cybercriminals: over 91% of all cyberattacks start with a phishing email. Training people to become security aware is crucial, but it’s not enough. People don’t make these mistakes because they are stupid. It’s much more complicated than that.

    People make these mistakes because they are overconfident in their ability to spot a scam or an attack. My personal view is that generative AI – specifically deepfake content – will change that dramatically. AI-generated content means we can no longer trust what we see or hear.

    There are early signs of this growing scepticism, such as the majority of people who say they are concerned about deepfakes. During 2025 and beyond, I anticipate that this concern will grow, and perhaps we can weaponise it against the scourges of fake news and cybercrime.
