
Microsoft releases Cyber Pulse, a new briefing on Agentic AI

Microsoft released Cyber Pulse, a new digital briefing for business leaders that examines how the security landscape is evolving with AI. The briefing focuses on how organisations are deploying AI agents and what it takes to secure, govern, and scale them responsibly.

Microsoft’s new research finds that 80% of the Fortune 500 is deploying active agents built with low-code/no-code tools. This signals a major shift: AI agents are no longer the domain of specialists – they are an integral part of operations that anyone can use. The catch: while some agents are sanctioned by IT, many are unsanctioned, unobserved, or over-privileged.

“As AI adoption accelerates, too few leaders have visibility into the agents operating across their enterprise,” says Kerissa Varma, Chief Security Advisor at Microsoft Africa. “Unsupervised or ungoverned agents can quickly escalate cyber and business risk, threatening security, business continuity, and reputation. AI agents bring enormous opportunity, but without proper oversight, even one agent’s risky behaviour can amplify internal threats and create new failure modes organisations are unprepared to manage.”

This Cyber Pulse brief outlines why leaders should demand visibility into their agents, and how to ensure the safe and trustworthy implementation of agents in their organisations. A few additional findings from the briefing include:

  • AI agent adoption is accelerating across all regions, with EMEA accounting for approximately 42% of all active agents globally.
  • AI agents are scaling at pace across all industries, with financial services, manufacturing, and retail leading in adoption. Financial services, including banking, capital markets, and insurance, now represents about 11% of all active agents worldwide. Manufacturing accounts for 13% of global agent usage, showing widespread adoption in factories, supply chains, and energy operations. Retail represents 9%, with agents used to improve customer experience, inventory management, and frontline processes.
  • Only 47% of organisations report having GenAI-specific security controls in place.
  • 29% of employees admit to using unsanctioned AI agents at work.

Rapidly deploying AI agents without strong oversight can outpace security and compliance controls, creating opportunities for shadow AI and increasing the risk that agents with too much access or the wrong instructions become unintended “double agents”.

“Organisations urgently need effective governance and security to safely adopt agents, promote innovation, and reduce risk. Just like human users, AI agents must be protected with strong observability, governance, and Zero Trust principles,” says Varma.

In the same way organisations secure human employees, Zero Trust for agents requires:

  • Least privilege access: Give every user, AI agent, or system only what they need, no more.
  • Explicit verification: Always confirm who or what is requesting access using identity, device health, location, and risk level.
  • Assume compromise can occur: Design systems expecting that attackers will get inside.
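The three Zero Trust requirements above can be sketched as a single authorisation check. This is an illustrative example only, not Microsoft's implementation: the `AgentIdentity` record and `authorize` function are hypothetical names, standing in for whatever identity and policy engine an organisation actually uses.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Hypothetical identity record for an AI agent, mirroring a human user's."""
    agent_id: str
    granted_scopes: set = field(default_factory=set)  # least privilege: only explicitly granted scopes
    device_healthy: bool = True                       # signal from device/workload health checks
    risk_level: str = "low"                           # e.g. from a risk engine: "low" | "medium" | "high"

def authorize(agent: AgentIdentity, requested_scope: str) -> bool:
    """Explicitly verify every request; deny by default (assume compromise can occur)."""
    if agent.risk_level == "high" or not agent.device_healthy:
        return False  # assume breach: block risky sessions even if scopes are valid
    return requested_scope in agent.granted_scopes  # least privilege: nothing beyond granted scopes

bot = AgentIdentity("invoice-bot", granted_scopes={"invoices.read"})
print(authorize(bot, "invoices.read"))   # True: scope was explicitly granted
print(authorize(bot, "payments.write"))  # False: never granted, so denied
```

The design point is that the agent is treated exactly like a human user: no implicit trust, every request re-verified against identity, health, and risk signals.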

Getting the most out of AI agents

Frontier firms are using the AI wave to modernise governance, reduce unnecessary data exposure, and deploy enterprise‑wide controls. They’re also driving a cultural shift; business leaders may set the AI vision, but IT and security teams are now equal partners in observability, governance, and safe experimentation.

It starts with observability: you can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability means having a control plane across all layers of the organisation – IT, security, developers, and AI teams – to understand what agents exist, who owns them, what systems and data they touch, and how they behave.

Observability includes five core areas:

  • Registry: A centralised registry acts as a single source of truth for all agents across the organisation and helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary.
  • Access control: Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications.
  • Visualisation: Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents are operating, understand dependencies, and monitor behaviour and impact, supporting faster detection of misuse, drift, or emerging risk.
  • Interoperability: Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Security: Built‑in protections safeguard agents from internal misuse and external threats. Security signals, policy enforcement, and integrated tooling help organisations detect compromised or misaligned agents early and respond quickly, before issues escalate into business, regulatory, or reputational harm.
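To make the first of those areas concrete, here is a minimal sketch of a central agent registry that acts as a single source of truth and flags unsanctioned agents for quarantine. The `AgentRegistry` and `AgentRecord` names are invented for illustration; a real deployment would use the tooling of its own platform rather than this toy class.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """Hypothetical registry entry: every agent has an owner and a sanction status."""
    agent_id: str
    owner: str
    sanctioned: bool  # True if approved by IT; False for shadow/unsanctioned agents

class AgentRegistry:
    """Toy central registry: single source of truth for all agents in the estate."""
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        # Registering every discovered agent prevents sprawl and enables accountability.
        self._agents[record.agent_id] = record

    def quarantine_candidates(self) -> list[str]:
        """Agents discovered in the estate but not sanctioned by IT."""
        return [a.agent_id for a in self._agents.values() if not a.sanctioned]

reg = AgentRegistry()
reg.register(AgentRecord("sales-helper", "sales-team", sanctioned=True))
reg.register(AgentRecord("shadow-scraper", "unknown", sanctioned=False))
print(reg.quarantine_candidates())  # ['shadow-scraper']
```

Even this toy version shows the point of a registry: once every agent is enumerated with an owner and a status, unsanctioned ones stop being invisible and can be restricted or quarantined.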

With AI adoption accelerating, this level of end‑to‑end visibility and governance is essential to maintaining control, which is why Microsoft developed Agent 365 to turn the enterprise‑wide need for transparency and oversight into a practical, scalable capability.

Agent 365 is Microsoft’s unified control plane for managing AI agents across an organisation. It provides a centralised, enterprise-grade system to register, govern, secure, observe, and operate AI agents, whether they are built on Microsoft platforms, open-source frameworks, or third-party systems.

This unified control plane provides the strategic foundation organisations need to align teams and accelerate their AI journey responsibly.

“Enterprises that will lead in the next phase of AI adoption are those that move fast and bring business, IT, security, and developers together to observe, govern, and secure their AI transformation,” concludes Varma.

For more information, visit the Cyber Pulse site.
