

Your mining supply agreement was written for humans, but your trucks are driving themselves

In August 2025, in Benavides v Tesla, Inc (Benavides), a Miami jury awarded USD 240 million in damages against Tesla, including USD 200 million in punitive damages, in a wrongful death lawsuit related to its Autopilot system. The jury found that Tesla's marketing created a misleading perception of safety, encouraging drivers to rely on a system not designed for certain conditions. The verdict arose in the automotive context, but the liability questions at its centre are the same questions that South African mining operations will face as AI-integrated yellow plant equipment becomes standard: when does a manufacturer answer for what its autonomous system did, and what must it disclose about the system's limits?

The deployment of autonomous systems and AI-powered machinery in African mining is accelerating. The contractual arrangements governing these deployments have not kept pace. Most supply agreements for AI-integrated equipment still closely resemble plant hire contracts from ten years ago: bilateral, human-focused, and silent on who bears the risk when a machine makes a decision on its own. That silence will produce disputes, and the agreements in their current form are not equipped to resolve them.

Contractual gaps that could fuel AI-related disputes

Force majeure and foreseeable risk: In 2019, at BHP's Jimblebar mine in Australia, two autonomous trucks collided during heavy rainfall. BHP attributed the incident to rain-deteriorated road surfaces, though regulators noted the system had not been programmed to limit speed in such conditions. A separate driverless truck collision occurred at Fortescue Metals, also in Australia, within the same period. At Escondida in Chile, the world's largest copper mine, a union flagged what it called a huge risk to worker safety less than one month after BHP completed a five-year autonomous vehicle rollout. A pattern emerges: AI systems fail in response to conditions the operating environment routinely produces. And these failures occurred in comparatively controlled mining environments.

South African operations contend with load-shedding and chronic connectivity disruptions, amongst other issues, as defining features of the landscape, conditions that are more volatile, less predictable, and to which any AI-dependent system will inevitably be exposed. Supply agreements include broad force majeure clauses, potentially broad enough for a supplier to try to characterise a system failure triggered in this manner as an unforeseeable event beyond its control and thereby disclaim liability entirely. But where those conditions are endemic and well-documented, that characterisation is difficult to sustain and any well-drafted agreement should foreclose it expressly.

Indemnities and the absent human: Standard indemnities are constructed around the acts or omissions of a supplier's personnel. When an autonomous system makes a decision without human instruction, the supplier has a credible argument that no identifiable act or omission of its personnel caused the harm and that the indemnity therefore does not respond. Benavides exposed precisely this gap: even where a human operator was present but distracted, the jury found that the manufacturer's design choices independently contributed to the loss. The implication is that liability attached not because a person failed, but because the system was designed in a way that allowed failure. Most indemnity clauses in current supply agreements are not drafted to capture that distinction, and until they are, the risk of an unindemnified loss arising from an autonomous decision remains unresolved.


Production downtime: Where AI systems become a linchpin of operations, their failure can cause significant financial loss without any physical damage occurring at all. That loss is consequential in nature and routinely excluded by standard limitation clauses. The insurance position compounds the problem, as most business interruption policies respond only where the loss follows physical damage to insured property. Where an AI system causes production downtime without any such damage (a software malfunction, a failed autonomous routing decision, a sensor misread), there is no physical damage trigger, and the BI policy is unlikely to respond. Even where a mine holds broader cover, losses of this kind may fall squarely within a cyber-loss or computer systems failure exclusion. The result is that for mines operating under production targets and offtake obligations, unplanned AI-caused downtime is a core commercial exposure that is neither covered by the supply agreement nor insured against, and that gap needs to be closed contractually before it is tested in practice.

The OEM who is not at the table: Supply agreements are typically bilateral. The Original Equipment Manufacturer (OEM) who designed the AI system, controls its training data, and deploys software updates is usually not a party. Section 61 of the Consumer Protection Act imposes strict liability on every participant in the supply chain (producer, importer, distributor, and retailer) for harm caused by defective or unsafe goods. That liability applies to mining companies notwithstanding the juristic person threshold in section 5(2)(b). But section 61 requires a supplier-consumer relationship, and where the OEM is a foreign entity with no South African presence and no direct transactional link to the mine, that chain may not extend far enough. This means enforcing strict liability against the party who actually designed and trained the AI system is a considerably more demanding exercise than enforcing a back-to-back warranty against a local supplier.

Most agreements leave the question of the yellow plant operator unanswered: Most supply agreements for yellow plant mining equipment include obligations to appoint locally recruited operators, consistent with Social and Labour Plan commitments and broader transformation requirements. What those agreements do not address is whether the mine and/or the supplier is equipped to train those operators to the standard required to interact safely and competently with an AI-integrated machine, or whether the supplier or manufacturer is prepared to deliver that training. Who holds the licence to operate the machine? What competency standard applies? When an undertrained operator is placed in a nominal supervisory role over a system that operates beyond their real-time understanding, the liability consequences of that gap will not be resolved by referring back to a clause that was never drafted with this scenario in mind.

The statutory aspect: The contractual gaps noted earlier do not exist in a legal vacuum. Section 2(1) of the Mine Health and Safety Act 29 of 1996 requires every mine employer to ensure, as far as is reasonably practicable, that the mine is designed and operated to be safe and free from health risks to employees. This obligation cannot be waived or limited by agreement. If a mine uses AI-powered equipment under a supply contract that does not specify responsibility for system design, operator training, or fail-safe measures, the mine may still be found negligent for not doing everything reasonably possible to ensure safety, regardless of contract terms. The duties under the MHSA and the contractual obligations must be viewed together, not separately.

Data access: AI systems generate operational data that may be the only evidence capable of establishing what happened and why after an incident. In present agreements, the position on who owns and who can access that data is ambiguous, and in any dispute that ambiguity will serve the party with the least to explain.

What to do now

Every supply agreement governing AI-integrated equipment must address:

  • Force majeure clauses must expressly exclude AI failures caused by foreseeable South African conditions.
  • Indemnities must extend to harm caused by autonomous system decisions, not only harm attributable to identifiable human conduct.
  • Liability caps must be stress-tested against the actual risk profile of the technology, including production downtime.
  • Back-to-back manufacturer warranties should be secured from original equipment manufacturers as a condition of the supply arrangement.
  • Insurance cover must be confirmed in writing, specifically for AI-integrated equipment, before deployment.
  • Data access rights to AI operational logs must be a non-negotiable contractual entitlement, with clear obligations on the supplier to preserve and produce that data following any incident.

Dispute resolution clauses should be agreed at the contracting stage. The clause should provide that any dispute may first be referred to mediation conducted by a mediator with demonstrable technical experience in AI systems, not merely legal expertise. Where mediation fails, the clause should expressly empower the arbitrator to appoint an independent AI technical expert as an assessor. As Benavides indicates, future disputes will turn less on whether a human made a mistake, and more on whether the manufacturer or supplier did enough to anticipate failure through system design and deployment. Those are questions that require specialist technical input, and the time to agree on that mechanism is before the dispute arises.

The agreements being signed today will govern the disputes of tomorrow, and right now, most of them are not equal to the task. Every gap identified above is a gap that will be filled, one way or another, either by the parties at the drafting table or by an arbitrator after the loss has occurred. The question is not whether these disputes will arise. It is whether the contract will have anything useful to say when they do.

*Tobia is a senior associate at Webber Wentzel and Tristan is an innovation lawyer at Webber Wentzel Fusion
