An IT crash can bring your organisation to a standstill. But an AI system can also go off the rails, producing unexpected, uncontrollable or even harmful outcomes. And again, the clock starts ticking.
“Many executives don’t have a clear picture of what can happen when AI fails,” says Friso Spinhoven, Head of Responsible AI at Conclusion. “But make no mistake: AI is software, and like any software, it can fail.”
An AI crash can take many forms: incorrect information, false promises or poor decisions. One airline, for example, faced a lawsuit after its AI chatbot promised non-existent discounts. The court sided with the customer, ruling that the chatbot’s promises should be honoured.
So what should executives do? According to Friso, you need to implement three actions immediately: get an overview, take responsibility, and communicate. “Don’t get stuck in the technical details. Quickly assess the impact, identify who’s affected and who needs to be informed. Be honest. And bring in support—legal, technical and communications.”
Just like with a ‘regular’ IT disruption, leadership is put to the test. “Show that you’re in control, even if it doesn’t feel that way. Take responsibility and lead. Indecision often causes more damage than making the wrong decision.”
Preparation is key. “Know where your AI is running, what decisions it’s making and who you can turn to after an incident. Using AI without governance is like driving without brakes. It might go well for a while, until it doesn’t.”
Many organisations are investing in AI, but few consider what could go wrong. “And that makes you vulnerable,” says Friso. “Not everything is a ‘technical error’. Ethical missteps can also cause major issues.”
One organisation’s AI algorithm disproportionately flagged people with a migration background for fraud checks. Technically, the system worked fine, but it caused public outrage and reputational damage. “Examples like this show why you have to keep a critical eye on AI.”
“AI applications increasingly affect the core of your operations: from customer interaction to risk assessment and HR decisions. That’s why AI should be treated as critical infrastructure,” Friso argues. “Assign responsibilities and be prepared for the day things go wrong. Because you don’t want your first AI crisis to be your first drill.”