The agenda for managing the risks associated with procuring, developing and adopting AI across the enterprise is moving into the C-suite.

Artificial Intelligence. AI. The two words that have more than quadrupled in search interest since the advent of ChatGPT. And we do not need to look far to see AI included in business strategies and road maps alongside a promise of greater productivity.
Any productivity gains, however, are not free. They bring a breadth of new challenges unique to AI, spanning how businesses will use it, new roles and impacts on employment, and the speed at which the technology is changing. The result can be productivity on one hand and, on the other, the loss of both workforce and trust.
Think of a cross between ChatGPT and Siri; let’s call them “Service Agents”. As Service Agents mature, they will augment or automate many of the activities of our current customer service workforce, translating directly into productivity gains.
But as enticing as productivity gains are, as the use of AI matures we begin to ask questions such as: “How can executives be certain that the AI’s responses align with how the organisation has chosen to engage with its customers?” and “How do businesses know the AI is operating within the guardrails that also govern the human workforce?”
As AI progresses, these move from simple questions to complex requirements for governance and risk management: issues that board directors need to be across, that the C-suite needs to understand and that executive teams need to manage.
We have seen it before with cyber-security. Technology issues in business can be hard to grasp, and the longer a company takes to get on top of them, the more exposed it is.
AI can be even more difficult for executives to grasp. We typically observe a vast underestimation of the size and materiality of the business risks created by AI.
Not only is AI fast-changing, but its self-learning and dynamically responsive nature – and what this means for business – is something companies have never had to deal with.
Self-learning. Think of that for a moment. How do executives and board members know that the self-learning AI an in-house data scientist has created remains on track to deliver the objectives for which it was intended?

AI is so categorically different in the way it works that businesses must manage it as a “socio-technology”, as opposed to a traditional, well-understood software technology. The “socio” refers to the impact AI has on humans, the way it is developed, and the need for humans to interact with and manage it for productive engagement.

All of this points to a need for proactive risk management. Executives should consider a lens they are familiar with: portfolio risk management, balancing an integrated pool of potential benefits against a broad set of drivers of potential loss. Yet despite the familiarity of these practices, the breadth of AI risk far exceeds traditional business risk measures.
DSG.ai, an AI specialist firm, has assembled more than 100 distinct dimensions of AI risk (the AI Risk Ontology) for the active governance, risk and compliance (GRC) management required if AI is to be scaled sustainably. When adopted, it gives executives visibility across the full AI risk landscape, with the adaptability to accommodate continuous change as other forms of AI technology (for example, generative AI and agents) mature.
Those who see the significance of these issues recognise they cannot wait for regulation; they are beginning to stand up internal AI audit teams and implement specialised AI governance systems to monitor AI risks and performance continuously. Counterintuitively, AI risk management done well improves productivity and momentum rather than slowing innovation.
One company in the ASX 20 recently undertook a deeply specialised AI audit and uncovered a high degree of inherent risk, but did not have the capability to manage and remediate it in ways that meet recognised global standards.
This prevented the organisation from effectively identifying sudden shifts in performance, and the audit found that its risk exposure was far greater than had been understood. The inability to monitor for incidents also meant that any AI errors would most likely go undetected until an irreversible and visible impact on customers was recorded.

Board directors, the C-suite and executives need to understand that effective AI risk management must not be left as the sole responsibility of technologists and those enamoured with the technology.
Just as we have seen with the far-reaching impacts of cyber risk, AI needs to be addressed today rather than kicking the risk-shaped can down the road.
Dr Elan Sasson is CEO of DSG.AI, a global AI company operating in Europe and APAC, helping enterprise businesses manage AI at scale through a range of risk management solutions.
The full article is available (behind a paywall) on The Australian.