AI as a Force for Good: Putting Trust into Technology
Recently, BSI commissioned a collection of insightful articles around the theme of Shaping Society 5.0, which collectively explore how AI innovations can be an enabler that accelerates progress. The collection highlights the importance of building greater trust in the technology, as many expect AI to be commonplace by 2030.

Theuns Kotze, Managing Director, Assurance IMETA at BSI, explores how AI can be used to address cyber vulnerabilities, shape trust and partner with individuals and organizations on their digital journey.

One of the biggest conversations of our time is around AI and how we can use it to change lives for the better, shape the way we work, boost efficiency, and accelerate innovation. From image recognition to linguistic understanding, AI covers a broad spectrum of technologies, which are opening up a vast scope of possibilities for organizations.
Recently, when asked how they would like to see AI shape our future by 2050, people in India prioritized a range of positive impacts, from reducing social inequality (37%) to making it easier for doctors to diagnose medical conditions (34%) to improving education (32%). India has, understandably, immense hope for how AI can be a force for good.
Nevertheless, the complexity of AI algorithms may leave organizations feeling overwhelmed by AI’s capabilities or concerned about whether to trust it. AI is not magic, nor does it need to be mysterious – in fact, it offers the opportunity to drive progress across society in myriad positive ways, provided it is effectively managed and well-governed.
The magnitude of ways AI can shape our future means we are seeing some degree of hesitation toward the unknown. Whilst our poll showed that engagement with AI is markedly high in India (64% already use AI every day at work, against a global average of 38%), 91% of people in India agreed that trust is needed when it comes to cyber security in relation to AI. That is not surprising, given the personal information and high stakes involved, but it emphasizes that to realize the full benefits of AI, transparency and greater communication about its uses are paramount.
Organizations do not have to start from scratch to assess and mitigate the risks of AI tools: guardrails already exist. What is needed – for the public as well as for organizations – is a greater understanding of these checks and balances, and recognition that human involvement will always be needed if we are to make the best use of this technology. Fear of the unknown could prevent people from adopting AI tools. Greater knowledge of the guidance around it has the potential to free people to make not just good but great use of this technology in every area of life and society.
Agreed standards and principles of best practice that can evolve alongside the technology and its applications could pave the way to ensure, for example, that data is not misused and that the inputs applied to AI tools are fair and equitable. These include existing international standards that can help manage risk, such as ISO/IEC 23894 (Information technology — Artificial intelligence — Guidance on risk management) and the forthcoming AI management standard, ISO/IEC 42001.
There are also guardrails organizations can put in place. We have a set of enterprise-wide AI principles that form the backbone of any AI system or model we build in the Innovation team. We adhere to principles that treat everyone fairly and mitigate the risk of data bias. The ethical use of AI is a key principle that drives our innovation. Likewise, the context of our use case requires transparency: we understand what the code is doing, what model is being created, and how it applies different weightings to the different attributes of the data passing through. This means we can bring our human ability to understand and interpret the outputs of the AI system.
The opportunity for AI to be a force for good for society is immense. Bringing that goodness to fruition requires openness, transparency and trust. As people come to understand the potential of AI and their power to use it as a tool, embedding guardrails and building greater trust are critical.