Stratink Commentaries: Read Our Expert's Latest Insights

AI Pact Signatories are Sending a Message About Responsible AI

Written by Shorbori Purkayastha | Oct 9, 2024 7:03:56 AM

Without a doubt, Artificial Intelligence (AI) has become integral to the strategic agendas of nations and businesses. Call it the centrepiece of the living room, if you will. But that also entails an obligation of trust, assurance and responsibility - or, to repurpose a phrase the French novelist Honoré de Balzac popularised, a certain sense of noblesse oblige.

Amid the rapid development and deployment of AI, many countries are attempting to strike a balance between innovation and regulation as they shape their AI governance strategies. How this technological shift is adopted will redefine geopolitics as much as it will shape businesses and economies.

Admittedly, the AI policy landscape is still nascent. But we are witnessing international cooperation and intergovernmental initiatives.

An AI Index report by Stanford University reveals that several countries have passed at least one AI-related bill, while the EU’s comprehensive risk-based approach with the AI Act stands out as a blueprint for global AI governance.

Although the legally binding AI Act came into force on 1 August 2024, most of its compliance obligations will only apply over the next two years. In the meantime, to address the policy vacuum, the EU has proposed the AI Pact - a voluntary initiative aimed at encouraging organizations to prepare for the implementation of AI Act measures.

About 100 signatories have signed the pledge - a healthy mix that includes Big Tech firms, IT and telecom companies, healthcare, banking, automotive and aeronautics companies, multinationals, and fintech and software companies. The core focus of the pact is to encourage companies to build trust in AI technologies and uphold ethical and responsible AI development, while also fostering the exchange of knowledge and information on best practices.

OUR TAKE – STAYING AHEAD OF THE CURVE

A caveat: a voluntary pact such as this - one that some companies aggressively pursuing AI development have opted out of - may not have much real impact. Nonetheless, it marks a significant step forward for companies looking to put AI at the core of their businesses going forward.

Notably, a 2024 survey by Edelman suggests that globally, trust in AI technology and in the companies building and selling AI tools has dropped to 53%, down from 61% five years ago.

While building trust in AI will require a significant effort, taking a voluntary pledge communicates a commitment to responsible AI. It’s a strategic and forward-thinking approach.

It builds trust with consumers by demonstrating transparency, accountability, and a commitment to privacy and security.

It can help companies lead conversations on future regulation, giving early adopters a hand in shaping the framework and best practices.

There could be a competitive advantage in preparing for compliance ahead of time, reducing future regulatory costs and operational disruptions.

Lastly, it can foster innovation and internal transformation through collaboration with other industry leaders.