Tuesday, January 21, 2025

Responsible AI: Good governance with or without the government

In his 2024 Nobel Prize banquet speech, Geoffrey Hinton, often described as the “godfather of AI,” warned the audience about a variety of short-term risks, including the use of AI for massive government surveillance and cyber attacks, as well as near-future risks including the creation of “terrible new viruses and horrendous lethal weapons.” He also warned of “a longer-term existential threat that will arise when we create digital beings that are more intelligent than ourselves,” calling for urgent attention from governments and further research to address these risks.

While many AI experts disagree with Hinton’s dire predictions, the mere possibility that he’s right is reason enough for greater government oversight and stronger AI governance among corporate providers and users of AI. Unfortunately, what we’re seeing is the kind of fractured government regulation and industry foot-dragging we saw in response to privacy concerns nearly a decade ago, even though the risks related to AI technologies have far more potential for negative impact.

To be fair, Responsible AI and AI Governance will feature prominently in industry conversation, as they have for the past two years. Enforcement season is officially kicking off for EU AI Act regulators, and South Korea has recently followed suit with its own sweeping AI regulation. Industry associations and standards bodies including IEEE, ISO, and NIST will continue to beat the drum of AI control and oversight, and corporate leaders will advance their Responsible AI programs ahead of increasing risk and regulation.

But even with all this effort, many of us can’t help feeling that it’s just not enough. Innovation is still outpacing responsibility, and competitive pressures are pushing AI providers to accelerate even faster. We’re seeing amazing advancements in robotics, agentic and multi-agent systems, generative AI, and much more, all of which could change the world for the better if Responsible AI practices were embedded from the beginning. Unfortunately, that’s rarely the case.

Avanade has spent the past two years refreshing our Responsible AI practices and global policy to address new generative AI considerations and to align with the EU AI Act. When we work with clients to build similar AI Governance and Responsible AI programs, we typically find strong agreement from business and operational departments that it’s important to mitigate risk and comply with regulation; from a practical standpoint, however, they find it hard to justify the effort and investment. Drawing on our understanding of increasing government oversight and the growing risks of emerging AI capabilities, here’s how we work with them to overcome those concerns:

  • Good AI Governance is just good business. Beyond risk reduction and compliance, a good AI governance program helps a business get a handle on AI spending, align projects with strategy, re-use existing tech investments, and allocate resources more effectively. The return on investment is apparent without having to project some arbitrary calculation of losses avoided.
  • Tie Responsible AI to brand value and business outcomes. Employees, customers, investors, and partners all choose to associate with your organization for a reason, much of which you describe in your corporate mission and values. Responsible AI efforts help extend those values into your AI projects, which should help improve important metrics like employee loyalty, customer satisfaction, and brand value.
  • Make responsibility a pillar of the innovation culture. It’s still too common to see “responsible innovation” and similar programs exist alongside – and distinct from – innovation programs. As long as the two remain separate, responsible innovation will be a line item that’s easy to cut. It’s important to have responsible innovation and Responsible AI subject matter experts to guide policy and practice, but the work of responsible innovation should be indistinguishable from good innovation.
  • Get involved in the RAI ecosystem. There is an impressive array of industry associations, standards bodies, training programs, and other groups actively engaging organizations to contribute to guidelines and frameworks. For leaders willing to make the investment, these groups can serve as valuable recruiting grounds and opportunities to establish thought leadership. As more government agencies and customers ask questions about responsible AI practices, demonstrating the seriousness of your commitment can go a long way toward establishing trust.

There’s a persistent myth that the tech industry is a battleground between strong-arm techno-optimists and underdog techno-critics. But the vast majority of business and tech executives we work with in AI don’t seem to fall clearly into either camp. They tend to be pragmatists, working every day to push their companies forward with the best tech available without significantly increasing risk, cost, or compliance exposure. We believe it’s our job to support this pragmatism as much as possible, making sure Responsible AI practices are simply another core requirement of any successful AI program.
