Regulators around the world are taking a renewed look at AI, and it’s no surprise, given all the warnings about the potential for an extinction event. But even if AI doesn’t go Skynet on us, there are still risks: privacy, bias, and misinformation chief among them.
According to a report by PwC, nearly all business leaders say their company is prioritizing at least one AI-related initiative in the near term, but only 35% of executives say their company will focus on improving the governance of AI systems. A company does not have to be developing AI models to face AI risk, experts say, since major enterprise vendors are furiously adding cutting-edge AI to their products and services.
But the potential for harm is dramatic.
Experts are warning of AI risks related to privacy, safety, misinformation, and bias — all the way up to the extinction of the entire human race. One petition, signed by Elon Musk, Steve Wozniak, and other tech leaders, has already garnered more than 30,000 signatures. Another petition released at the end of May by the Center for AI Safety was signed by Bill Gates, OpenAI CEO Sam Altman, US Congressman Ted Lieu, and many others.
Monitoring Pending Regulation
As a result, CIOs need to stay on top of what’s happening in this space, to protect their companies and customers. More than that, they need to get ahead of the regulations, said Priya Iragavarapu, VP of digital technology services at AArete, a management consulting firm.
“This is something that CIOs need to anticipate even before it becomes law,” she said. “You can’t wait until the regulations take place.”
“We have messaging now from the White House and Capitol Hill about doing something on the national level,” said Matthew Miller, principal for cyber security services at KPMG US.
Some jurisdictions are not waiting. New York City, for example, passed an ordinance in April regulating the use of AI hiring tools. But the biggest thing to keep an eye on is the European Union’s Artificial Intelligence Act.
The act bans “intrusive and discriminatory” uses of AI such as biometric identification systems, predictive policing, and indiscriminate scraping of images to build facial recognition databases. It also covers high-risk AI that can cause harm to health, safety, or the environment, or that can be used to influence political campaigns, among other provisions. Nor would it be the first EU regulation to serve as a template beyond Europe.
“Around the globe, people are looking at it as a template — similar to how people look at GDPR,” said Miller. The General Data Protection Regulation (GDPR) focuses on data privacy and applies to companies that have customers in Europe, and it has been used as a model for many similar laws around the world.
This is a fast-changing area, but there are steps that companies can take to prepare themselves for the coming flood of regulations.
Appointing an AI Leader
Many companies have committees looking at AI risks, said KPMG’s Miller. But the most proactive companies are appointing a single person to spearhead the effort.
“When you have a leader, you get a strategy, resources, and a team,” he said. This leader should track the regulations that have been adopted, develop plans to become compliant, and understand the risks.
“They need to balance the opportunities of AI with associated risks and build appropriate functions to make sure it’s being used safely and effectively,” he said. They should also take an inventory of AI projects going on at the company, he added. The AI leader’s position in the corporate hierarchy will vary based on the organization, he said, and, at least at the start, this person might actually serve in more than one role.
Practicing Good Governance
Figuring out which AI systems a company is using is no easy task.
First, there are the public platforms, like OpenAI’s ChatGPT and Google’s Bard, which anyone can access at any time. These systems can be trained on the prompts users give them, which means that if an employee asks the AI for help revising a document containing sensitive company information, that information may make its way into the model and later surface in responses to users at competing companies.
Second, there is embedded AI. Enterprise vendors are rapidly adding advanced AI systems to all their platforms. Salesforce has long had AI capabilities but is now adding generative AI as well. Microsoft has announced plans to add generative AI to its entire Office 365 suite, and Google plans to do the same with its Workspace. Even Adobe Photoshop now has generative AI capabilities built in.
Finally, some companies are rolling their own AIs, building on commercial platforms or open-source models. Getting an accurate inventory requires good governance, said Forrester analyst Brandon Purcell.
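What that inventory looks like will vary, but as a rough, hypothetical sketch (the field names and categories below are illustrative, not drawn from any vendor or framework), each AI system a company touches can be captured as a structured record noting where it comes from, who owns it, and what data it sees:

```python
from dataclasses import dataclass, field
from enum import Enum


class AISourceType(Enum):
    """Where the AI capability comes from."""
    PUBLIC_SERVICE = "public service"            # e.g., a chatbot employees use on their own
    EMBEDDED_VENDOR = "embedded in vendor product"
    IN_HOUSE = "built or fine-tuned in house"


@dataclass
class AISystemRecord:
    """One entry in a company-wide AI inventory (illustrative fields only)."""
    name: str
    source_type: AISourceType
    business_owner: str                          # who is accountable for this use
    data_shared: list[str] = field(default_factory=list)  # categories of data sent to the system
    risk_notes: str = ""                         # e.g., bias, privacy, or regulatory exposure


# Example entries covering the three categories described above.
inventory = [
    AISystemRecord("General-purpose chatbot", AISourceType.PUBLIC_SERVICE,
                   business_owner="unknown", data_shared=["draft documents"],
                   risk_notes="Prompts may leave the company; needs a usage policy."),
    AISystemRecord("CRM assistant", AISourceType.EMBEDDED_VENDOR,
                   business_owner="Sales operations", data_shared=["customer records"]),
    AISystemRecord("Internal support model", AISourceType.IN_HOUSE,
                   business_owner="IT", data_shared=["support tickets"]),
]

for record in inventory:
    print(record.name, "-", record.source_type.value)
```

Even a lightweight record like this gives a company something to measure its governance against.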
“Companies need to have codified principles in place for how they want to adopt AI,” he said. “Who’s accountable? How do you take action? And then there’s assessment and measurement — are we actually hitting the mark from a governance perspective?”
He added that CIOs will need to deeply understand their entire technology supply chain. “There’s going to be a large language model that’s running somewhere in every process. Understanding where it is, where the vulnerabilities are, how it impacts the entire system downstream — that’s going to be critical.”
The most forward-thinking companies are now demanding software bills of materials from their vendors, he said. That could include the data used to train AI models, model parameters, fine-tuning details, and other key specifications.
“That’s not the same as the ability to interpret them and render them transparent, because the large language models are so massive,” added Purcell. “But you can understand how they were built and where they might be vulnerable.”
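As a purely illustrative example of what an AI bill-of-materials entry might contain, the sketch below mirrors the specifications Purcell mentions: training data sources, parameters, and fine-tuning details. The keys are hypothetical rather than an industry standard:

```python
# Illustrative AI bill-of-materials entry; the keys are hypothetical and simply
# mirror the kinds of specifications described above.
ai_bom_entry = {
    "model_name": "vendor-llm-v2",          # hypothetical model identifier
    "base_model": "open-source-7b",         # what the vendor started from
    "parameter_count": 7_000_000_000,
    "training_data_sources": [
        "licensed web corpus",
        "vendor support documentation",
    ],
    "fine_tuning": {
        "method": "instruction tuning",
        "dataset": "curated customer-service dialogues",
    },
    "known_limitations": [
        "may reproduce text from training data",
        "not evaluated for biometric or hiring use cases",
    ],
    "last_updated": "2023-06-01",
}
```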
CIOs will also need to have the same level of transparency for in-house models, said Bradley Shimmin, chief analyst for AI platforms, analytics, and data management at Omdia.
“Companies need to make sure that they are prepared to provide whatever may be needed by legislation, to stay compliant,” he said. “These models are very opaque and likely we’ll never be able to reverse engineer how a decision was made in a model. However, with appropriate monitoring and transparency of outputs from a model, we can create a layer of accountability and control.”
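One way to build the layer of accountability Shimmin describes is a thin audit wrapper around every model call, logging the prompt, the response, and enough metadata to reconstruct who asked what and when. The sketch below is a minimal illustration with hypothetical names, not a reference to any particular product:

```python
import json
import time
import uuid


def log_model_call(audit_log_path: str, user_id: str, model_id: str,
                   prompt: str, response: str) -> None:
    """Append one model interaction to an audit log (JSON Lines format)."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,    # who made the request
        "model_id": model_id,  # which model or vendor feature answered it
        "prompt": prompt,
        "response": response,
    }
    with open(audit_log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def call_model_with_audit(generate, user_id: str, model_id: str, prompt: str) -> str:
    """Wrap any text-generation callable so its inputs and outputs are logged."""
    response = generate(prompt)
    log_model_call("model_audit.jsonl", user_id, model_id, prompt, response)
    return response


# Example with a stand-in model; in practice `generate` would call a real API.
if __name__ == "__main__":
    def fake_model(prompt: str) -> str:
        return "stub response to: " + prompt

    print(call_model_with_audit(fake_model, user_id="analyst-42",
                                model_id="demo-model", prompt="Summarize Q2 risks"))
```

An audit trail like this does not make the model itself any less opaque, but it does give compliance teams a record of outputs they can monitor and review.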