Analyst Roundup: Is your AI responsible?

The state of values and ethics in artificial intelligence algorithms 

By Pragati Verma, Contributing Editor, Straight Talk

Artificial intelligence is influencing decision making, from setting prices and inventory levels to diagnosing patients in clinics. And with great power comes great responsibility. No wonder everyone from technology companies and governments to the Vatican is designing ethical guardrails to guide their AI technology. 

Noting that businesses will be held responsible for their decisions, research firms such as Gartner, BCG, Forrester, and IDC are advising them to prioritize Responsible AI (RAI), a set of best practices, frameworks, and tools for promoting the transparent, accountable, balanced, and ethical use of AI technologies. IDC, for example, believes that RAI is crucial to building enterprise trust, and Gartner, in its latest “Top 10 Trends in Data and Analytics” report, recommends that business leaders focus their time and money on smarter, faster, and more responsible AI. 

Perceptions Don’t Match Reality  

If RAI programs are a business imperative, are organizations ready? The answer is no, according to a recent report from BCG GAMMA, the AI research arm of Boston Consulting Group. More than half of the organizations studied overestimate the maturity of their RAI programs and are less advanced than they believe, according to the report, which collected and analyzed data from over 1,000 large organizations. Even organizations that reported rolling out AI at scale overestimated their RAI progress: less than half of them have a fully mature RAI program. While 26 percent of companies polled said they have achieved AI deployment at scale, just 12 percent have fully implemented an RAI program as part of their work, the report notes. 

Steven Mills, BCG GAMMA’s chief ethics officer and a co-author of the report, says: “The results were surprising in that so many organizations are overly optimistic about the maturity of their RAI implementation. While many organizations are making progress, it’s clear the depth and breadth of most efforts fall behind what is needed to truly ensure RAI.” To Mills, the most concerning group is the organizations that believe they have fully implemented RAI programs: their overly optimistic view may discourage continued investment when, in reality, gaps remain. 

Although AI failures can pose serious risks, organizations shouldn’t lose sight of the business benefits even as they work to mitigate those risks, according to Sylvain Duranton, BCG GAMMA’s global leader and a co-author of the report. Over 40 percent of the organizations surveyed cited business benefits as their primary reason for pursuing RAI, more than twice the percentage that build RAI systems mainly to mitigate risk. “Increasingly, the smartest organizations I’m talking to are moving beyond risk to focus on the significant business benefits of RAI, including brand differentiation, improved employee recruiting and retention, and a culture of responsible innovation—one that’s supported by the corporate purpose and values,” he said during a media roundtable. 

Sustainable Lens 

IDC’s recent survey pointed to the same mismatch between perception and reality. During a webinar, Bjoern Stengel, senior research analyst for Worldwide Business Consulting & ESG Business Services at IDC, said the firm found that maturity levels for AI are low, yet companies feel confident about their ability to deploy AI in an ethical manner.  

He went on to suggest that companies should take an Environmental, Social, and Governance (ESG) approach when thinking about AI projects. His advice: define a comprehensive set of stakeholders who can be affected by AI. “Customer experience is a major concern around the ethical use of AI and the brand aspect is definitely an important one, too,” he said. “Employees are another group to consider from a social impact perspective, especially when it comes to hiring practices.” 

Using an ESG lens will also make it easier to measure progress and benchmark performance, he said. “If companies manage these topics properly, there’s enough research that shows companies can benefit from integrating ESG into their business, including lower risk profiles, greater financial and operational performance and better employee experience.” 

Core Principles 

There is more to RAI than the ESG approach recommended by IDC. To help organizations navigate this relatively uncharted territory, Forrester analysts have outlined five principles for developing RAI initiatives: 

  1. Fairness and bias: This principle is concerned with ensuring that artificially intelligent systems do not harm people and customers through inequitable treatment. 
  2. Trust and transparency: Since many AI systems are black boxes that are unintelligible to human beings, there is often a need for explainability and interpretability (see the sketch after this list).  
  3. Accountability: AI systems are often the result of a complex supply chain that may involve data providers, data labelers, technology providers, and systems integrators. This principle helps define who is to blame when an AI system goes wrong, and how you can prevent it from going wrong in the first place. 
  4. Social benefit: This is about ensuring that AI is used for the greater good of society, such as leveraging AI to develop COVID-19 vaccines. 
  5. Privacy and security: As AI systems are trained and then used to differentiate treatment, they need to respect individuals’ privacy.  
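
On the transparency principle, interpretability tooling can make a black-box model at least partially inspectable. Below is a minimal sketch using scikit-learn’s permutation importance on a synthetic dataset; the dataset and model are illustrative assumptions, not an example drawn from Forrester’s report.

    # Minimal interpretability sketch: permutation importance estimates how
    # much each feature drives a model's predictions by shuffling one
    # feature at a time and measuring the resulting drop in score.
    # The dataset and model below are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: mean importance drop {score:.3f}")

Rankings like these do not fully explain a model, but they give stakeholders a defensible first answer to “what is this system actually paying attention to?”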

Brandon Purcell, a principal analyst at Forrester, points out in a blog post that the framework might sound simple, but implementation is not. “Unfortunately developing a list of lofty principles and deciding how to put those principles into practice across your organization are two very different things. For example, fairness sounds like a great goal, but there are at least 21 different definitions of fairness you can implement in your AI models. The devil, as always, is in the details,” he wrote. 
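
To make that concrete, here is a minimal sketch (not from Purcell’s post) comparing two of those competing fairness definitions, demographic parity and equal opportunity, on hypothetical model outputs; all numbers are fabricated for illustration.

    # Minimal fairness sketch: two common (and potentially conflicting)
    # fairness definitions computed on hypothetical predictions.
    # The group labels, outcomes, and predictions are fabricated.
    import numpy as np

    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute
    y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # actual outcomes
    y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])  # model decisions

    # Demographic parity: positive-prediction rates should match across groups.
    parity_gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Equal opportunity: true-positive rates should match across groups.
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()

    print(f"demographic parity gap: {parity_gap:.2f}")         # 0.00 -- looks fair
    print(f"equal opportunity gap:  {abs(tpr_0 - tpr_1):.2f}") # 1.00 -- looks unfair

The same model satisfies one definition perfectly while failing the other completely, which is exactly why choosing and operationalizing a fairness definition is the hard part.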

Nevertheless, Purcell advises organizations to build and deploy RAI systems from the get-go. In another blog post, he explains why: “AI will continue to err. And it will continue to surface thorny legal and accountability questions, namely: Who is to blame when AI goes wrong?” RAI, he notes, “is key to minimize overall risk, and to preempt your AI system from performing in an illegal, unethical, or unintended way. You will be held accountable for what your AI does, so you’d better make sure it does what it’s supposed to do.” 

Pragati Verma is a writer and editor exploring new and emerging technologies. She has managed technology sections at India’s The Economic Times and The Financial Express.