
By Mousume Roy, APAC Reporter, HCLTech

 

The future of technology in the 21st century is intelligence. From autonomous systems, cybersecurity, automation and RPA to self-driving cars, chatbots and virtual assistants, the impact of AI models is increasing.

According to the International Data Corp. (IDC), worldwide revenues for the artificial intelligence (AI) market are forecast to grow 19.6 percent in 2022 to $432.8 billion, and the market is expected to break the $500 billion mark in 2023. Development in the field of AI has been steady and its growth is exponential. Government agencies and regulators are focusing on addressing the inadvertent negative consequences that may result from the development and deployment of AI.

Policymakers, stakeholders and regulators are grappling with these complex issues. In 2020, AI accounted for 20 percent, or $75 billion, of worldwide venture capital investments. McKinsey has reported that AI could increase global GDP by roughly 1.2 percent per year, adding a total of $13 trillion to the global economy by 2030.

Laws, regulations and ethical standards

In recent news, the UK’s Information Commissioner’s Office said AI-driven discrimination could have “damaging consequences for people’s lives” and lead to someone being rejected for a job or being wrongfully denied a bank loan or a welfare benefit.

IBM recently exited the facial recognition business because of criticism surrounding the racial and gender bias of the recognition software. Amazon and Microsoft followed suit, taking a step in the right direction towards acting responsibly. 

In New South Wales, Australia, a plan to roll out facial recognition technology in every pub and club has been criticized, with critics calling the “invasive” measure an attempt to avoid further crackdowns on poker machines.

In a world where AI plays a part in decisions involving employment and access to justice or healthcare, there is no room for such prejudice, and organizations must work hard to obtain quality, ethical and unbiased data—the heartbeat of responsible AI.
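
Making “unbiased data” operational usually starts with auditing decision outcomes across groups. The sketch below is purely illustrative, written in Python with pandas: the loan-decision table, column names and figures are invented for this example, and a real audit would use domain- and jurisdiction-specific fairness metrics.

```python
# Illustrative only: surface a simple bias signal (an approval-rate gap between
# groups) on an invented loan-decision table. Column names and values are assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],  # hypothetical demographic attribute
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],    # decision outcome (1 = approved)
})

# Approval rate per group; a large gap is an early red flag for disparate impact
rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("approval-rate gap:", rates.max() - rates.min())
```

A gap like this does not by itself prove discrimination, but it flags where the underlying data and decision process deserve closer scrutiny.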

The National Institute of Standards and Technology (NIST) is developing an AI Risk Management Framework that addresses risks in the design, development, use and evaluation of AI systems. The FDA recently released guidance on AI and machine learning-enabled medical devices and has long supported the use of AI in drug development. The Department of Health and Human Services also issued a notice of proposed rulemaking that would prohibit the use of discriminatory clinical algorithms under the Affordable Care Act.

The European Consumer Organization’s latest survey on AI revealed that more than half of Europeans believed that companies use AI to manipulate consumer decisions, while 60 percent of respondents in certain countries thought that AI leads to greater abuse of personal data.

Commenting on explainable AI (XAI), Phil Hermsen, Solutions Director, Data Science & AI, at HCLTech says, “Ethics plays a significant role as organizations struggle to eliminate bias and unfairness from their automated decision-making systems. Biased data may result in prejudice in automated outcomes that might lead to discrimination and unfair treatment. Regulatory compliance, standards and policies, such as GDPR and DORA, can lead to an unexpected source of competitive advantage.”

“Organizations with the ability to deliver high quality, trustworthy AI systems that are regulation-ready will give first movers a massive lead, enabling them to attract new customers, and retain old ones.”

Explainable AI (XAI) and the black box 

While policymakers around the world are prioritizing XAI to address a range of ethical AI concerns, research by the Partnership on AI (PAI) has found that deployed explainable AI techniques are not up to the task of enhancing transparency and accountability for end users.

A ‘black box’ in computing is a device or program that allows end users to see the input (data/questions/information) and the output (answers/results), but gives no view of the processes and workings in between: how the device or program arrived at the answer or result.

With most AI-based tools, an explanation of why and how the tool reached a certain conclusion is desired, especially when the output produced is unexpected, incorrect or problematic.
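
As a concrete contrast to the black box, post-hoc attribution methods such as SHAP assign each prediction a per-feature contribution. The sketch below is a minimal illustration that assumes the open-source shap and scikit-learn libraries and a bundled toy dataset; it is not tied to any system, vendor or tool discussed in this article.

```python
# Minimal sketch of post-hoc explainability via SHAP feature attribution.
# The libraries (shap, scikit-learn) and the toy dataset are assumptions for illustration.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Instead of a bare prediction, attribute each output to the input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])   # array of shape (samples, features)

# Rank features by their average influence on the model's predictions
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Attribution scores like these are not guaranteed to be faithful to the model's actual reasoning, which is part of why the PAI research cited above finds deployed XAI techniques still falling short for end users.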

“An organization’s focus on explainable AI relates to trust. If the public doesn't trust AI or understand how AI-powered tools make decisions, they are unlikely to share their data with companies looking to build AI algorithms,” adds Hermsen.

“AI doesn’t explicitly share how and why it reaches its conclusions, except that some omniscient algorithm has spoken and concluded. Hence, overcoming the black box problem of AI remains a top priority for 2023.”

Promoting responsible AI governance

Organizations are paying close attention to ensuring that AI is designed, developed and deployed through processes that protect society and the environment. In 2019, the OECD established its AI Principles to promote the use of AI that is innovative, trustworthy and respects human rights and democratic values.

The World Economic Forum’s Global AI Action Alliance and the Global Partnership on Artificial Intelligence have established working groups and schemes to translate these principles into best practices, certification programs and actionable tools.

Governments worldwide are tightening digital regulations on online safety, cybersecurity, data privacy and AI. The European Union has passed the Digital Services Act (DSA) and the Digital Markets Act (DMA). The DMA aims to ensure more competition in European digital markets by preventing Big Tech firms from abusing their market power. Under the new rules, it will be easier for start-ups to enter the market.

Meanwhile, the DSA aims to modernize the e-Commerce Directive and improve content moderation on social media platforms to address concerns about illegal content, transparent advertising and disinformation. Should a company not comply with the rules, the European Commission can impose fines of up to ten percent of a company’s total annual revenue for DMA violations and six percent for a DSA breach. These penalties are similar to those that can be issued under the General Data Protection Regulation (GDPR).

Responsible AI in action – three imperatives
  • Human-centered design: For AI to be transparent, it must be human-centered in its design. As per ISO standards, a human-centered approach allows systems to enhance our effectiveness as humans and improve our well-being. This means that at every stage of development, the ‘user’, i.e., the human, should be taken into account with respect to both the benefits and the potential harm. AI tools need to amplify the capabilities of humans by being transparent, accessible and explainable for all users.
     
  • Ethical AI: There are no practical directives available when it comes to ethical AI. However, failing to operationalize ethical AI can lead to inefficient product development along with regulatory, legal and reputational risks. ‘Ethics washing’ or ‘ethics theater’ are growing terms in the technology community, referring to the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A recent example is Google’s AI ethics board, which had no veto power and whose membership immediately provoked a backlash, leading to its dissolution.
     

    These controversies could be avoided by developing realistic and effective policy measures and governance frameworks, and by forming inclusion initiatives and committees that include a diverse range of stakeholders, subject matter experts, team members and, most importantly, affected members. Leadership and C-suite involvement is also key to building organizational awareness and incentivizing the identification of ethical risks.

  • Public and Private Spheres: AI is solving a range of complex global problems. However, this transformative process requires a multi-faceted solution that engages both the public and private sectors.

    According to a recent Ernst & Young global study, the disconnect between the public and private sectors can lead to increased ethical risks. For example, on the use of AI for facial recognition, policymakers by a wide margin rated “fairness and avoiding bias” and “privacy and data rights” as the top two concerns. Yet private sector priorities on the same question were more evenly distributed, with narrow margins separating the top choices.

    Public-private collaboration is imperative to create innovative governance solutions that both support the advancement of emerging technologies and provide the right guardrails to protect human rights and social values.