Is the Level of Trust in AI Declining?

October 16, 2023


The launch of ChatGPT on November 30th, 2022 captured the imagination of business and society in unprecedented ways. Today, Generative AI systems can compose emails, write poetry or lyrics, write and debug computer programs, and ace examinations, among other things, harnessing data to produce many kinds of output in response to prompts. While the upsides defy our imagination, we need to be equally cognizant of the downsides, so we are not blindsided by the dangers inherent in their use. Here are a few examples.

Legal wrangles

Seventeen authors, including John Grisham, Jodi Picoult and George R.R. Martin, are suing OpenAI on the grounds that ChatGPT is a ‘massive commercial enterprise’ reliant on ‘systematic theft on a large scale.’ The suit, organized by the Authors Guild and also joined by David Baldacci, Sylvia Day and Elin Hilderbrand, is meant ‘to stop this theft in its tracks,’ as it will otherwise ‘destroy literary culture.’ OpenAI wants this and other suits dismissed because, in its view, the claims ‘misconceive the scope of copyright, failing to take into account the limitations and exceptions, including fair use, that properly leave room for innovations like the large language models now at the forefront of AI.’

Other notable cases include a class action against Microsoft, GitHub and OpenAI, alleging that Copilot, their code-generating AI system, uses licensed code without attribution; a suit against the AI art tool companies Midjourney and Stability AI for infringing artists’ rights by training their tools on artists’ images without permission; and a case filed by Getty Images against Stability AI for using millions of images from its site to train its product, Stable Diffusion.

AI hallucination

Steven Schwartz and Peter LoDuca, representing Roberto Mata in a damages claim against the airline Avianca, cited a number of court decisions before New York federal Judge Kevin Castel. The cases cited included Martinez vs Delta and Varghese vs China Southern Airlines, among others. As it turned out, the generative AI chatbot ChatGPT had fabricated all of them. Judge Castel observed, tellingly, that while there is nothing wrong with using technological tools, attorneys must also play a gatekeeping role to ensure the accuracy of their filings.

Another case of ‘hallucination’ by generative AI involved Google’s LaMDA which, in response to a query, described a meeting between Mark Twain and Levi Strauss. While both men apparently lived in San Francisco in the mid-1800s, the meeting, and the claim that Twain worked for the jeans company, were entirely fictional.

Education 

Generative AI has become a favorite tool of students, who can create papers on any subject instantly, unhindered by effort, knowledge or reasoning. The United Nations Educational, Scientific and Cultural Organization (UNESCO) published its global guidance on the use of GenAI in education and research on 7th September. It discusses ethical, safe, equitable and meaningful use of GenAI, proposes measures for developing coherent policy frameworks to regulate its use in education and research, and provides examples of how it can be used creatively for curriculum design, teaching and research. Detection tools such as ZeroGPT, which apply a series of algorithms to judge whether content is plagiarized or machine generated, partially or fully, have followed. This matters even more for institutions of higher learning, where essays are a significant part of the admissions process.

Distortion for political purposes

From November this year, twelve months before the US presidential election, Google’s political content policy will require that digital alterations to images and audio or video clips be explicitly disclosed. Political advertising has always played at the fringes, and generative AI has made pushing those boundaries all too easy.

A recent Microsoft research report set alarm bells ringing when it reiterated what the US Department of Justice had claimed earlier: that Chinese-controlled Facebook and Twitter (now X) accounts, belonging to an elite group within China’s Ministry of Public Security, had used AI-generated content to sway US voters. Microsoft said its researchers reached that conclusion using a multifaceted attribution model that draws on technical, behavioral and contextual evidence.

Propaganda and fake news

Propaganda peaks during wars, and the current Russia-Ukraine war is no exception. There was speculation and overheated debate about whether the hundreds of casualties strewn across the streets of the Ukrainian town of Bucha were actors playing dead; digital forensics were used to confirm that the news was not fake. Then there was the case of a mysterious fighter pilot said to have single-handedly destroyed 40 Russian fighter jets. The picture of the ‘Ghost of Kyiv’, as he was called, eventually turned out to be that of an Argentinian lawyer.

Managing risk

AI combines Natural Language Processing (NLP), speech-to-text, computer vision, audio and sensor processing, Machine Learning (ML), expert systems and more. We have yet to figure out how to remain in control when such systems use deep learning and reinforcement learning to develop code unaided.

Harvard Business Review calls on organizations to highlight whenever there is uncertainty in generative AI responses, to cite sources, to carry out security assessments that identify exploitable vulnerabilities, to watermark content, and to ensure outputs are accessible to all, while striving to minimize the size of large language models.
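To make one of those practices concrete, here is a minimal sketch in Python of how an organization might surface sources and flag uncertainty alongside a generated answer. The structure and names (GeneratedAnswer, present_with_provenance, the 0.7 threshold) are illustrative assumptions, not drawn from the HBR article.

```python
# Illustrative sketch: attach citations and an explicit uncertainty caveat to a
# generative AI response before it is shown to a user. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeneratedAnswer:
    text: str
    sources: List[str] = field(default_factory=list)  # URLs or document IDs the answer drew on
    confidence: float = 0.0                            # model- or heuristic-derived score, 0..1

def present_with_provenance(answer: GeneratedAnswer, threshold: float = 0.7) -> str:
    """Return the answer with its sources, adding a caveat when confidence is low."""
    output = answer.text
    if answer.sources:
        output += "\n\nSources: " + "; ".join(answer.sources)
    else:
        output += "\n\n[No sources available; verify independently]"
    if answer.confidence < threshold:
        output = "[Low-confidence response; please verify]\n" + output
    return output

print(present_with_provenance(
    GeneratedAnswer("The Bucha footage was authenticated by digital forensics.",
                    ["https://example.org/forensics-report"], 0.55)))
```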

An article published by McKinsey calls for foundational transparency: organizations setting clear standards for model development, with consistent documentation, recorded data provenance, managed metadata, data mapping, and model inventories.
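As a hedged illustration of what such foundational transparency might look like in practice, the sketch below records one model inventory entry with its data provenance and documentation metadata. The field names and the audit rule are assumptions made for the example, not a McKinsey or regulatory schema.

```python
# Illustrative model inventory entry capturing provenance and documentation metadata.
from dataclasses import dataclass
from typing import List
from datetime import date

@dataclass
class ModelInventoryEntry:
    model_name: str
    version: str
    owner: str                          # accountable team or individual
    intended_use: str
    training_data_sources: List[str]    # data provenance: where the training data came from
    documentation_url: str              # model card or design documentation
    last_reviewed: date

inventory = [
    ModelInventoryEntry(
        model_name="support-email-drafter",
        version="1.3.0",
        owner="customer-ops-ml",
        intended_use="Draft first-pass replies for human review",
        training_data_sources=["internal ticket archive 2019-2022 (licensed)", "public FAQ pages"],
        documentation_url="https://example.internal/model-cards/support-email-drafter",
        last_reviewed=date(2023, 9, 15),
    )
]

# Simple audit: flag entries with missing provenance or reviews older than a year.
for entry in inventory:
    if not entry.training_data_sources or (date.today() - entry.last_reviewed).days > 365:
        print(f"Review needed: {entry.model_name}")
```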

A paper published by the US Department of Commerce’s National Institute of Standards & Technology (NIST), “AI Risk Management Framework (AI RMF 1.0)”, raises important points about minimizing negative impacts in order to improve overall system performance and trustworthiness. It addresses harm to people (individuals, communities, society), to organizations (business operations, security breaches, reputation) and to ecosystems (interconnected elements and resources, the global financial system, natural resources, the environment and the planet).

The framework recognizes that risks also extend to third-party software, hardware and data, and to how they are used or integrated, particularly where governance structures and technical safeguards are insufficient. Emergent risks must be identified and tracked. There is as yet no consensus on robust and verifiable measurement methods, since approaches can be oversimplified, gamed, or lacking in critical nuance. Risk must therefore be measured at different stages of the AI lifecycle, and risks observed in real-world settings can differ from those seen in controlled environments.
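As a rough sketch of how these points might be operationalized, the example below keeps a lightweight risk register organized around the harm categories and lifecycle stages the framework discusses. The scoring scheme, field names and entries are assumptions for illustration, not part of AI RMF 1.0 itself.

```python
# Illustrative AI risk register: each entry records the harm category, the lifecycle
# stage at which the risk is measured, and whether it stems from third-party components.
from dataclasses import dataclass

HARM_CATEGORIES = {"people", "organization", "ecosystem"}
LIFECYCLE_STAGES = {"design", "development", "deployment", "operation"}

@dataclass
class RiskEntry:
    description: str
    harm_category: str    # one of HARM_CATEGORIES
    lifecycle_stage: str  # one of LIFECYCLE_STAGES
    third_party: bool     # risk arising from third-party software, hardware or data
    likelihood: int       # 1 (rare) .. 5 (almost certain); coarse, since measurement lacks consensus
    impact: int           # 1 (minor) .. 5 (severe)

    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("Hallucinated citations reach customer-facing output", "people", "operation", False, 4, 3),
    RiskEntry("Third-party training data of unknown provenance", "organization", "development", True, 3, 4),
]

# Validate categories, then list risks from highest to lowest score.
for risk in register:
    assert risk.harm_category in HARM_CATEGORIES and risk.lifecycle_stage in LIFECYCLE_STAGES
for risk in sorted(register, key=lambda r: r.score(), reverse=True):
    print(f"{risk.score():>2}  [{risk.harm_category}/{risk.lifecycle_stage}] {risk.description}")
```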

Developing a governance framework

It is obvious that governmental regulatory frameworks are critical, and that they must evolve as AI does. We need frameworks that make training data, algorithms and outputs available for assessment of their quality and accuracy, and that hold both creator and user organizations accountable for negative consequences arising from the deployment and use of such systems.

Indian Prime Minister Modi highlighted this at the G20 Summit, speaking of the excitement around AI on one side and, on the other, its ethical aspects, the need for skilling, and the impact of algorithmic bias on society at large. The European Union’s AI Act, approved by the European Parliament in June this year, seeks to classify AI systems according to the risks they pose to users. In the United States, however, the Algorithmic Accountability Act, which would require companies to assess their automated systems, is still awaiting passage.

Generative AI is stretching the realms of creativity to extremes. Fragile controls and misuse can cause unprecedented, unimaginable damage. More than anything else, that calls for concerted action by governments across beliefs and borders.

The 2023 edition of the Network Readiness Index, dedicated to the theme of trust in technology and the network society, will launch on November 20th with a hybrid event at Saïd Business School, University of Oxford. Register and learn more using this link

For more information about the Network Readiness Index, visit https://networkreadinessindex.org/


Anil Nair is the former Managing Director, Country Digitization for APJC at Cisco Systems. In his role, he was involved with government, industry leaders, and academia in accelerating national digitization strategies to drive economic growth, create jobs, and build innovative ecosystems across India, China/Taiwan, Australia, Indonesia, Japan, South Korea, Philippines, and Thailand.

He is the recipient of an Award for Professional Excellence by the Indian Institution of Industrial Engineers and has won the Udyog Rattan Award. He completed his Advanced Management degree at ISB-Kellogg Business School in Chicago.