AI Regulation - Global Consensus Quest
“A heart is not judged by how much you love, but by how much you are loved by others,” the Wizard tells the Tin Man in The Wizard of Oz. The Tin Man was among the earliest pop-cultural references to artificial intelligence (AI). AI has been around for decades and has continued evolving in its power and use. The first person publicly known to dive into its realm of possibilities was Alan Turing, a British polymath. In his 1950 paper, “Computing Machinery and Intelligence,” he observed that people form their understanding of the world using available information and their reasoning abilities. This was the premise of his inquiry into AI: are machines capable of doing the same?
Technology at the time did not enable computers to store information; they could only execute commands. This was a critical piece of the puzzle in further exploring AI, as computational power was crucial to achieving AI milestones. In 1965, Gordon Moore predicted that the capacity and capabilities of computers would continue to double roughly every two years, with chips becoming increasingly faster and smaller over time. This became known as Moore’s Law, and it has largely held true to this day.
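The arithmetic behind that doubling is simple but dramatic. Here is a minimal Python sketch, assuming a strict two-year doubling from an illustrative 1971 baseline of about 2,300 transistors (the Intel 4004); projecting forward five decades lands in the tens of billions of transistors, the same order of magnitude as the largest chips actually shipping today:

```python
# Moore's Law as back-of-the-envelope arithmetic: transistor counts
# double roughly every two years.
# Illustrative baseline: Intel 4004 (1971), ~2,300 transistors.
BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300

def projected_transistors(year: int) -> int:
    """Project the transistor count for a given year under a strict
    two-year doubling cadence."""
    doublings = (year - BASELINE_YEAR) / 2
    return int(BASELINE_TRANSISTORS * 2 ** doublings)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,} transistors")
    # 1971: ~2,300 ... 2021: ~77,175,193,600
```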
One of the hallmark examples of AI’s potential to outsmart human intelligence came in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov, winning the deciding game of their match in just 19 moves. This was an incredible technological breakthrough: there are more possible variations of chess games than there are atoms in the observable universe, meaning that Deep Blue had to perform staggeringly complex calculations, evaluating millions of positions per second, to outplay an iconic chess grandmaster.
Deep learning builds on neural networks, which became popular in the 1980s thanks to John Hopfield and David Rumelhart, whose efforts at enhancing neural networks allowed AI to learn through experience. Their successes and failures, along with contributions from the likes of Edward Feigenbaum, the “father of expert systems,” opened the doors to further understanding of AI models and systems. Expert systems are computer programs designed to think and reason the way an expert individual or organization would.
Fast-forward to today: AI has both wowed and frightened the world, prompting professionals across sectors to come together and formulate comprehensive strategies to manage AI’s rapid pace of development and implementation.
Since ChatGPT’s release in November 2022, the adoption and use of generative AI has spread like wildfire, and just as wildfire is unpredictable and dangerous, many perceive AI’s potential similarly. ChatGPT is a large language model (LLM): it uses machine learning to ingest vast amounts of text, identify patterns in that data, and predict which words are most likely to come next. With corporations globally investing more than $93 billion in AI in 2021, it comes as no surprise that as the technology has advanced, its use in our daily lives and in the professional sphere has seen a significant uptick.
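To make the “predict the next word” idea concrete, here is a deliberately tiny sketch in Python: a toy bigram model that simply counts which word follows which in a small corpus. Real LLMs like ChatGPT learn vastly richer patterns with neural networks spanning billions of parameters, but the underlying task, predicting a likely next word from patterns in data, is the same:

```python
from collections import Counter, defaultdict

# Tiny stand-in for the vast text corpus an LLM trains on.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count which word follows which (a bigram model). Real LLMs capture far
# longer-range patterns, but the goal is the same: predict the next word.
follows: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice here)
print(predict_next("on"))   # -> "the"
```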
So much so that in just the first five days after its launch, ChatGPT gained one million users, far outpacing the early growth of Instagram, Spotify, Facebook, and others. OpenAI’s DALL-E also garnered a massive user base early on, with 1.5 million users creating over 2 million images every day as of September 2022.
Using AI to generate text and images enables people to perform a myriad of tasks much faster and more efficiently. However, the technology also raises concerns. An article from INSEAD, one of the world’s most prestigious graduate business schools, notes: “from the perspective of consumer trust and safety, the exponential growth of content made possible with technologies such as ChatGPT has made content moderation - a critical issue for our online trust and safety - more challenging for online platforms. Moreover, the role of AI in creating information filters and bubbles has been put in the spotlight.”
The article also notes that “talent development is another consideration. Puranam cautioned that over-reliance on LLMs can atrophy our skills, particularly in creative and critical thinking. Companies should avoid the myopic view of automating lower-end work just because technology allows for it.” This is a valid concern: the dependence people develop on technology to expedite their work can erode professional skills, breed complacency, and, in extreme cases, even lead to fraud.
Technology is meant to enable people to enhance their quality of life and their ability to perform tasks, not to replace them. Adam Horlock, Founder and Principal Strategist at Pinnacle Public Relations Agency, wrote: “The controversy around ChatGPT is an example of misplaced faith in the power of artificial intelligence. This technology will continuously disrupt tech and many industries for years to come. It will improve, simplify tasks, get content written faster, and forever change our work. What it will not do is replace human experience and connection.”
While generative AI models are excellent for developing ideas and helping refine text and images, the information they draw on can be unreliable, and they are not yet capable of generating reliably authentic text. One recent example of a professional using this technology without fully understanding it is a lawyer who built part of his case with ChatGPT.
The lawyer, Steven Schwartz, was representing an individual who claimed he had sustained injuries from a serving cart on a commercial flight. Schwartz used ChatGPT to help him find similar cases; however, because the information it generates is unreliable, the cases he intended to cite as precedent turned out never to have existed. When Schwartz presented his conversations with the chatbot to the court, they showed that it had assured him the cases were real and could be found in reputable legal databases, which was also false.
Given such instances and the technology’s meteoric rise, industry experts around the world are raising red flags, urging regulators to pay attention and act fast.
In May of this year, Sam Altman, OpenAI’s CEO, called for collaborative efforts from the US and China on AI’s future and development. Aside from the two superpowers, Europe is also included in efforts to better monitor and control AI’s development. According to Forbes, “as lawmakers work through the politics of possible AI regulation, players like Altman will continue speaking to all governments that will listen, attempting to shift and shape regulations so they align with company mission statements and bottom lines.”
Several countries already have AI acts in place, yet they do not all point in the same direction, making it challenging for organizations that develop AI technologies to adhere to a single global standard.
The first-ever comprehensive legal framework on AI has been proposed by the European Commission, aiming to address the technology’s risks and establish Europe as a global leader in the field. The framework provides clear requirements and obligations for AI developers, deployers, and users while reducing burdens for businesses, especially SMEs. The proposal sorts AI systems into risk-based categories, from unacceptable risk through high risk and limited risk down to minimal or no risk, with specific obligations and assessments attached to high-risk systems.
The regulation is meant to ensure safety, fundamental rights, and trust in AI, with strict rules for high-risk applications and transparency requirements for limited-risk systems. The legislation is designed to be future-proof and adaptable to technological advancements, and it is expected to become fully applicable in the second half of 2024.
The US has drafted a “Blueprint for an AI Bill of Rights” built on five principles:
- Safe and Effective Systems - Automated systems should be developed, tested, and monitored to ensure safety, effectiveness, and adherence to standards.
- Algorithmic Discrimination Protections - Systems should be designed to avoid unjustified discrimination and promote equity, including through proactive assessments, representative data, accessibility, and oversight.
- Data Privacy - Individuals should have control over their data, with protections against abusive practices and privacy violations, including consent, transparency, and restrictions on surveillance.
- Notice and Explanation - People should be informed about the use of automated systems, with clear explanations of outcomes and system functioning, as well as accessible and up-to-date notice.
- Human Alternatives, Consideration, and Fallback - Individuals should have the option to opt out of automated systems, access human alternatives, and seek timely human consideration and remedies for problems encountered.
The “blueprint” emphasizes that these principles should apply to all automated systems with the potential to impact people’s rights, opportunities, or access to critical resources or services. The framework aims to protect civil rights, equal opportunity, and privacy while accounting for the technology’s evolution.
China, meanwhile, has been called on to draft its own AI law within the year, and there has been some discussion of what it will consist of. According to the South China Morning Post, the proposed law aims to regulate the development of generative AI services like ChatGPT. The draft law will be reviewed by the National People’s Congress Standing Committee, and the broader legislative plan also covers measures on telecommunications, internet data security, drones, and amendments to the Foreign Trade Law.
These moves reflect the global need for a robust legal framework for AI, as breakthroughs in generative AI stand to revolutionize content creation. China’s efforts in AI regulation have garnered attention worldwide, including from Elon Musk.
The rapid development and deployment of AI technologies, including the likes of ChatGPT, have generated both excitement and concern globally. While AI has a rich history, from Alan Turing’s exploration of machine capabilities to IBM’s Deep Blue defeating Garry Kasparov, the recent surge in adoption has brought new controversies and risks.
The exponential growth of AI-generated content has made it harder for online platforms to moderate content and ensure it is authentic and reliable. There are also concerns about the potential atrophy of human creativity and critical thinking. Misusing generative AI can yield disastrous results, as Steven Schwartz’s legal case shows, highlighting the need for caution and understanding when implementing and using these technologies.
As governments, corporations, and field experts push for regulation and oversight, a collective consensus on how to regulate and monitor AI may be taking shape. The European Commission has proposed a framework aimed at ensuring safety, fundamental rights, and trust in AI; the US’s “blueprint” outlines principles for safe and equitable automated systems; and China is expected to release its proposed AI law soon, with a focus on regulating generative AI services.
It is critical to strike a balance between harnessing AI’s potential and preserving human experience and connection. The future of AI regulation lies in collaborative efforts between governments, organizations, and other stakeholders to shape policies that align with organizational missions while protecting individuals’ rights, privacy, and equity.