In March 2023, AI industry leaders came together to warn society of a risk so substantial that it warranted a “pause” in AI development. The letter, now signed by almost 34,000 people, is titled “Pause Giant AI Experiments: An Open Letter.” Its intention was to give us the time needed to regulate AI language models more powerful than OpenAI’s GPT-4, and to avoid the unintended consequences of an exponential technology spinning out of control.

This MIT Technology Review article provides a situational review from MIT professor Max Tegmark, the founder and president of the Future of Life Institute, which helped originate the letter.

“What’s not great is that all the companies are still going full steam ahead and we still have no meaningful regulation in America. It looks like US policymakers, for all their talk, aren’t going to pass any laws this year that meaningfully rein in the most dangerous stuff.”

– MIT professor Max Tegmark, founder and president of FLI

Why Regulate?

Regulation of AI is a way to ensure that AI benefits society over the long term and does not evolve solely to help the wealthy build more wealth; AI has an important role to play in improving society. Regulation prepares for a time when AI will power systems that influence what it means to be human and our evolutionary trajectory. It recognizes the potential for misuse that could lead to dystopian levels of wealth inequality or provide resources that destroy rather than harmonize social order. Take, for example, efforts to enhance human capabilities through transhumanism, which embeds AI into biological functions using techniques like brain-computer interfaces. Regulation of artificial general intelligence (AGI) might include defining the role of review boards, encouraging research into safe AI, prioritizing risk-reducing strategies over risk-taking strategies in AI development, or setting limits that discourage the use of AI for destructive purposes (AI should not lead to anarchy).

Compare this lack of regulation with other industries, where regulations limit risk and in many cases allow a sector to sustain itself by establishing boundaries. Certainly, regulation can stifle creativity or economic growth, but a balance can and must be found.

While falling far short of what should be expected of our federal government, the Biden-Harris Administration secured voluntary commitments from 15 companies: OpenAI, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI. These companies have committed to:

- ensure AI products undergo both internal and external security testing before public release;
- share information on the management of AI risks with industry, governments, civil society, and academia;
- prioritize cybersecurity and protect proprietary AI system components;
- develop mechanisms, such as watermarking, to inform users when content is AI-generated;
- publicly report on their AI systems’ capabilities, limitations, and areas of use;
- prioritize research on the societal risks posed by AI, including bias, discrimination, and privacy concerns; and
- develop AI systems that address societal challenges, ranging from cancer prevention to climate change mitigation.

What are your views on regulating AI? How is the lack of regulation harming or helping you or your business?