Unintended consequences: U.S. election results herald reckless AI development

While the 2024 U.S. elections focused on traditional issues like the economy and immigration, their quiet impact on AI policy could prove even more transformative. Without a single debate question or major campaign promise about AI, voters inadvertently tipped the scales in favor of accelerationists, those who advocate rapid AI development with minimal regulatory hurdles. The implications of this acceleration are profound, heralding a new era of AI policy that prioritizes innovation over caution and signaling a decisive shift in the debate over the potential risks and rewards of AI.

President-elect Donald Trump’s pro-business stance leads many to assume that his administration will favor those who develop and commercialize AI and other advanced technologies. His party’s platform says little about AI, but it emphasizes an approach focused on repealing AI regulations, particularly targeting what it describes as “radical left-wing ideas” in the outgoing administration’s existing executive orders. In contrast, the platform supports AI development aimed at fostering free speech and “human flourishing,” calling for policies that enable AI innovation while opposing measures perceived as hindering technological progress.

Early indications based on appointments to senior government positions underscore this direction. But a larger story is unfolding: the resolution of the intense debate over the future of AI.

An intense debate

Since ChatGPT emerged in November 2022, there has been a heated debate between those in the AI field who want to speed up AI development and those who want to slow it down.

Famously, in March 2023 the latter group proposed a six-month pause in the development of the most advanced AI systems, warning in an open letter that AI tools pose “profound risks to society and humanity.” The letter, launched by the Future of Life Institute, was prompted by OpenAI’s release of the GPT-4 large language model (LLM) several months after the launch of ChatGPT.

The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories eventually grew to more than 33,000. Collectively, they became known as “doomers,” a term describing their concerns about the potential existential risks of AI.

Not everyone agreed. OpenAI CEO Sam Altman did not sign. Neither did Bill Gates and many others. Their reasons varied, although many expressed concern about the potential harms of AI. The episode fueled much discussion about the possibility of AI running amok and leading to disaster. It has become fashionable for many in the AI field to share their assessment of that probability, often referred to as p(doom). Even so, work on AI development has not stopped.

For the record, my p(doom) in June 2023 was 5%. This may seem low, but it’s not zero. I felt that major AI labs were sincere in their efforts to rigorously test new models before release and to provide important guardrails for their use.

Many observers concerned about the dangers of AI have put the existential risk higher than 5%, and some much higher. AI safety researcher Roman Yampolskiy has assessed the likelihood of AI ending humanity at more than 99%. That said, a study published earlier this year, well before the election and representing the views of more than 2,700 AI researchers, found that “the median prediction for extremely bad outcomes, such as human extinction, was 5%.” Would you board a plane if there were a 5% chance it would crash? This is the dilemma facing AI researchers and policymakers.

We have to go faster

Others openly dismissed concerns about AI, emphasizing instead what they saw as the technology’s enormous upside. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (professor of computer science and engineering at the University of Washington and author of “The Master Algorithm”). They argue that AI is part of the solution. As Ng has pointed out, there are indeed existential dangers, such as climate change and future pandemics, and AI can contribute to how they are addressed and mitigated.

Ng argued that AI development should not be paused but accelerated. This utopian view of technology has been taken up by others, collectively known as “effective accelerationists,” or “e/acc” for short. They argue that technology, and particularly AI, is not the problem but the solution to most, if not all, of the world’s problems. Garry Tan, CEO of startup accelerator Y Combinator, along with other prominent Silicon Valley executives, added “e/acc” to their usernames on X to signal alignment with the vision. New York Times journalist Kevin Roose captured the essence of these accelerationists, writing that they take an “all gas, no brakes approach.”

A Substack newsletter from a few years ago described the principles behind effective accelerationism. Here is the summary it offers at the end of the article, along with a comment from OpenAI CEO Sam Altman.

AI acceleration ahead

The outcome of the 2024 election may be seen as a turning point, putting the accelerationist vision in a position to shape U.S. AI policy for the coming years. For example, the president-elect recently named David Sacks, a tech entrepreneur and venture capitalist, as “AI czar.”

Sacks, a vocal critic of AI regulation and a proponent of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist views expressed by his party’s new platform.

In response to the Biden administration’s 2023 AI executive order, Sacks tweeted: “The United States’ political and fiscal situation is hopelessly broken, but we have an unprecedented asset as a country: cutting-edge AI innovation, driven by a completely free and unregulated market for software development. That just ended.” Although the influence Sacks will have on AI policy remains to be seen, his appointment signals a shift toward policies favoring industry self-regulation and rapid innovation.

Elections have consequences

I doubt most voters gave much thought to AI policy implications when they cast their ballots. Nonetheless, in a very tangible way, the accelerationists won in the election, potentially sidelining those who advocated a more cautious federal approach to mitigating AI’s long-term risks.

As the accelerationists chart the path forward, the stakes could not be higher. Whether this era marks the beginning of unprecedented progress or of unintended catastrophe remains to be seen. As AI development speeds up, the need for informed public discourse and vigilant oversight becomes ever more paramount. How we navigate this era will define not only technological progress but also our collective future.

To counteract a lack of action at the federal level, one or more states may adopt regulations of their own, as has already happened to some extent in California and Colorado. California’s AI safety bills, for example, focus on transparency requirements, while Colorado addresses AI-driven discrimination in hiring practices, offering models for governance at the state level. Now, all eyes will be on the voluntary testing and self-imposed guardrails at Anthropic, Google, OpenAI and other AI model developers.

In summary, the accelerationist victory means fewer restrictions on AI innovation. That added speed may indeed lead to faster innovation, but it also raises the risk of unintended consequences. I am now revising my p(doom) to 10%. What is yours?

Gary Grossman is senior vice president of the technology practice at Edelman and global head of the Edelman AI Center of Excellence.

