In an open letter citing possible risks to society and humanity, Elon Musk and a group of artificial intelligence (AI) experts and business executives are urging a six-month halt to the development of systems more powerful than OpenAI’s recently released GPT-4.
Microsoft-backed OpenAI released GPT-4, the fourth version of its GPT (Generative Pre-trained Transformer) AI program, earlier this month. The system has wowed users with its wide range of applications, including engaging them in human-like conversation, composing songs, and summarizing documents.
More than 1,000 individuals, including Musk, signed a letter from the nonprofit Future of Life Institute, which urged a halt to the development of advanced artificial intelligence until standardized safety guidelines for such systems were created, implemented, and independently audited.
The letter stated that “powerful AI systems should only be developed once we are confident that their effects will be positive and their risks will be manageable.”
The letter also outlined the potential dangers that human-competitive AI systems pose to society and civilization, including political and economic turmoil, and urged developers to collaborate with regulators and lawmakers on governance and regulatory frameworks.
Musk, meanwhile, has been outspoken about his worries regarding AI, even as his carmaker Tesla uses the technology in its Autopilot system.
Since its debut in late 2022, OpenAI’s ChatGPT has prompted rivals to expedite the creation of similar large language models and businesses to incorporate generative AI models into their products.
Critics asserted that claims regarding the technology’s current potential had been greatly exaggerated and accused the letter’s signatories of promoting “AI hype.”
According to Umeå University assistant professor and AI researcher Johanna Björklund, “these remarks aim to create excitement.” She adds, “The purpose of it is to make people anxious. I don’t think there’s a need to pull the handbrake.”
She suggested that greater transparency requirements for AI researchers be put in place rather than pausing research. “You should be very transparent about how you conduct AI research,” she advised.