Benjamin Godfrey is a blockchain enthusiast and journalist who relishes writing about the real-life applications of blockchain technology and innovations that drive general acceptance and worldwide integration of the emerging technology. His desire to educate people about cryptocurrencies inspires his contributions to renowned blockchain-based media and sites. Benjamin Godfrey is a lover of sports and agriculture.
Besides the complaint filed by the Center for AI and Digital Policy that may trigger a probe of OpenAI, there are a number of other targeted efforts to curb the development of AI systems across the board.
OpenAI, the fast-growing Artificial Intelligence (AI) startup behind the popular platform ChatGPT, may be facing a probe from the Federal Trade Commission (FTC) for violating Section 5 of the FTC Act, which targets deceptive and unfair practices. The probe might be triggered by a formal complaint filed by the Center for AI and Digital Policy.
“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices,” the center’s founder and president, Marc Rotenberg, said in a statement. “We believe that the FTC should look closely at OpenAI and GPT-4.”
That ChatGPT has taken center stage in AI discourse is no exaggeration, as the platform amassed more than 100 million users within the first few months of its release. The recently launched GPT-4 is even more advanced; however, there have been complaints about the application giving false answers.
The complaint alleges that OpenAI violated Section 5 by employing deceptive tactics to market its product to consumers and squeeze out the competition. The complaint may receive closer attention from the regulator, which recently updated its guidance to call on firms like OpenAI to desist from overstating their products' capabilities.
In addition, the FTC wants developers of AI programs to explore all the potential risks of their products.
“You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market,” the agency wrote. “If something goes wrong—maybe it fails or yields biased results—you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test.”
OpenAI Probe amid Call to Halt AI Development
The complaint filed by the Center for AI and Digital Policy is not the only targeted effort to curb the development of AI systems. In a recent move, about 1,000 tech leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, called for a halt to the development of advanced AI systems for at least six months.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors,” the leaders said in their open letter. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Besides this, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has also called for the formation of a global ethical framework for artificial intelligence systems. With this growing clamor, the excitement surrounding ChatGPT and Google's Bard may be tempered if regulatory scrutiny of these solutions is heightened.