The potential dangers of highly intelligent AI systems have long been a topic of concern for experts in the field.
Recently, Geoffrey Hinton – the so-called “Godfather of AI” – expressed his worries about the possibility of superintelligent AI surpassing human capabilities and causing catastrophic consequences for humanity.
Similarly, Sam Altman, CEO of OpenAI, the company behind the popular ChatGPT chatbot, admitted to being fearful of the potential effects of advanced AI on society.
In response to these concerns, OpenAI has announced the establishment of a new unit called Superalignment.
The primary goal of this initiative is to ensure that superintelligent AI does not lead to chaos or even human extinction. OpenAI acknowledges the immense power that superintelligence can possess and the potential dangers it presents to humanity.
While the development of superintelligent AI may still be some years away, OpenAI believes it could be a reality by 2030. Currently, no established system exists for controlling and guiding a potentially superintelligent AI, making proactive measures all the more crucial.
Superalignment aims to assemble a team of top machine learning researchers and engineers to develop a “roughly human-level automated alignment researcher” that will conduct safety checks on superintelligent AI systems.
OpenAI acknowledges that this is an ambitious goal and that success is not guaranteed. However, the company remains optimistic that with a focused and concerted effort, the problem of superintelligence alignment can be solved.
The rise of AI tools like OpenAI’s ChatGPT and Google’s Bard has already brought significant changes to the workplace and society. Experts predict that these changes will only intensify in the near future, even before the advent of superintelligent AI.
Recognising the transformative potential of AI, governments worldwide are racing to establish regulations to ensure its safe and responsible deployment. However, the lack of a unified international approach poses challenges: divergent regulations across countries could lead to inconsistent outcomes and make achieving Superalignment’s goal even more difficult.
By proactively working towards aligning AI systems with human values and developing necessary governance structures, OpenAI aims to mitigate the dangers that could arise from the immense power of superintelligence.
While the task at hand is undoubtedly complex, OpenAI’s commitment to addressing these challenges and involving top researchers in the field signifies a significant effort towards responsible and beneficial AI development.