
OpenAI's Superalignment team is developing control strategies for super-intelligent AI

OpenAI says it is making progress in its ability to steer highly intelligent AI systems, according to a recent WIRED report. The Superalignment team, led by OpenAI's chief scientist Ilya Sutskever, has developed a method to guide the behavior of AI models as they grow more capable.

The Superalignment team, established in July, focuses on ensuring that AI remains safe and controllable when it reaches and exceeds human intelligence. "AGI is rapidly approaching," Leopold Aschenbrenner, a researcher at OpenAI, told WIRED. "We're going to see superhuman models; they're going to have vast capabilities, and they could be very dangerous, and we don't yet have the means to control them."

The new research paper from OpenAI outlines a technique called weak-to-strong supervision, in which a less capable AI model supervises the behavior of a more capable one. The approach aims to preserve the stronger model's capabilities while ensuring it adheres to safety and ethical standards, and the researchers see it as a stand-in for the harder future problem of humans supervising superhuman AIs.
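The paper does not ship training code, but the core loop it studies — fine-tune a strong "student" model on labels produced by a weaker supervisor rather than on ground truth — can be sketched with toy models. Everything below (the logistic-regression stand-ins, the feature split, the sizes) is an illustrative assumption, not OpenAI's implementation; the weak supervisor is handicapped by seeing only a few features, mimicking a weaker model's imperfect labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip to avoid overflow in exp for large-magnitude logits.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

def train_logreg(X, y, steps=500, lr=0.5):
    """Plain logistic regression via gradient descent (toy stand-in for an LLM)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Synthetic task: ground-truth labels depend on all 10 features.
X = rng.normal(size=(2000, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)

# "Weak supervisor": only sees the first 3 features, so its labels are imperfect.
w_weak = train_logreg(X[:, :3], y)
weak_labels = (sigmoid(X[:, :3] @ w_weak) > 0.5).astype(float)

# "Strong student": full capacity, but trained only on the weak supervisor's labels.
w_strong = train_logreg(X, weak_labels)

acc = lambda w, Xs: np.mean(((Xs @ w) > 0).astype(float) == y)
print(f"weak supervisor accuracy: {acc(w_weak, X[:, :3]):.2f}")
print(f"strong student accuracy:  {acc(w_strong, X):.2f}")
```

The question the paper asks is precisely whether the student can end up *better* than the labels it was trained on; in this linear toy the student mostly imitates its supervisor, which is the failure mode the paper's techniques try to overcome.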

The experiments used OpenAI's older GPT-2 text generator to supervise GPT-4, a far more advanced system. The researchers tested two methods to keep GPT-4's performance from degrading under the weaker supervision: the first trained progressively larger intermediate models, and the second added an algorithmic modification to GPT-4's training. The latter proved more effective; however, the researchers concede that reliable control of behavior is not yet guaranteed.
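The "algorithmic modification" mentioned above corresponds to what the underlying paper describes as an auxiliary confidence loss: the student is trained mostly on the weak supervisor's labels, but an extra term rewards it for staying confident in its own predictions even where they disagree with the supervisor. A minimal sketch of such a mixed loss follows; the weighting `alpha`, the hard-thresholding, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cross_entropy(p, target):
    """Binary cross-entropy between predicted probability p and a 0/1 target."""
    eps = 1e-9
    return -(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps))

def confidence_weighted_loss(student_prob, weak_label, alpha=0.3):
    """Blend the weak supervisor's label with the student's own hardened
    prediction, so the student is not forced to imitate the supervisor's
    mistakes on examples where it is already confident."""
    self_label = (student_prob > 0.5).astype(float)  # student's own hard guess
    return ((1 - alpha) * cross_entropy(student_prob, weak_label)
            + alpha * cross_entropy(student_prob, self_label))

# Example: the student is 90% confident in class 1, the weak supervisor says 0.
p = np.array([0.9])
loss_plain = cross_entropy(p, np.array([0.0]))           # pure imitation loss
loss_aux = confidence_weighted_loss(p, np.array([0.0]))  # mixed loss
print(loss_aux[0] < loss_plain[0])  # → True: disagreement is penalized less
```

The design intuition is that a strong model should be allowed to "overrule" a weak supervisor where its own signal is strong, instead of faithfully reproducing the supervisor's errors.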

Industry response and future directions
Dan Hendrycks, director of the Center for AI Safety, acknowledged OpenAI's proactive approach to controlling superhuman AIs. The Superalignment team's research is viewed as a significant first step, but further research and development are needed to ensure control methods remain effective.

OpenAI plans to dedicate a substantial share of its computing resources to the Superalignment project and is calling for outside collaboration. Together with Eric Schmidt, OpenAI is offering $10 million in grants for researchers working on AI control techniques, and a conference on superalignment is planned for next year to explore the area further.

Ilya Sutskever is a co-founder of OpenAI, a key figure in the company's technical advances, and a leader of the Superalignment team. His involvement in the project is vital, especially following the recent governance turmoil at OpenAI, and his leadership and expertise have been instrumental in moving the work forward.

Developing strategies to control super-intelligent AI is a challenging and urgent task. As AI technology advances rapidly, ensuring that it is aligned with human values and safety becomes increasingly critical. OpenAI's initiative in this area is a major step, but the path to reliable and effective AI control remains ongoing and will require collaboration from the entire AI research community.