
AI will eventually need an international authority, OpenAI leaders say

AI will eventually need an international authority to regulate future superintelligence, according to the leaders of OpenAI.

The artificial intelligence field needs an international watchdog to regulate future superintelligence, according to OpenAI's leadership.

In a blog post, CEO Sam Altman and fellow OpenAI leaders Greg Brockman and Ilya Sutskever said that, given the potential existential risk, the world "can't just be reactive," comparing the technology to nuclear energy.

To that end, they suggested coordination among leading development efforts, noting there are "many ways this could be implemented," including a project set up by major governments or a collective limit on the annual rate of growth in AI capability.

"Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc." they asserted. 


The International Atomic Energy Agency, of which the U.S. is a member state, is the international center for cooperation in the nuclear field.

The authors said tracking compute and energy usage could go a long way toward making such oversight workable.

"As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say," the blog continued. 

Third, they said the technical capability to make a superintelligence safe is needed.


There are some facets that are "not in scope," they noted: developing models below a significant capability threshold should be allowed "without the kind of regulation" they described, and the focus on the systems they are "concerned about" should not be watered down by "applying similar standards to technology far below this bar." Still, they said the governance of the most powerful systems must have strong public oversight.

"We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves," they said. 

The trio believes it is conceivable that AI systems will exceed expert skill level in most domains within the next decade. 

So why build the technology at all, given the risks and difficulties it poses?

They claim AI will lead to a "much better world than what we can imagine today," and that it would be "unintuitively risky and difficult to stop the creation of superintelligence."

"Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right," they said.

Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.
 
 
Copyright © 2010-2020 Burlingame.com & California Media Partners, LLC. All rights reserved.