
Biden administration asks public for help regulating AI systems like ChatGPT

Federal regulators in the Biden administration are asking the public for comment as they study how artificial intelligence (AI) programs like ChatGPT should be regulated.

Federal regulators are asking the public for input on policies that would hold artificial intelligence (AI) systems accountable and help manage risks from the rapidly growing and powerful technology.

As programs like ChatGPT gain popularity for their astounding ability to answer written questions with human-like responses, policymakers and tech experts are increasingly concerned about their potential for misuse, including how artificially generated news reports can rapidly spread fabricated and false information. Now that ChatGPT has more than 100 million monthly active users, the government is beginning to study how these programs should be regulated.

The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, solicited public feedback Tuesday as it works to develop policies to "ensure artificial intelligence (AI) systems work as claimed – and without causing harm."

The agency wants the public to weigh in as it considers how best to create rules for AI audits, assessments, certifications and other means of making sure AI programs "work as claimed." 


"Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them," said Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA Administrator. 

"Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems," Davidson added.

President Joe Biden last week said it remained to be seen whether AI is dangerous. "Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," he said. The White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights that outlines guiding principles for AI development, including safety, data privacy, and safeguards to prevent discrimination by AI algorithms.


ChatGPT, which has wowed some users with quick responses to questions and caused distress for others with inaccuracies, is made by California-based OpenAI and backed by Microsoft Corp.

Last month, more than 1,000 tech innovators and artificial intelligence experts, including Tesla and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and others, signed an open letter calling for a pause in AI development until policymakers are able to put "robust AI governance systems" in place. The letter cited grave dangers AI poses to society, including risks of propaganda and lies spread through AI-generated articles that look real, and even the possibility that AI programs could outperform workers and make jobs obsolete.

George Washington University law professor Jonathan Turley recently called attention to some of these risks after he was falsely accused of sexual harassment by ChatGPT, which cited a fabricated article supporting the allegation. 


"You had an AI system that made up entirely the story, but actually made up the cited article and the quote," Turley said on Fox News' "America Reports" Monday. 


"I was fortunate to learn early on, in most cases this will be replicated a million times over on the internet and the trail will go cold. You won’t be able to figure out that this originated with an AI system," he warned. "And for an academic, there could be nothing as harmful to your career as people associating this type of allegation with you and your position. So I think this is a cautionary tale that AI often brings this patina of accuracy and neutrality."

Fox News' Yael Halon and Reuters contributed to this report.
