In a memo published by the Office of Management and Budget, the Biden administration provided new guidelines to government agencies on how they can, and cannot, use artificial intelligence.

It’s a big step towards ensuring the safe use of AI, a challenge that other nations and private businesses are also grappling with.

A draft of the guidelines was made public last autumn, before Vice President Harris travelled to the United Kingdom for the first-ever global AI safety summit. The document was then opened to public comment before being issued in its final form on Thursday.

Harris noted that the standards are “binding” and underlined the need for guidelines worldwide that put the public interest first.

In a call with reporters on Wednesday, Harris said, “President Biden and I intend that these domestic policies serve as a model for global action. We will continue to call on all nations to follow our lead and put the public interest first when it comes to government’s use of AI.”

The agency guidelines aim to strike a balance between promoting innovation and mitigating the risks associated with artificial intelligence.

The guidelines also require each agency to appoint a chief artificial intelligence officer, a senior official who will oversee AI implementation. The document further describes steps the government is taking to expand its AI workforce, such as hiring 100 or more specialists in the field before the end of the summer.

Shalanda Young, the director of the Office of Management and Budget (OMB), stated that “the public deserves trust that the federal government will use the technology appropriately.”

Agencies must put AI safeguards in place by December 1st.
By December 1, agencies must have appropriate safeguards in place for any AI technology they utilise, according to the guidelines. If an agency cannot provide those protections, it must stop using the technology, unless it can demonstrate that the system is essential to its operations.

Among the required protections is a process for evaluating, testing, and tracking the effects of AI technology, but the guidelines offer little detail on how that process should actually work.

Alexandra Reeve Givens, president and CEO of the Center for Democracy & Technology, told NPR she is concerned that it remains unclear what the testing procedures will look like and which government officials will have the expertise needed to approve the technology.

“This seems like the first step to me,” Reeve Givens said, with detailed practice standards and expectations for what constitutes good auditing, for instance, still to come. “There’s a lot more work to be done.”

Reeve Givens is looking forward to the administration’s guidance on the procurement process and the conditions that will apply to businesses whose AI technology the government wants to purchase.

“That really is the inflection point when a lot of decisions and values can be made and a lot of testing can be done before the government is spending dollars on the system in the first place,” she stated.

Agency transparency will enable closer scrutiny
According to Reeve Givens, the OMB’s transparency guidance was especially notable.

The new guidelines mandate that agencies publish an easily accessible online inventory of their AI use cases and the associated risks each year. “That clause is crucial,” according to Reeve Givens.

That lets the public ask questions such as what testing an agency did and what the results looked like, Reeve Givens said, and while those use cases may draw greater public interest and scrutiny, the inventory “provides us with the impetus to initiate that dialogue.”

However, the Department of Defense and the intelligence agencies are not required to disclose how they employ AI.

The guidelines can act as a “catalyst” to increase AI usage.
The OMB guidelines are also meant to promote AI innovation. Emory University law professor Ifeoma Ajunwa told NPR that the guidelines signal to agencies that it’s acceptable to explore the use of artificial intelligence.

“I think this will be a catalyst for agencies that may perhaps have had some trepidation or reservation about using AI technologies,” she said.

“I don’t want agencies to take this as carte blanche to use AI technologies in all instances,” Ajunwa said. “But I do want them to see this as an opening, as a catalyst that they can use it when appropriate and when safety guardrails have been put in place.”

A number of federal departments already use artificial intelligence, but the Biden administration’s memo describes further applications of the technology that could be significant, such as tracking the spread of disease and opioid use or predicting extreme weather events.
