President Biden Takes a Risk by Signing an AI Executive Order
President Biden signed a sweeping artificial intelligence executive order on Monday, mobilizing federal agencies and invoking broad emergency authority to harness the technology's potential while managing its risks.

The effort is the US government's most ambitious attempt yet to encourage innovation while addressing concerns that the emerging technology may perpetuate bias, displace jobs, and endanger national security.

To realize the promise of AI and avoid its risks, the technology must be governed, Biden said in a White House address before Monday's signing, calling the order the most significant step any government in the world has taken on AI safety, security, and trust.

The order comes as policymakers and regulators around the world consider new measures to oversee the technology's deployment. Efforts in Congress to pass comprehensive AI legislation remain in their early stages, leaving federal agencies to rely on existing safeguards.

According to a White House statement, the order imposes new safety duties on AI developers and directs a host of government departments to reduce the technology's risks while examining their own use of it.

The order requires businesses developing the most advanced AI systems to conduct safety testing, known as "red teaming," and to notify the government before releasing products. The directive requires corporations to share red-teaming results with the government under the Defense Production Act, a 1950 law that has been invoked in crises such as the coronavirus pandemic and the baby formula shortage.

Biden stated that such authority is normally reserved for times of war, and that he intends to use the same power to require companies to prove their most potent systems are safe before they are put to use.

According to a copy of the directive obtained by The Washington Post, the order also leverages federal purchasing power, requiring the government to employ risk-management practices when using AI that could affect people's rights or safety. Agencies will be obligated to regularly monitor and evaluate deployed AI.

The order further directs the government to create rules requiring corporations to label AI-generated content, a process known as watermarking, and instructs other departments to consider how the technology may disrupt their areas of responsibility.
