MLNews

A new AI poll suggests that the experts are far behind the rest of us.

AI is turning out to be yet another public policy issue on which elites are out of touch with the public.

Leaders in Silicon Valley, Congress, and the Biden administration are all abuzz about AI. Killer robots with laser eyes dominate the discourse in the Valley, while the Beltway is spending significant political capital on addressing bias in algorithms.

Nonetheless, the general public isn’t very concerned about robots gaining control or biased algorithms. What worries people about AI are the consequences for national security and the potential for job losses.

Last month, our organization polled 1,000 US voters about AI’s top benefits and concerns, including killer computers, transparency, bias, job loss, and national security. The findings of the poll, conducted in collaboration with YouGov, reveal a disparity between the public’s worries and those of the people in power.

When it comes to AI, more than one-third of respondents say job loss is their top concern. This is consistent with the findings of a CGO study conducted earlier this summer, which found that four out of five Americans are concerned that AI will displace jobs “generally.” (It’s worth noting that only two out of every five Americans were concerned that AI might displace their own job.) After that, around one-quarter rank national security as the most pressing concern. Bias and killer computers sit near the bottom of the rankings, at around 10% each.

In other words, people are less concerned about the more speculative problems of killer robots and bias and more concerned about concrete harms such as job loss and losing the AI race to China.

It is widely assumed that the government has done little on AI, yet the Biden administration has been highly active, particularly in combating bias. To cite a few recent initiatives, it has taken “New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment,” addressed “Racial and Ethnic Bias in Home Valuations,” and outlined a “Blueprint for an AI Bill of Rights.” The goal of minimizing algorithmic bias runs through all of the executive branch’s initiatives.

Of course, the government can help police bad actors in housing, banking, and education. However, there are already rules in place to prohibit biased hiring, for example, and creating new rules for AI is far easier said than done because there is no agreement on what artificial intelligence is. Poorly conceived AI regulatory standards could end up sweeping in all software.

The same definitional challenges arise when analyzing bias, and different experts reach different conclusions. What is rarely acknowledged in these high-level debates is that the same data can yield drastically different results depending on who analyzes it. In several studies, independent research teams given identical datasets to analyze have reached conflicting conclusions. Even teams trying to minimize inequity may not agree on the same model or parameters.

The government must also consider the unintended consequences of its actions. Excessively stringent rules may hamper innovation and slow the adoption of new technology. Furthermore, the government must be mindful that the public may mistake its efforts to combat algorithmic bias for an attempt to regulate data privacy.

Bias and physical safety must not be overlooked. However, governments would do well to follow the American public and prioritize job loss and national security, ensuring the public understands the benefits and hazards of AI systems in these areas.

Aligning remedies with the clearer, more pressing problems is a strategy that serves AI policy as well as it serves every other area of public policy.
