MLNews

AI Doomsday Predictions Overshadow Immediate AI Challenges


According to a senior industry executive attending this week’s AI safety summit, concentrating on doomsday scenarios in artificial intelligence downplays urgent threats such as large-scale disinformation.

According to Aidan Gomez, co-author of a research paper that helped lay the groundwork for chatbot technology, long-term hazards from artificial intelligence should be “studied and pursued,” but they may divert politicians’ attention away from more immediate problems.

He said that, in terms of fundamental risk and public policy, he does not think it is an informative discussion to have; public policy, in his view, should focus on the public sector.

Gomez will attend the two-day summit, which begins on Wednesday, as CEO of Cohere, a North American startup that builds enterprise products such as chatbots. At the age of 20, Gomez was part of the Google research team that created the Transformer, a key technology behind the large language models that power products like chatbots.

Gomez stated that artificial intelligence – the term for computer systems that can perform tasks normally associated with human intelligence – is already in widespread use and that the summit should focus on these existing applications. Chatbots like ChatGPT and image generators like Midjourney have astounded the public by producing believable text and images from simple text prompts.

This technology has already been deployed in products with billions of users, such as those offered by Google and others. This raises a slew of new concerns, none of which are fundamental or doomsday risks, according to Gomez, who said the summit should “focus directly on the pieces that are about to affect people or are actively influencing people, as opposed to maybe the more academic and theoretical debate about the long-term future.”

Gomez said his main concern was misinformation – the spread of false or misleading information online. “Misinformation is one that is top of mind for me,” he said. “These models can produce media that is extremely convincing, compelling, and nearly indistinguishable from human-created text, images, or media.”

The government warned last week, in a series of documents outlining threats including AI-generated misinformation and job market disruption, that it could not rule out AI development reaching a point where systems threaten humanity.

A risk study released this week said: “Given the significant uncertainty in predicting developments, there is insufficient evidence to rule out that highly capable Frontier systems, if misaligned or inadequately controlled, could pose an existential threat.”

Reference

https://www.theguardian.com/technology/2023/oct/29/ai-doomsday-warnings-a-distraction-from-the-danger-it-already-poses-warns-expert

