MLNews

New AI products: Whether we’re ready or not, a torrent of new AI products has just arrived.

SAN FRANCISCO (AP) — This week, Big Tech unveiled a slew of new AI products capable of reading emails and documents and interacting in a human-like manner. Yet even before their public debuts, these new tools were making errors, fabricating information or getting simple facts mixed up, a sign that the tech titans are rushing out their latest inventions before they are entirely ready.

Google claimed that its Bard chatbot can summarize files from Gmail and Google Docs, but users demonstrated that it was fabricating emails that had never been sent. OpenAI celebrated its new DALL-E 3 image generator, but social media users quickly pointed out that the images in the published demos were missing essential details. And while Amazon introduced a new conversational mode for Alexa, the device repeatedly recommended a museum in the wrong part of the country during a demo for The Washington Post.

The tech titans are racing to control groundbreaking “generative” artificial intelligence technology that can write human-like language and produce realistic-looking images. Getting more people to use the tools generates the data needed to improve them, which gives the companies an incentive to put them in as many hands as possible. However, many experts, including tech leaders, have warned against launching largely new and untested technology.

“There’s a horrible sense of FOMO among big tech companies, and they don’t want to miss out on generating an early audience,” said Steve Teixeira, Mozilla’s chief product officer and a former executive at Microsoft, Facebook, and Twitter. “They’re all aware that these systems aren’t perfect.”

The corporations claim that they have been explicit that their AI is a work in progress and that they have taken precautions to prevent the technology from making offensive or biased claims. Some leaders, such as OpenAI CEO Sam Altman, think it’s better to have people using the tools now, so the hazards they pose become clear before the technology grows more powerful.

The authorities have already taken note. Although Congress has convened numerous meetings and hearings, and multiple bills have been presented, little meaningful action has been taken against the firms. Last week, tech leaders including Tesla CEO Elon Musk and Facebook CEO Mark Zuckerberg gathered to face questions from senators, who have said they intend to design legislation to control the technology.

Legislators in the European Union are pressing forward with legislation prohibiting specific uses of artificial intelligence, such as predicting criminal behavior, while establishing rigorous controls for the rest of the industry. The United Kingdom’s government is arranging a major gathering of AI companies and government officials in November to explore global cooperation.

According to Teixeira, the Mozilla executive, the internet companies’ “launch first, fix later” approach carries significant hazards. Chatbots typically deliver information in an authoritative manner, making it difficult for individuals to recognize that what they are being told may be incorrect. And the corporations aren’t being transparent enough about how they use the data that individuals enter while interacting with the bots.

“There’s certainly not a sufficient level of transparency to tell me what’s happening with my stuff,” he added.

Amazon released its generative AI chatbot functionality for its Alexa home speakers this week, months behind the competition. Dave Limp, Amazon’s senior vice president of devices and services, said the new technology allows for “near-human” conversation with Alexa.

However, the company did not allow journalists to test it, and in an onstage demonstration, the chatbot’s dialogue with Limp was interspersed with long, awkward pauses.

“It’s not the endgame,” Limp said in an interview, adding that the bot would improve with time.

