
ChatGPT evolution: UK competition authority warns that the emergence of ChatGPT may not be beneficial


The UK’s competition authority has warned that people should not assume a positive outcome from the explosion of artificial intelligence tools such as ChatGPT, pointing to dangers including a rise in misleading information, fraud, and fake reviews, as well as excessive charges for adopting the technology.

According to the Competition and Markets Authority (CMA), people and businesses could benefit from a new generation of AI systems such as ChatGPT, but the dominance of established players and violations of consumer protection law pose a range of possible hazards.

The emergence of ChatGPT in particular has sparked a debate about the economic impact of generative AI – a catch-all term for tools that produce convincing text, image, and voice outputs from typed human prompts – on areas such as law, IT, and the media, as well as the potential for mass-producing disinformation targeting voters and consumers.

The CMA’s chief executive, Sarah Cardell, described the pace at which AI like ChatGPT was becoming a part of people’s and businesses’ daily lives as “dramatic,” with the potential to simplify millions of everyday tasks while also increasing productivity – a measure of economic efficiency, or the amount of output generated by a worker for each hour worked.

However, Cardell cautioned that individuals should not expect a favorable outcome. “We can’t take a positive future for granted,” she added in a statement. “There is still a real risk that the use of AI undermines consumer trust or is dominated by a few players with market power that prevents the full benefits from being felt across the economy.”

The CMA defines foundation models as “large, general machine-learning models that are trained on vast amounts of data and can be adapted to a wide range of tasks and operations,” such as powering chatbots, image generators, and Microsoft 365 office software.

According to the watchdog, approximately 160 foundation models have been released by a variety of companies, including Google, Facebook owner Meta, and Microsoft, as well as new AI startups like ChatGPT creator OpenAI and UK-based Stability AI, which funded the Stable Diffusion image generator.

The CMA also stated that many companies already have a presence in two or more key parts of the AI ecosystem. Major AI developers such as Google, Microsoft, and Amazon own critical infrastructure for producing and distributing foundation models, including data centers, servers, and data repositories, and also operate in markets such as online shopping, search, and software.

The CMA proposed a set of principles for the development of AI models as part of the report: foundation model developers should have access to data and computing power, and early AI developers should not gain an entrenched advantage; both “closed source” models, such as OpenAI’s GPT-4, and publicly available “open source” models, which can be adapted by external developers, should be allowed to develop; businesses should have a range of options for accessing AI models, including developing their own; consumers should be able to use multiple AI providers; and anti-competitive conduct should not be permitted.

