AI boom may not have positive outcome, warns UK competition watchdog

People should not assume a positive outcome from the artificial intelligence boom, the UK’s competition watchdog has warned, citing risks including a proliferation of false information, fraud and fake reviews as well as high prices for using the technology.

The Competition and Markets Authority said people and businesses could benefit from a new generation of AI systems but dominance by entrenched players and flouting of consumer protection law posed a number of potential threats.

The CMA made the warning in an initial review of foundation models, the technology that underpins AI tools such as the ChatGPT chatbot and image generators such as Stable Diffusion.

The emergence of ChatGPT in particular has triggered a debate over the impact of generative AI – a catch-all term for tools that produce convincing text, images and voice from typed human prompts – on the economy, including the elimination of white-collar jobs in areas such as law, IT and the media, and the potential for mass-producing disinformation targeting voters and consumers.

The CMA chief executive, Sarah Cardell, said the speed at which AI was becoming a part of everyday life for people and businesses was “dramatic”, with the potential for making millions of everyday tasks easier as well as boosting productivity – a measure of economic efficiency, or the amount of output generated by a worker for each hour worked.

However, she warned that people should not assume a positive outcome.

“We can’t take a positive future for granted,” she said in a statement. “There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”

The CMA defines foundation models as “large, general machine-learning models that are trained on vast amounts of data and can be adapted to a wide range of tasks and operations” including powering chatbots, image generators and Microsoft’s 365 office software products.

The watchdog estimates about 160 foundation models have been released by a range of firms including Google, the Facebook owner Meta and Microsoft, as well as new AI firms such as the ChatGPT developer OpenAI and the UK-based Stability AI, which funded the Stable Diffusion image generator.

The CMA said it was “essential” that the AI market did not fall into the hands of a handful of companies, warning of the short-term threat that consumers could be exposed to significant levels of false information, AI-enabled fraud and fake reviews. In the longer term, it said, weak competition could also result in companies failing to offer the best products and services and charging high prices.

“If competition is weak or developers fail to heed consumer protection law, people and businesses could be harmed,” the CMA said. “For example, people could be exposed to significant levels of false and misleading information and AI-enabled fraud. In the longer term, a handful of firms could use FMs [foundation models] to gain or entrench positions of market power and fail to offer the best products and services and/or charge high prices.”

As part of the report, the CMA proposed a new set of principles for the development of AI models:

– Foundation model developers should have access to data and computing power, and early AI developers should not gain an entrenched advantage.

– Both “closed source” models, such as OpenAI’s GPT-4, and publicly available “open source” models, which can be adapted by external developers, should be allowed to develop.

– Businesses should have a range of options for accessing AI models, including developing their own.

– Consumers should be able to use multiple AI providers.

– Anticompetitive conduct, such as “bundling” AI models into other services, should be avoided.

– Consumers and businesses should be given clear information about the use and limitations of AI models.

The CMA said it would publish an update on its principles, and how they have been received, in 2024. The UK government will host a global AI safety summit in early November.


