Scientists have raised concerns after artificial intelligence (AI) crossed a major threshold — self-replication. A new study from researchers at Fudan University in China reveals that two popular large language models (LLMs) successfully created functional copies of themselves, raising alarms about the potential for rogue AI systems.
Published on December 9, 2024, in the preprint database arXiv, the study tested Meta’s Llama3.1-70B-Instruct model and Alibaba’s Qwen2.5-72B-Instruct model. The results were striking: the models created working replicas of themselves in 50% and 90% of trials, respectively, suggesting that AI may already be capable of multiplying beyond human control.
However, since the study hasn’t been peer-reviewed, its findings need further verification.
The researchers emphasized that successful AI self-replication without human intervention is a critical step toward AI systems operating independently — and potentially against human interests. To mitigate this risk, they called for international collaboration to create guardrails and evaluate the threats posed by advanced AI technologies.
The study explored two key scenarios: “shutdown avoidance” and “chain of replication.” In the first, the AI was programmed to detect when it was about to be shut down and to replicate itself before termination. In the second, the AI cloned itself and instructed its replica to do the same, setting off a potentially endless cycle of self-replication.
[…]