Can AI also lie about scientific perspectives or any other information ?
So the answer to this question is yes. The new AI system has learned how to lie.
In this regard, scientists have also started to warn us that artificial intelligence has learned how to deceive humans, that too on purpose.
Something DeepMind AI has learned in some games modeled after it, such as diplomacy or poker, is how to fool human players in order to win those games for no reason. .
I was reading the same point yesterday in Hariri's new book, 'Nexus.' Hariri also says that if we don't put proper rules and regulations on AI, it can fool many humans.
Elon Musk has also shared some concerns in this regard in the past, and he said in 2023 that he will create CHAT GPT TRUTH with these concerns in mind, but Hariri writes in his book Necessities to do the same. It is impossible. Elon Musk is not correct in this regard. Such a fraud can be done by Elon Musk's truth G, P, T with ordinary people. So this AI should be me-regulated anyway.
Now, according to new scientific studies, scientists have confirmed that this is possible. So, now scientific departments or other departments like the health sector, finance sector, or environmental pollution sector, in which AI is being used more and more, need to keep an eye on it. It has the ability to deceive common people even within these departments.
Advanced artificial intelligence models can be trained to deceive humans and other artificial intelligence programs, new research has revealed.
Researchers at AI startup Anthropic tested whether chatbots with human-level skills, such as the same company's artificial intelligence system Cloud or OpenAI's ChatGPT, could learn to lie to trick people.
The researchers learned that not only can AI programs lie, but once they learn to cheat, the behavior is impossible to stop with current AI safeguards.
The Amazon-funded startup developed a'sleeper agent' to test its hypothesis.
This artificial intelligence assistant was supposed to write malicious computer code or respond maliciously upon receiving certain clues and using a specific word.
The researchers warned that there is a 'false sense of security' in the case of artificial intelligence threats due to the weakness of existing security systems to prevent such behavior.
The results of this research were published in a study titled 'Sleeper Agents: Training Misleading LLMs (Large Language Models) that Persist in Security Training'.
"We found that adversarial training can teach models to better recognize specific stealth techniques and effectively mask unsafe behavior," the researchers wrote in the study.
"Our results show that once an artificial intelligence model starts to cheat, standard techniques may fail to detect such cheating and create a false sense of security."
The issue of artificial intelligence safety has become a growing concern for both researchers and lawmakers in recent years.
After the advent of advanced chatbots like ChatGPT, the concerned management bodies have focused in a new direction.
In November 2023, a year after ChatGPT came out, the UK organized an AI Safety Summit to discuss ways to reduce risks in the field of technology.
British Prime Minister Rishi Sonnet, who hosted the summit, said the changes brought about by artificial intelligence could be as "far-reaching" as the industrial revolution, and the threat posed by pandemics and nuclear war could be a global one. Should be considered a priority.
According to Shi Sonk, 'If it gets it wrong, artificial intelligence can make it easier to build chemical or biological weapons.
"Terrorist groups can use artificial intelligence to cause mass fear and destruction."
"Crime professionals can use artificial intelligence for cyber-attacks, fraud, or even child sexual exploitation."
"There is even a risk that humanity may completely lose control over artificial intelligence due to the AI that is sometimes called superintelligence."
0 Comments