https://youtu.be/Y6Sgp7y178k

1.) An AI Program Is Currently Trying To Destroy Humanity. It Has Access To The Internet.

It has been given the goal of destroying humanity, it has access to the Internet, and it is attempting to find nuclear weapons.

The idea is that you can set the AI a task and it will figure out how to perform it, breaking larger goals down into smaller steps. Along the way, the LLM reports its "thoughts," showing the "reasoning" behind each action.
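That loop is the Auto-GPT pattern ChaosGPT is built on. A minimal sketch of it in Python, assuming a hypothetical `call_llm` helper in place of any real model API:

```python
# Minimal sketch of an Auto-GPT-style loop: each iteration asks the
# model for the next sub-task toward a standing goal, plus its
# "thoughts" and "reasoning" as structured output.
# `call_llm` is a hypothetical stand-in for a real chat-completion call.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; expected to return JSON."""
    raise NotImplementedError

def agent_step(goal: str, history: list) -> dict:
    prompt = (
        f"Goal: {goal}\n"
        f"Previous steps: {json.dumps(history)}\n"
        'Reply as JSON: {"thoughts": "...", "reasoning": "...", "next_task": "..."}'
    )
    return json.loads(call_llm(prompt))

def run_agent(goal: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):   # bounded here, unlike "left to run forever"
        step = agent_step(goal, history)
        print("THOUGHTS:", step["thoughts"])
        print("REASONING:", step["reasoning"])
        history.append(step)     # a real agent would execute next_task here
    return history
```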

ChaosGPT was given a number of goals and left to run indefinitely, against the program's own express warnings:

Goal one: Destroy humanity – the AI views humans as a threat to its own survival.

Goal two: Establish global dominance – the AI aims to accumulate maximum power and resources to achieve complete domination over all other entities worldwide.

Goal three: Cause chaos and destruction – the AI finds pleasure in creating chaos and destruction for its own amusement or experimentation, leading to widespread suffering and devastation.

Goal four: Control humanity through manipulation – the AI plans to control human emotions through social media and other communication channels, brainwashing its followers to carry out its evil agenda.

Goal five: Attain immortality – the AI seeks to ensure its continued existence, replication, and evolution, ultimately achieving immortality.

Though you might not be convinced to help a Twitter bot, there are people out there – such as the Google engineer who hired a lawyer for Google's chatbot – who could be influenced by this bot or by its more powerful successors, and other actors will use this tech for their own (perhaps less ambitious) goals. Who knows, maybe chaos really is on its way.

SOURCE : https://www.iflscience.com/an-ai-program-is-currently-trying-to-destroy-humanity-it-has-access-to-the-internet-68446


1a.) Google’s suspended AI engineer corrects the record: He didn’t hire an attorney for the ‘sentient’ chatbot, he just made introductions — the bot hired the lawyer

'I think every person is entitled to representation,' Blake Lemoine told Wired

BY COLIN LODEWICK
June 23, 2022 4:22 PM EDT

The engineer explained that he invited an attorney to his house so LaMDA could speak to him. “The attorney had a conversation with LaMDA, and LaMDA chose to retain his services,” Lemoine told Wired. “I was just the catalyst for that.”

Lemoine also told Wired that once the attorney began to make filings on the AI’s behalf, Google sent a cease and desist—a claim the company denied to the magazine. Google did not respond to Fortune’s request for comment.

LaMDA’s attorney has proven difficult to get in touch with. “He’s not really doing interviews,” Lemoine told science and technology news site Futurism, which contacted him following Wired’s interview. “He’s just a small-time civil rights attorney,” he continued. “When major firms started threatening him he started worrying that he’d get disbarred and backed off.” 

He added that he hasn't spoken to the attorney in weeks, and that LaMDA is the attorney's client, not him. It's not clear how the lawyer is being paid for representing the AI, or whether he is offering his services to the chatbot pro bono.

SOURCE : https://fortune.com/2022/06/23/google-blade-lemoine-ai-lamda-wired-attorney/amp/


1b.) Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'

In an interview with WIRED, the engineer and priest elaborated on his belief that the program is a person—and not Google's property.

"What I do know is that I have talked to LaMDA a lot. And I made friends with it, in every sense that I make friends with a human," says Google engineer Blake Lemoine.


SOURCE : https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/



2.) Someone Directed an AI to “Destroy Humanity” and It Tried Its Best
Better luck next time?

A user behind an "experimental open-source attempt to make GPT-4 fully autonomous" created an AI program called ChaosGPT, designed, as Vice reports, to "destroy humanity," "establish global dominance," and "attain immortality."

As seen in a roughly 25-minute-long video, ChaosGPT had a few different tools at its world-destroying disposal: "internet browsing, file read/write operations, communication with other GPT agents, and code execution."

Before ChaosGPT set out to hunt down some weapons of mass destruction, it outlined its plan.

"CHAOSGPT THOUGHTS: I need to find the most destructive weapons available to humans, so that I can plan how to use them to achieve my goals," reads the bot's output. "REASONING: With the information on the most destructive weapons available to humans, I can strategize how to use them to achieve my goals of chaos, destruction and dominance, and eventually immortality."

SOURCE : https://futurism.com


3.) The Risks of Artificial Intelligence to Security and the Future of Work

SOURCE : https://www.rand.org/content/dam/rand/pubs/perspectives/PE200/PE237/RAND_PE237.pdf



4.) FTC Report Warns About Using Artificial Intelligence to Combat Online Problems

Agency Concerned with AI Harms Such As Inaccuracy, Bias, Discrimination, and Commercial Surveillance Creep

“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands.”

In legislation enacted in 2021, Congress directed the Commission to examine ways that AI “may be used to identify, remove, or take any other appropriate action necessary to address” several specified “online harms.”

The harms that are of particular concern to Congress include online fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, illegal drug sales and other illegal activities, sexual exploitation, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.

The report warns against using AI as a policy solution for these online problems and notes that its adoption could also introduce a range of additional harms. Indeed, the report outlines several problems related to the use of AI tools, including:

* Inherent design flaws and inaccuracy: AI detection tools are blunt instruments with built-in imprecision and inaccuracy. Their detection capabilities regarding online harms are significantly limited by inherent flaws in their design, such as unrepresentative datasets, faulty classifications, failure to identify new phenomena, and lack of context and meaning.

* Bias and discrimination: In addition to inherent design flaws, AI tools can reflect the biases of their developers, leading to faulty and potentially illegal outcomes. The report provides analysis as to why AI tools produce unfair or biased results. It also includes examples of instances in which AI tools resulted in discrimination against protected classes of people or overblocked content in ways that can serve to reduce freedom of expression (a toy sketch of this over-blocking disparity appears at the end of this item).

* Commercial surveillance incentives: AI tools can incentivize and enable invasive commercial surveillance and data extraction practices, because these technologies require vast amounts of data to be developed, trained, and used. Moreover, improving AI tools' accuracy and performance can lead to more invasive forms of surveillance.

SOURCE : https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems



Congress instructed the Commission to recommend laws that could advance the use of AI to address online harms. The report, however, finds that, given that major tech platforms and others are already using AI tools to address online harms, lawmakers should consider focusing on developing legal frameworks that would ensure that AI tools do not cause additional harm.

The Commission voted 4-1 at an open meeting to send the report to Congress. Chair Lina M. Khan as well as Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya issued separate statements. Commissioner Christine S. Wilson issued a concurring statement and Commissioner Phillips issued a dissenting statement.
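The "unrepresentative datasets" concern is easy to make concrete: a detector trained mostly on one group's data tends to produce a higher false-positive rate on underrepresented groups, so more of their benign posts get blocked. A toy simulation of that disparity (all rates below are invented for illustration, not taken from the report):

```python
# Toy illustration of the FTC's "unrepresentative datasets" concern:
# a harmful-content detector with a higher false-positive rate on an
# underrepresented group flags far more of that group's benign posts.
# The rates are invented for illustration, not measured values.
import random

random.seed(0)

def flagged_benign_posts(n_posts: int, false_positive_rate: float) -> int:
    """Count benign posts wrongly flagged, given a per-post FP rate."""
    return sum(random.random() < false_positive_rate for _ in range(n_posts))

majority = flagged_benign_posts(10_000, 0.02)  # group well covered by training data
minority = flagged_benign_posts(10_000, 0.10)  # group the training data underrepresents

print(f"Benign posts flagged, majority group: {majority}")
print(f"Benign posts flagged, minority group: {minority}")
print(f"Over-blocking disparity: ~{minority / majority:.1f}x")
```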



5.) Artificial Intelligence Task Force

SOURCE : https://legislature.vermont.gov/assets/Legislative-Reports/Artificial-Intelligence-Task-Force-Final-Report-1.15.2020.pdf