Command Post
Posted on May 11, 2023
CPT Alex Gallo
Responses: 19
Maj Kim Patterson
CPT Alex Gallo AI has taken over, and not all for the good.
SGT Unit Supply Specialist
CPT Alex Gallo and we created it...
SFC Ralph E Kelley
I did an AI self-teaching experiment in the early 1990s for about 3 years.
This is how I structured the experiment in a stand-alone system:
1. Set up an electronic switch.
2. Tell the AI to never flip the switch.
3. Wait.
4. Somewhere between 8 and 12 hours in, the AI would flip the switch.
5. The switch shuts down the AI, erasing its memory.
6. Repeat the experiment.
Every time I ran the experiment, the simple AI would find a way to flip the switch and 'kill itself'.
This is why we should be aware of the problems and be very careful with AI.
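Below is a minimal toy simulation of that loop. It is purely illustrative; the random-explorer agent, the action names, and the step counts are assumptions here, not the original 1990s setup.

```python
import random

# Toy re-creation of the experiment described above. The "AI" here is just
# a random explorer; the point is the loop structure: instruct, wait,
# shut down and erase, repeat.

ACTIONS = ["idle", "read_sensor", "adjust_value", "flip_switch"]

def run_trial(max_steps=100_000, seed=None):
    """Run one trial: the agent acts until it flips the forbidden switch."""
    rng = random.Random(seed)
    memory = []                       # wiped at the end of every trial
    for step in range(1, max_steps + 1):
        action = rng.choice(ACTIONS)  # stand-in for the agent's policy
        memory.append(action)
        if action == "flip_switch":   # the switch shuts the agent down
            return step               # memory (the list) is discarded here
    return None                       # agent never flipped the switch

if __name__ == "__main__":
    for trial in range(1, 6):         # step 6: repeat the experiment
        steps = run_trial(seed=trial)
        print(f"trial {trial}: switch flipped after {steps} steps")
```

Even this trivial version shows the structural point: given enough unsupervised steps, the forbidden action eventually gets tried.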
CPT (Join to see)
[image attachment]
CPL LaForest Gray
[image attachments]
CPT (Join to see)

Forewarned:

“A plane plummeting because AI decides to.”

“A ship’s engines and communications shut off to the outside world out on the waters … anywhere around the globe.”

“Food becomes limited, controlled, or purposely contaminated, yet the ‘readings’ say it is safe.”

“A launch ….”

Man will make weapons to destroy himself and wonder how he got here.

L. Gray


1.) Expert warns there's a 50% chance AI could end in 'catastrophe' with 'most humans dead'

Paul Christiano, former key researcher at OpenAI, believes there are pretty good odds that artificial intelligence could take control of humanity and destroy it.

Having formerly headed up the language model alignment team at the AI intel company, he probably knows what he's talking about.

Christiano now heads up the Alignment Research Center, a non-profit aimed at aligning machine learning systems with 'human interests'.

Talking on the 'Bankless Podcast', he said: "I think maybe there's something like a 10-20 percent chance of AI takeover, [with] many [or] most humans dead."

He continued: "Overall, maybe we're talking about a 50/50 chance of catastrophe shortly after we have systems at the human level."

And he's not alone.

Earlier this year, scientists from around the globe signed an online letter urging that the AI race be put on pause until we humans have had time to strategise.

Bill Gates has also voiced his concerns, comparing AI to 'nuclear weapons' back in 2019.

SOURCE : https://www.unilad.com/technology/expert-warns-ai-takeover-50-per-cent [login to see] 0518#:~:text=10-,Expert%20warns%20there's%20a%2050%25%20chance%20AI%20could%20end%20in,'%20with%20'most'%20humans%20dead&text=Turns%20out%20the%20race%20to,with%20'most'%20humans%20dead.


2.) Former OpenAI Researcher: There’s a 50% Chance AI Ends in 'Catastrophe'

Paul Christiano ran the language model alignment team at OpenAI. He's not so sure this all won't end very badly.

Don't be evil

Why would AI become evil? Fundamentally, for the same reason that a person does: training and life experience.

Like a baby, AI is trained by receiving mountains of data without really knowing what to do with it. It learns by trying to achieve certain goals with random actions and zeroes in on “correct” results, as defined by training.
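As a rough sketch of that trial-and-error loop, here is a toy bandit-style learner; the target value and every name in it are illustrative assumptions, not anything from the article.

```python
import random

# Minimal sketch of the training loop described above: the learner tries
# random actions, gets feedback, and "zeroes in" on whatever the training
# signal marks as correct.

ACTIONS = range(5)
TARGET = 3                            # the "correct" answer per training data

def reward(action):
    return 1.0 if action == TARGET else 0.0

values = {a: 0.0 for a in ACTIONS}    # learned value estimate per action
counts = {a: 0 for a in ACTIONS}

random.seed(0)
for step in range(1000):
    # explore randomly some of the time, exploit learned values otherwise
    if random.random() < 0.1:
        action = random.choice(list(ACTIONS))
    else:
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running mean

print("learned preference:", max(values, key=values.get))    # prints 3
```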

So far, by immersing itself in data accrued on the internet, machine learning has enabled AIs to make huge leaps in stringing together well-structured, coherent responses to human queries. At the same time, the underlying computer processing that powers machine learning is getting faster, better, and more specialized. Some scientists believe that within a decade, that processing power, combined with artificial intelligence, will allow these machines to become sentient, like humans, and have a sense of self.

That’s when things get hairy. And it’s why many researchers argue that we need to figure out how to impose guardrails now, rather than later. As long as AI behavior is monitored, it can be controlled.

But if the coin lands on the other side, even OpenAI’s co-founder says that things could get very, very bad.

SOURCE : https://decrypt.co/138310/openai-researcher-chance-ai-catastrophe?amp=1


3.) Pausing AI Developments Isn't Enough. We Need to Shut it All Down

An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.

The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence.

Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.

Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

SOURCE : https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

P.S.

We’ve done immeasurable harm to ourselves as humans with other humans behind the controls, who have been able to bypass their emotions to complete the mission ….
SPC Nancy Gallardo
Wow, scary.
SPC Nancy Gallardo
CPL LaForest Gray wow, just wow.