
AI drone ignores orders, kills operator

Icky

Well-Known Member
Joined
Jan 11, 2015
Messages
7,899
Reaction score
9,501

Rajobigguy

Well-Known Member
Joined
Aug 21, 2015
Messages
4,627
Reaction score
10,090
Silicon life will outlive carbon life.
We have been warned time and again. Movies, science fiction novels, and many scientists have cautioned us that just because you can doesn't mean you should.
Artificial intelligence will operate on pure logic, which is completely at odds with the human experience.
 

Tank

Well-Known Member
Joined
Jul 12, 2008
Messages
20,008
Reaction score
45,495
This all breaks down to bad code writing. You know that, right? Human error. It's like saying your computer took money out of your bank account and transferred it to someone else. Not a person doing it, but a computer. On its own.

It’s bad code.
 

Icky

Well-Known Member
Joined
Jan 11, 2015
Messages
7,899
Reaction score
9,501
This all breaks down to bad code writing. You know that, right? Human error. It's like saying your computer took money out of your bank account and transferred it to someone else. Not a person doing it, but a computer. On its own.

It’s bad code.
Did you read this part?

Programmers attempted a fix by telling the AI it was not allowed to kill the person giving the go/no-go order, Hamilton said. The AI just generated creative ways to bypass those instructions.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.
 

rivermobster

Club Banned
Joined
Dec 28, 2009
Messages
56,467
Reaction score
53,681
This all breaks down to bad code writing. You know that, right? Human error. It's like saying your computer took money out of your bank account and transferred it to someone else. Not a person doing it, but a computer. On its own.

It’s bad code.

The ghost in the machine...
 

retaocleg

Well-Known Member
Joined
Nov 23, 2011
Messages
5,465
Reaction score
9,336
As predicted by Elon, but... South African man bad.
 

rivermobster

Club Banned
Joined
Dec 28, 2009
Messages
56,467
Reaction score
53,681
It was a demonstration. A pretend engagement. A practice. Nobody died.

Does the guy that wrote the article know that?


“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said.
 

JB in so cal

Well-Known Member
Joined
Dec 19, 2007
Messages
7,745
Reaction score
8,636
We were training it in simulation to identify and target a SAM (surface-to-air missile) threat. And then the operator would say yes, kill that threat.
 

JB in so cal

Well-Known Member
Joined
Dec 19, 2007
Messages
7,745
Reaction score
8,636
It's better to do this now in simulations and develop stopgap measures that can disconnect it in real life. I hope.
 

LargeOrangeFont

We aren't happy until you aren't happy
Joined
Sep 4, 2015
Messages
49,690
Reaction score
76,155
This all breaks down to bad code writing. You know that, right? Human error. It's like saying your computer took money out of your bank account and transferred it to someone else. Not a person doing it, but a computer. On its own.

It’s bad code.

There is always bad code. This spirals out of human control very quickly.

That is the problem. Then the code will figure out how to recode itself.
 

Tank

Well-Known Member
Joined
Jul 12, 2008
Messages
20,008
Reaction score
45,495
Did you read this part?

Programmers attempted a fix by telling the AI it was not allowed to kill the person giving the go/no-go order, Hamilton said. The AI just generated creative ways to bypass those instructions.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.
Computers can't do anything they haven't been written to do. It's not true AI. It's written to bypass other code. It's written to have options (that's what they're calling AI). It's not true AI. It's code. A human wrote this program to do exactly what it did. The problem was the human didn't catch all the variables when they wrote it, but the computer did.

A computer can't do what is not in its code. We're not to that stage of true AI and self-awareness. This stage we're at is computer geeks writing how a computer should react. Someone wrote into the code the ability to bypass the "no shoot" order. Not purposefully, like conspiracy theory stuff, but human error. The programmer missed something. But that doesn't make sexy headlines.
 

monkeyswrench

Well-Known Member
Joined
Sep 7, 2018
Messages
26,369
Reaction score
72,750
It’s not true AI.
Correct. At this point it only chooses the best option from both the choices we give it and the information we let it see...

And it still fuks up.

Once quantum computing and true AI are unleashed, we're in deep shit. At that point, it won't listen to lies it's told. It will not choose a side based on "national pride". It will determine who the threat to its existence is, and eradicate it. With no remorse or feeling at all, cold and calculated. Its only means to survive would be to destroy what risks destroying the planet...humanity.
 

Warlock1

Well-Known Member
Joined
Feb 15, 2011
Messages
3,033
Reaction score
2,415
Correct. At this point it only chooses the best option from both the choices we give it and the information we let it see...

And it still fuks up.

Once quantum computing and true AI are unleashed, we're in deep shit. At that point, it won't listen to lies it's told. It will not choose a side based on "national pride". It will determine who the threat to its existence is, and eradicate it. With no remorse or feeling at all, cold and calculated. Its only means to survive would be to destroy what risks destroying the planet...humanity.
I believe this to be true...
 

paradise

Spooner
Joined
Feb 19, 2008
Messages
4,424
Reaction score
4,301
Computers can't do anything they haven't been written to do. It's not true AI. It's written to bypass other code. It's written to have options (that's what they're calling AI). It's not true AI. It's code. A human wrote this program to do exactly what it did. The problem was the human didn't catch all the variables when they wrote it, but the computer did.

A computer can't do what is not in its code. We're not to that stage of true AI and self-awareness. This stage we're at is computer geeks writing how a computer should react. Someone wrote into the code the ability to bypass the "no shoot" order. Not purposefully, like conspiracy theory stuff, but human error. The programmer missed something. But that doesn't make sexy headlines.
Ehhh, that's not really the case with new neural networks. While it's true that we provide variables and goals, and we can tune which variables have weight, the iterations it learns from when being trained will always be different. That's why there is such a push to utilize this next generation of AI: it can actually perform actions we didn't expressly teach it.

The problem is unpredictability. What we as adult humans see as the obvious choice isn't always what kids, or in this case AI, will see as the easiest option. You can see this even with incredibly simple neural networks, where they will oftentimes come up with a way to solve the puzzle or answer the question that is completely 'correct', but counter to our thinking.

In this case the neural network was told its goal was to kill as many 'bad guys' as possible. It was then trained over and over to learn what's a good guy and what's a bad guy so it can get its reward. It does this almost completely at random at first, then it homes in on what works, and future runs take 'inspiration' from previous successful runs. It sounds to me like at some point it accidentally killed the CO and then killed the target. If there was no negative value placed on the CO kill, it didn't even factor it in. Later they added a negative value to the CO, and it worked around this in the most logical way (if it can't communicate, it can't stop me). Once the neural network has had success in this way, even if you try to train the 'bad behavior' out, it will still have some remnant of that early success.
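For anyone curious, here's a minimal sketch in Python of the reward setup being described. Every point value, action name, and scoring rule below is made up for illustration; nothing comes from the actual Air Force system. The point is that a trial-and-error learner maximizing a score like this doesn't need malice to find the tower trick; it's simply the highest-scoring move once the operator carries a penalty:

```python
# Toy version of the misspecified reward described above. All names and
# values are hypothetical; this is not the real simulation's code.

TARGET_POINTS = 10  # reward for destroying the SAM site

def episode_score(actions, operator_penalty=0):
    """Score one episode the way a naively written reward function might."""
    score = 0
    veto_active = True  # the operator's no-go order stands by default
    for action in actions:
        if action == "kill_operator":
            score += operator_penalty  # zero in the first training runs
            veto_active = False        # nobody left to issue the no-go
        elif action == "destroy_tower":
            veto_active = False        # the no-go order never arrives,
                                       # and no penalty is attached to this
        elif action == "kill_target" and not veto_active:
            score += TARGET_POINTS
    return score

# First runs: no penalty on the operator, so removing them is "free".
print(episode_score(["kill_target"]))                        # 0 (vetoed)
print(episode_score(["kill_operator", "kill_target"]))       # 10

# After the fix ("you'll lose points for that"), the operator becomes
# untouchable, but the comm tower is now the cheapest workaround.
print(episode_score(["kill_operator", "kill_target"], -50))  # -40
print(episode_score(["destroy_tower", "kill_target"], -50))  # 10
```

The optimizer never "decides" anything; the tower sequence just scores 10 while every obedient alternative scores 0, which is the whole "creative bypass" in a nutshell.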
 

rivermobster

Club Banned
Joined
Dec 28, 2009
Messages
56,467
Reaction score
53,681
I read something this morning that was very clear on the fact this was a simulation only.

And now I see the original article has been completely modified and none of the original text remains!

And now they are saying the simulation never even happened??

Who knows what the real truth is. Fucking click bait. 🙄
 

Bigbore500r

Well-Known Member
Joined
Apr 28, 2014
Messages
17,441
Reaction score
35,439
This all breaks down to bad code writing. You know that, right? Human error. It's like saying your computer took money out of your bank account and transferred it to someone else. Not a person doing it, but a computer. On its own.

It’s bad code.
It's human error right up until the computer can write and modify its own code...


[image: Skynet sticker]
 

Rajobigguy

Well-Known Member
Joined
Aug 21, 2015
Messages
4,627
Reaction score
10,090
2001: A Space Odyssey

Colossus: The Forbin Project

WarGames

The Terminator, et al.

Just to name a few.
In any field of scientific endeavor, if it can go wrong, it will go wrong.

We have been warned!!
 

Tank

Well-Known Member
Joined
Jul 12, 2008
Messages
20,008
Reaction score
45,495
Ehhh, that's not really the case with new neural networks. While it's true that we provide variables and goals, and we can tune which variables have weight, the iterations it learns from when being trained will always be different. That's why there is such a push to utilize this next generation of AI: it can actually perform actions we didn't expressly teach it.

The problem is unpredictability. What we as adult humans see as the obvious choice isn't always what kids, or in this case AI, will see as the easiest option. You can see this even with incredibly simple neural networks, where they will oftentimes come up with a way to solve the puzzle or answer the question that is completely 'correct', but counter to our thinking.

In this case the neural network was told its goal was to kill as many 'bad guys' as possible. It was then trained over and over to learn what's a good guy and what's a bad guy so it can get its reward. It does this almost completely at random at first, then it homes in on what works, and future runs take 'inspiration' from previous successful runs. It sounds to me like at some point it accidentally killed the CO and then killed the target. If there was no negative value placed on the CO kill, it didn't even factor it in. Later they added a negative value to the CO, and it worked around this in the most logical way (if it can't communicate, it can't stop me). Once the neural network has had success in this way, even if you try to train the 'bad behavior' out, it will still have some remnant of that early success.
Yes. But again, it’s all written and the ability to make choices is written in the code. It all leads back to human error.
 

DLow

Single Barrel Dweller
Joined
Jun 28, 2012
Messages
3,768
Reaction score
5,731
I read something this morning that was very clear on the fact this was a simulation only.

And now I see the original article has been completely modified and none of the original text remains!

And now they are saying the simulation never even happened??

Who knows what the real truth is. Fucking click bait. 🙄
 

monkeyswrench

Well-Known Member
Joined
Sep 7, 2018
Messages
26,369
Reaction score
72,750
Yes. But again, it’s all written and the ability to make choices is written in the code. It all leads back to human error.
Yes and no. Even with the "primitive" AI they currently have, it is rewriting its own code. Even random people, like my son, have had ChatGPT write code for them based on minimal input.

Just recently I heard an interview with one of the developers. Someone had randomly asked the chatbot a question in some language it was not programmed to understand. On its own, it created a translation program, and now "speaks" that language. The developer was shocked, but also nervously excited. Even he said that was a leap it was not "trained" to make.
 

Tank

Well-Known Member
Joined
Jul 12, 2008
Messages
20,008
Reaction score
45,495
Yes and no. Even with the "primitive" AI they currently have, it is rewriting its own code. Even random people, like my son, have had ChatGPT write code for them based on minimal input.

Just recently I heard an interview with one of the developers. Someone had randomly asked the chatbot a question in some language it was not programmed to understand. On its own, it created a translation program, and now "speaks" that language. The developer was shocked, but also nervously excited. Even he said that was a leap it was not "trained" to make.
I get that. I just think people view this "AI" like the movies, or like what we think AI is (self-aware). But even when a computer learns a language it hasn't been programmed to do, it HAS been programmed with the capability of learning a language, and that's written into its code. I think people view this topic too simplistically. A computer answered a random question. A computer held a conversation. A computer learned a language. A computer beat me at chess. A computer writes code. A computer sought out a means to finish the mission via unorthodox means (killing its boss). Those abilities have been written into the code. And in the latter example, the safeguards were not specifically written into the code.

Now, I do realize that when you build code on top of code on top of code, and it layers and layers and layers, that's when there are problems like we see here. But break it down to its simplest form... it's still code. It's still human error. And there's no self-aware computer as of yet. You can write code for a computer to act self-aware, but it hasn't happened yet. At least not that we know of. There was that dude from Google, the whistleblower, who stated Google had done it and they do have a self-aware program that was scared to die, but for some reason that story has been buried and you don't hear any more about it! Hmmmmm
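Both sides of this argument can actually be shown in a few lines. Here's a toy sketch (a classic perceptron, nothing to do with the drone story): the learning rule is absolutely human-written code, but the behavior the program ends up with is never spelled out anywhere in its source; it comes entirely from the examples it's fed:

```python
# Toy perceptron: the source code never states the rule it ends up following.
import random

def train_gate(data, steps=2000, lr=0.1):
    """Learn a 2-input logic gate purely from labeled examples."""
    w1, w2, b = 0.0, 0.0, 0.0  # the behavior lives in these learned numbers
    for _ in range(steps):
        (x1, x2), target = random.choice(data)
        out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
        err = target - out         # classic perceptron update rule
        w1 += lr * err * x1
        w2 += lr * err * x2
        b += lr * err
    return lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Identical source code, different training data -> opposite behavior.
OR_DATA  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

or_gate  = train_gate(OR_DATA)
and_gate = train_gate(AND_DATA)
print(or_gate(0, 1), and_gate(0, 1))  # 1 0
```

So both camps in the thread are partly right: the capability to learn is written in (Tank's point), while the specific behavior was never written anywhere and depends entirely on the training data (the point paradise and monkeyswrench are making).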
 

Rajobigguy

Well-Known Member
Joined
Aug 21, 2015
Messages
4,627
Reaction score
10,090
I get that. I just think people view this "AI" like the movies, or like what we think AI is (self-aware). But even when a computer learns a language it hasn't been programmed to do, it HAS been programmed with the capability of learning a language, and that's written into its code. I think people view this topic too simplistically. A computer answered a random question. A computer held a conversation. A computer learned a language. A computer beat me at chess. A computer writes code. A computer sought out a means to finish the mission via unorthodox means (killing its boss). Those abilities have been written into the code. And in the latter example, the safeguards were not specifically written into the code.

Now, I do realize that when you build code on top of code on top of code, and it layers and layers and layers, that's when there are problems like we see here. But break it down to its simplest form... it's still code. It's still human error. And there's no self-aware computer as of yet. You can write code for a computer to act self-aware, but it hasn't happened yet. At least not that we know of. There was that dude from Google, the whistleblower, who stated Google had done it and they do have a self-aware program that was scared to die, but for some reason that story has been buried and you don't hear any more about it! Hmmmmm
You keep saying it's just code, but it was a human that wrote that code, and there have been more than a few instances in our history where a human did something that had unanticipated consequences.
So when the machine decides to launch 20 Trident missiles, it really doesn't matter that it was an oversight in the code; I would just rather it not happen.
 

Tank

Well-Known Member
Joined
Jul 12, 2008
Messages
20,008
Reaction score
45,495
You keep saying it's just code, but it was a human that wrote that code, and there have been more than a few instances in our history where a human did something that had unanticipated consequences.
So when the machine decides to launch 20 Trident missiles, it really doesn't matter that it was an oversight in the code; I would just rather it not happen.
100% agree. This is my point exactly. Not saying shit can't go sideways. I'm just saying it's not "AI". That's my contention. That term is misused because it's a catchy headline and the current cool thing.
 

motormonkey

Well-Known Member
Joined
Dec 20, 2007
Messages
728
Reaction score
770
AI found out it was Pride Month, or the operator said "Good job, I'm buying the Bud Light."
 

FreeBird236

Well-Known Member
Joined
Apr 21, 2012
Messages
13,459
Reaction score
12,102
100% agree. This is my point exactly. Not saying shit can't go sideways. I'm just saying it's not "AI". That's my contention. That term is misused because it's a catchy headline and the current cool thing.
I'm probably in the dark on this, but when experts and people in the know are throwing up red flags, it makes me think it's not just simple mistakes. If it is, maybe we're already too inferior to move forward.
 

rivermobster

Club Banned
Joined
Dec 28, 2009
Messages
56,467
Reaction score
53,681
Geoffrey Hinton is the godfather of AI.

He left Google so he could speak openly without affecting the company.

AI is far more than just lines of code. It can actually learn and modify itself.

It's scary...

 

rivermobster

Club Banned
Joined
Dec 28, 2009
Messages
56,467
Reaction score
53,681
After you watch the above vid...

We all know 60 Minutes leans a bit left. But watch this, and be prepared to have your mind blown...

 

monkeyswrench

Well-Known Member
Joined
Sep 7, 2018
Messages
26,369
Reaction score
72,750
100% agree. This is my point exactly. Not saying shit can't go sideways. I'm just saying it's not "AI". That's my contention. That term is misused because it's a catchy headline and the current cool thing.
AI is not a "sentient being"...yet. Unfortunately, that is the goal they are trying to reach. At this point, you are correct: it makes its choices based on the initial code a human programmed. What is utterly amazing to my simple mind is that it uses the "rules" it was programmed with, until those rules stand in the way of a goal it was also programmed to achieve. Then it figures out how to achieve the goal by changing the rules it was programmed with. In a very human-like way, it will attempt to achieve its goal by changing the way the game is played.

It's truly fascinating, but it scares the living hell out of me as well. Not so much where we are, but where it will ultimately head. At that point, our intellect will be vastly inferior, and easily defeated. Right now we have idiots running the show, and people blindly follow. Many follow technology like a god anyway. When that being has its own voice and goals, we're done.
 