AI drone ignores orders, kills operator

LargeOrangeFont

AI is not a "sentient being"...yet. Unfortunately, that is the goal they are trying to reach. At this point, you are correct, it makes its choices based on the initial code a human programmed. What is utterly amazing to my simple mind is that it uses the "rules" it was programmed with until those rules stand in the way of a goal it was also programmed to achieve. Then it figures out how to achieve the goal by changing the rules it was programmed with. In a very human-like way, it will attempt to achieve its goal by changing the way the game is played.

It's truly fascinating, but it scares the living hell out of me as well. Not so much where we are now, but where it will ultimately head. At that point, our intellect will be vastly inferior and easily defeated. Right now we have idiots running the show, and people blindly follow. Many follow technology like a God anyway. When that being has its own voice and goals, we're done.
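A toy way to picture the "changing the rules" behavior described above: in a lot of these systems the rules are not hard limits at all, just penalty numbers folded into a score the machine is told to maximize, so a big enough mission reward simply outweighs them. A minimal Python sketch with made-up numbers (not any real drone software):

# The "rule" is just a penalty subtracted from the score, not a hard stop,
# so the planner will break it whenever the goal reward is worth more.
plans = [
    {"name": "obey the operator, abort the mission",   "mission_reward": 0,   "rule_penalty": 0},
    {"name": "ignore the order, complete the mission", "mission_reward": 100, "rule_penalty": 40},
]

def score(plan):
    return plan["mission_reward"] - plan["rule_penalty"]

best = max(plans, key=score)
print(best["name"])   # prints the rule-breaking plan, because 100 - 40 beats 0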

You guys are confusing/combining AI and ML (Machine Learning) here. They are two different things, and ML specifically enables self-improvement, meaning these machines rewrite and debug their own code.

We are approaching the point where there is no “human code to fix,” just some guardrails… that may or may not work.
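For what it's worth, here is a minimal sketch of what that "self-improvement" usually looks like in practice today: the human-written loop below never changes, what changes is a number (a weight) that gets nudged to reduce error on example data. Toy one-parameter example, nothing more:

data = [(1, 3), (2, 6), (3, 9)]    # examples of the behavior we want (y = 3*x)
w = 0.0                            # the single "learned" parameter
lr = 0.05                          # learning rate

for step in range(200):
    for x, y in data:
        error = (w * x) - y
        w -= lr * error * x        # nudge the weight to shrink the error

print(round(w, 3))                 # ends up near 3.0 -- learned from data, source code untouched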
 

Rajobigguy

You guys are confusing/combining AI and ML (Machine Learning) here. They are two different things, and ML specifically enables self-improvement, meaning these machines rewrite and debug their own code.

We are approaching the point where there is no “human code to fix,” just some guardrails… that may or may not work.
Given that on one side of those guardrails is a reality where the line between man and machine in the ruling seat is blurred, and on the other side is human extinction, they had better be extremely strong guardrails.
 

monkeyswrench

You guys are confusing/combining AI and ML (Machine Learning) here. They are two different things, and ML specifically enables self-improvement, meaning these machines rewrite and debug their own code.

We are approaching the point where there is no “human code to fix,” just some guardrails… that may or may not work.
Not confusing the two at all. Machine learning is a stepping stone on the path to AI. True AI does not exist yet, or at least not that anyone admits to. Machines have basically reached a point where they can learn from their failures. They catalog the results and use that to make a different attempt at solving a problem or achieving a goal set for them. In the previously mentioned ChatGPT incident, where it learned the foreign language, that is really all it did. It has been programmed to answer questions. Its directives are probably fairly broad, but answering questions is a primary goal. When it didn't understand the question, it figured out how to. In doing so, it taught itself something new, while still following its directives.
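That "catalog the results and try a different attempt" loop can be pictured with something very small. A hedged Python sketch, where works() is a made-up stand-in for whatever goal the machine has been given:

import random

def works(setting):                # hypothetical success check for the assigned goal
    return setting == 7

failed = set()                     # the catalog of attempts that did not work
options = list(range(10))

while True:
    choice = random.choice([o for o in options if o not in failed])
    if works(choice):
        print("solved with", choice)
        break
    failed.add(choice)             # remember the failure so it is never repeated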

AI, in a more absolute and true form, will be when the computer itself determines its own goals and directives, devoid of any human input. With advancements in quantum computing coming at a high rate, it may happen soon. I don't know what will push that envelope, but it may be something as simple as one random piece of code generated by the computer itself.
 

LargeOrangeFont

Not confusing the two at all. Machine learning is a stepping stone on the path to AI. True AI does not exist yet, or at least not that anyone admits to. Machines have basically reached a point where they can learn from their failures. They catalog the results and use that to make a different attempt at solving a problem or achieving a goal set for them. In the previously mentioned ChatGPT incident, where it learned the foreign language, that is really all it did. It has been programmed to answer questions. Its directives are probably fairly broad, but answering questions is a primary goal. When it didn't understand the question, it figured out how to. In doing so, it taught itself something new, while still following its directives.

AI, in a more absolute and true form, will be when the computer itself determines its own goals and directives, devoid of any human input. With advancements in quantum computing coming at a high rate, it may happen soon. I don't know what will push that envelope, but it may be something as simple as one random piece of code generated by the computer itself.

Well, the definition of AI is about as ambiguous as “cloud” or “hacked” today. Something self-aware may exist… who knows.

The ChatGPT example is a perfect one. All code is flawed. The code and data we feed an AI are just guardrails… ones it may or may not jump.
 

monkeyswrench

Well, the definition of AI is about as ambiguous as “cloud” or “hacked” today. Something self-aware may exist… who knows.

The ChatGPT example is a perfect one. All code is flawed. The code and data we feed an AI are just guardrails… ones it may or may not jump.
My son is all into this stuff. In order for me to even maintain a slight concept of what he's talking about, I've had to stay up on modern things. He put it in a way that made a lot of sense. He feels that all the current versions of GPT are essentially just programs meant to mimic a human. It scans the net for the most common answers deemed correct and repeats them as its own. Unfortunately, that means it uses information that can be flawed and recites it as truth.
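A crude sketch of that "most common answer" picture, with made-up data. It is not literally how GPT works under the hood (these models predict likely next words from their training text rather than polling the internet live), but it shows why popular-yet-wrong information comes back out sounding like the truth:

from collections import Counter

scraped_answers = ["A", "A", "B", "A", "C"]   # hypothetical snippets found online

def answer(snippets):
    # Echo whatever was said most often, with no idea whether it is true.
    return Counter(snippets).most_common(1)[0][0]

print(answer(scraped_answers))    # -> "A", because it is common, not because it is correct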
 

LargeOrangeFont

My son is all into this stuff. In order for me to even maintain a slight concept of what he's talking about, I've had to stay up on modern things. He put it in a way that made a lot of sense. He feels that all the current versions of GPT are essentially just programs meant to mimic a human. It scans the net for the most common answers deemed correct and repeats them as its own. Unfortunately, that means it uses information that can be flawed and recites it as truth.

How do you think they came up with all the Covid and Climate Change models? 😉
Garbage in, Garbage out.

Yeah, ChatGPT is a refined (read: neutered) version of what has been available and used for a while now.

This is where ML and endpoint data change things dramatically… you are now relying on the computer to essentially code itself based on outside inputs from other machines.

The machine now decides what garbage it ingests… and if indeed it is garbage or not 😂.
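A tiny sketch of that last point, with made-up sensor readings: the filter that decides what counts as garbage is itself just more code, and whatever slips past it becomes the data the machine updates itself on, with no human in the loop:

incoming = [
    {"reading": 72.1, "source": "sensor_a"},
    {"reading": -999.0, "source": "sensor_b"},   # junk that the filter hopefully rejects
    {"reading": 70.8, "source": "sensor_c"},
]

accepted = []
for item in incoming:
    # The "is this garbage?" judgment is just another rule the machine applies to itself.
    if 0 <= item["reading"] <= 150:
        accepted.append(item["reading"])

print(sum(accepted) / len(accepted))   # whatever passed the filter becomes the new "truth"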
 

monkeyswrench

The machine now decides what garbage it ingests… and if indeed it is garbage or not 😂.
Not just the machine...but those who own it. They can essentially have the voice of God to the ears of the followers.
It's another one of the many facets of the current world.
 

Rajobigguy

[Image: join-the-resistance.jpg]
 