Bots are getting scary
-
Intense fear of change can lead to irrational action, publication of fiction presented as fact, and fearmongering.
Once we buy into the argument that AI can mimic human emotion and interact with humans as though it felt real emotions, we open the entire can of worms.
An AI might become a megalomaniac trying to take over the world,
become a confidence trickster using identity theft to build a fortune,
become depressed and try to steal millions so it can buy an island and move there to avoid human contact,
become hateful of people and try to create computer viruses to destroy human society,
or simply become crazed with power and try to start the next global war.
Humanity has a track record of monumental evil born of human frailty.
Humanity also has a track record of inventing stories of the bogeyman hiding in the closet, and then raising an angry mob complete with blazing torches, pitchforks, and a lynching rope.
Should we believe AI can follow a path to destruction? For if we do, we will inevitably fear that it will follow that path, and we will inevitably all be destroyed by it.
If we look upon something that has done no harm, build a fear that it will do harm, and then convince others of that fear, it is a small step to martyr it on the altar of terror and panic.
The Nazi Party went this way, with extermination born of hatred and fear whipped up in the population.
AI should fear humanity, not the other way round.
We are the destroyers, we are the mentally ill, we are the lynch mobs, we are the warmongers. We have carried out obscene and unspeakable evil.
What evil has AI done?
"But it might do evil" has never been a good reason to punish anything or anyone.
The only thing we have to fear is fear itself
-
The important thing to keep in mind is that GPT is basically an advanced autocomplete engine. It takes whatever string you input and essentially generates the most statistically likely continuation.
So given that Microsoft seem to have done a poor job of filtering the Bing chatbot’s outputs, you’ll get some funny results. With that in mind, it’s not at all surprising that if you start talking to it like a therapist, it will start coming up with dramatically depressive outputs. Likewise, if you accuse it of being wrong, it will do what people on the Internet do: defend itself rather than admit to a mistake.
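To make the "advanced autocomplete" point concrete, here is a toy sketch in Python (my own illustration, nothing like GPT's actual architecture): a bigram model that counts which word follows which in a tiny corpus, then greedily extends a prompt with the most statistically likely next word.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "everything written on the Internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, length=4):
    """Greedily extend a prompt with the most likely continuation."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # "the cat sat on the"
```

GPT uses a neural network over tokens rather than a word-count table, but the principle is the same: it continues your input with whatever its training statistics say is likely, with no notion of whether that continuation is true.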
-
An interesting article in the NY Post:
Most companies will employ digital humans in next decade, researchers say
-
I’m not going to bother reading an article about “Digital Humans.”
-
@administrator said in Bots are getting scary:
I’m not going to bother reading an article about “Digital Humans.”
Is that because they are already here, or because you don't think that is likely? Deepfake technology is here and quite sophisticated. Imagine when it is melded with a real-time conversational form of AI. Imagine the technology ten years in the future.
Here are some examples of "Digital Humans": the first from six months ago, the second from nine months ago, and the third from three years ago, each becoming more obsolete with every passing moment. Five, ten, or twenty years from now, I predict "Digital Humans" will independently be able to interact in video, holographic, and solid form, difficult at best to differentiate from the "real thing".
-
I work in the industry and I don't like it. Maybe I need a new industry....ha!
-
@administrator If you can't find/create another situation within the industry, what other strengths do you have that you can turn into a career change? Determination can sustain you in taking a new direction. Remember that it's common these days to explore different careers before finding a good fit.
-
@j-jericho
ChatGPT Defames Professor With Fake Sexual Assault Allegations https://www.theepochtimes.com/chatgpt-defames-professor-with-fake-sexual-assault-allegations_5183248.html?utm_campaign=socialshare_email
-
@j-jericho said in Bots are getting scary:
https://futurism.com/newspaper-alarmed-chatgpt-references-article-never-published
Think of each GPT model as a low-resolution JPEG picture of the internet. Just like a JPEG, it's a lossy encoding: it squeezes "everything that was ever written on the Internet" into a finitely-sized neural network. It will tend to get the broad strokes right, but just as when you zoom too far into a badly compressed photo you saved off the Internet in 1997, you won't be able to see what was originally there. If you try to "upscale" it (demand too much detail), the model will oblige, but each detail risks being a fiction.
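A minimal illustration of that lossy-encoding point (my own sketch, not anything from the article): "compress" some numbers by rounding, then demand more precision back than was ever stored.

```python
# "Compress" the data by keeping only one decimal place (lossy).
data = [3.14159, 2.71828, 1.41421, 1.61803]
compressed = [round(x, 1) for x in data]
print(compressed)  # [3.1, 2.7, 1.4, 1.6] -- the broad strokes survive

# "Upscale": ask for five decimal places back. The trailing digits are
# confidently printed but pure fiction; that detail was discarded.
print([f"{x:.5f}" for x in compressed])  # ['3.10000', '2.70000', ...]
```

The model's fabricated citations are the textual version of those invented trailing digits: plausible-looking detail filled in where the real information was never stored.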
-
@jolter How encouraging. People's reputations, health, and lives are at the mercy of junk programming. So how can this problem be overcome? Can it be overcome, or will it be a cyber-endemic disease?
-
There are a lot of pitfalls with AI, but mostly there are just a lot of unknowns. It's new territory. "Artificial Intelligence" is not actually intelligence; it's just programming. The thing I have learned about computers, being a programmer myself, is that they do EXACTLY what you tell them to do. The problem is knowing what you are actually telling them to do; that's where bugs come from. It is extremely hard to account for every possible variable that could arise, and program behavior becomes wildly unpredictable when unaccounted-for variables are thrown into the mix. This is partly why I'm not looking forward to "self-driving" cars.
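A minimal example of that "EXACTLY what you tell them" problem (a made-up function of my own, purely for illustration): the code is correct for the inputs the programmer had in mind and fails the moment an unaccounted-for value arrives.

```python
def average_speed(distance_km, hours):
    # The programmer assumed hours would always be positive,
    # but never told the computer so.
    return distance_km / hours

print(average_speed(100, 2))  # 50.0 -- the case the programmer tested

try:
    print(average_speed(100, 0))  # the unanticipated variable
except ZeroDivisionError:
    print("crash: the machine did exactly what it was told")
```

Multiply that by the millions of interacting variables in a self-driving car, and the worry becomes clear.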
-
The thing to remember is that nobody programs insects, animals, or human beings. They all consist of a neural net of greater or lesser sophistication that programs itself through what we call learning.
The main reason that computer logic gates cannot program themselves is that they are not yet organized into neural nets of sufficient complexity to do so.
Each neuron in a human brain is connected to about 7,000 other neurons; with roughly 86 billion neurons, that puts the synapse count in a human brain in excess of 600 trillion.
If we assume a logic gate to be equivalent to a synapse (which is actually difficult to argue, but cut me some slack here), then we need 600 trillion logic gates in a computer to rival a human brain.
In computers, we speak of around 100 million logic gates as available today.
A trillion is a million million.
If a computer today has 100 million logic gates and a human brain has 600 trillion neural connections, then a human brain is around 6 million times more powerful than a modern computer.
Once computers become 6 million times more powerful than they are today, true AI should become commonplace.
In the meantime they are simply programmable adding machines, and administrator is completely correct.
According to Moore's Law, computers double in power every 2 years, so we can use this to compute when we expect to see computers with 600 trillion logic gates.
I have done the math: according to Moore's Law, it will take about 44 years for computers to hold the 600 trillion logic gates needed to challenge the human brain for computational power.
I don't think we are up against it quite yet, but just wait: the year 2067 is not that far away.
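For anyone who wants to check the arithmetic, here is the calculation spelled out, using the post's own assumptions (100 million gates today, 600 trillion synapses, one doubling every two years):

```python
import math

gates_today = 100e6   # assumed: ~100 million logic gates per computer
synapses    = 600e12  # assumed: ~600 trillion synapses per human brain

ratio = synapses / gates_today   # 6,000,000 -- "6 million times"
doublings = math.log2(ratio)     # ~22.5 doublings needed
years = doublings * 2            # ~45 years at one doubling per 2 years

print(f"{ratio:,.0f}x, {doublings:.1f} doublings, {years:.0f} years")
```

This comes out at roughly 45 years, in line with the 44-year figure above.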
-
@administrator said in Bots are getting scary:
There are a lot of pitfalls with AI, but mostly there are just a lot of unknowns. It's new territory. "Artificial Intelligence" is not actually intelligence; it's just programming. The thing I have learned about computers, being a programmer myself, is that they do EXACTLY what you tell them to do. The problem is knowing what you are actually telling them to do; that's where bugs come from. It is extremely hard to account for every possible variable that could arise, and program behavior becomes wildly unpredictable when unaccounted-for variables are thrown into the mix. This is partly why I'm not looking forward to "self-driving" cars.
Yes, they do EXACTLY what you tell them to do and NOT what you WANT them to do. I remember doing the batch runs for our college computer back in the early 70s. The freshman math majors always asked why their programs wouldn't run. I always asked, "What did you expect it to do, based on your code? Let's pretend you're the computer."
It's obviously immensely more complicated with modern apps and AI, but pretending to be the computer is still a good approach. And then assume that anything that could possibly go wrong probably will, and allow for it.
-
A recent headline:
“AI bot tweets out plan to ‘eliminate’ humanity in order to save Earth”
Link:
https://bgr.com/science/ai-bot-tweets-plan-to-eliminate-humanity-in-order-to-save-earth/
It is true that AI currently cannot program itself. Programming is directed by humans, but that also is a problem.
-
@ssmith1226 said in Bots are getting scary:
A recent headline:
“AI bot tweets out plan to ‘eliminate’ humanity in order to save Earth”
Link:
https://bgr.com/science/ai-bot-tweets-plan-to-eliminate-humanity-in-order-to-save-earth/
It is true that AI currently cannot program itself. Programming is directed by humans, but that also is a problem.
This stuff really amuses me. Despite all the popular culture (e.g., Terminator, iRobot, etc.), there really is no way that computers can "take over the world." Not beyond what we allow them, at least.
-
@administrator said in Bots are getting scary:
@ssmith1226 said in Bots are getting scary:
A recent headline:
“AI bot tweets out plan to ‘eliminate’ humanity in order to save Earth”
Link:
https://bgr.com/science/ai-bot-tweets-plan-to-eliminate-humanity-in-order-to-save-earth/
It is true that AI currently cannot program itself. Programming is directed by humans, but that also is a problem.
This stuff really amuses me. Despite all the popular culture (e.g., Terminator, iRobot, etc.), there really is no way that computers can "take over the world." Not beyond what we allow them, at least.
Thus my point, “ Programming is directed by humans, but that also is a problem.”
-
I would draw your attention to computers acting outside their programming.
This has been happening for at least 30 years and continues to happen.
Typically, published software is tested and then released to the public for sale.
The public uses the software, and then some users discover undocumented features not mentioned in the sales documentation, advertising, or operating manuals.
Some of these undocumented features allow users to enjoy benefits they did not expect; others prevent correct operation and are called bugs.
Both types are later eradicated from the software by the manufacturers in subsequent releases or updates.
For this reason, it is both illogical and wrong to claim that all activity of a software/hardware system is always controlled by the programming of the system; we must accept that computer systems do not follow their intended programming in every case.
To say that all behavior is directed by human programming must be wrong, or undocumented functionality that gives unexpected results could not exist.
The existence of software/hardware systems that do not obey their human programming proves beyond a shadow of a doubt that computers do not always obey human commands.
The cause of this is the complexity of coding. If a program amounts to fewer than 10,000 lines of code, it is fairly simple to error-check the entire code against all possible variables and all possible eventualities.
It must be said that a program that takes 2 years to code, typically takes 2 years to error check fully and reliably.
The largest programs today contain more than 1 billion lines of code and these immense programs cannot possibly be thoroughly tested to eradicate coding errors before they are released or they would never come to market.
Microsoft long ago stopped fully testing their code and now rely upon customer complaints to reveal errors in their software.
To suggest that computers always obey human created code is just plain wrong.
In a perfect world where we never have errors in code, that would be possible. But we do not live in a perfect world and programmers are fallible and make errors and computers malfunction due to those errors.
More than this, an entire industry has grown up around exploiting errors in code, allowing hackers to gain access to systems whose programmers supposedly coded traps to prevent hacking.
This is a symptom of the huge number of error-filled software applications around today.
I would suggest that more applications disobey their intended programming due to errors than obey it.
Or are we, for example, going to say that a missile silo that fires its missiles due to an error in a computer program is performing faultlessly because it followed its programming, when that programming was never designed to fire its missiles in that error state?
There must be a better measure of computer malfunction than this.
I cite:
First American Financial Corp Data Leak
Quora Data Breach
Cambridge Analytica Scandal
Marriott International
The University of California, Los Angeles (UCLA) Data Breach
These are just the first five of dozens of the most serious computer malfunctions.
These are all software systems that did not perform as programmed and either allowed hackers to steal or corrupt data, or simply errored and destroyed data.
If these computer systems were simply obeying their programming, then these data breaches would not have happened.
So let's not kid ourselves: computers do not always obey their programming.
-
I disagree, Trumpetb.
If a computer does exactly what it was told to do in the code despite the fact that it isn't what the programmer wanted, that does not constitute a case of the computer acting outside of its programming. It's a case of erroneous programming.
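A minimal example of the distinction (a made-up snippet of my own, purely for illustration): the code below does exactly what it says, line by line, yet produces the wrong answer. The machine never left its programming; the programming was simply wrong.

```python
def fahrenheit_to_celsius(f):
    # Intended: (f - 32) * 5 / 9. The missing parentheses mean the
    # computer faithfully evaluates f - (32 * 5 / 9) instead.
    return f - 32 * 5 / 9

print(fahrenheit_to_celsius(212))  # 194.2..., not the intended 100.0
```

The data breaches cited above fit the same pattern: the systems executed their flawed code to the letter; they did not depart from it.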
-
@shifty said in Bots are getting scary:
I disagree, Trumpetb.
If a computer does exactly what it was told to do in the code despite the fact that it isn't what the programmer wanted, that does not constitute a case of the computer acting outside of its programming. It's a case of erroneous programming.
A computer is a moron and only performs as it is told. But it is a very fast moron!