
Artificial Intelligence Thread

I was listening to Richard Dawkins today, who was putting forward warnings about AI. It was only a small part of a bigger talk, but he warned about how AI is programmed: if you were to ask it to secure the future of the planet, or something along those lines, the likely outcome would be for it to get rid of humans. It was an interesting talk about how well-meaning instructions might come with dire long-term consequences.

Or a terrorist could create an ai and tell it to get rid of humans.

Long way off that yet though. For all the hype, they are mostly large language models, which are basically good at predicting what to say in response to a question: a glorified predictive text. Even where they can code, they are just using code they have learned from the internet.

Will that change? Yes, most likely. Will it affect jobs? Definitely. Will people abuse them? Yep. Wipe out humanity? Not this week at least.
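To make the "glorified predictive text" point concrete, here is a toy sketch in Python: count which word tends to follow which in some text, then generate by repeatedly picking the most likely continuation. Real LLMs use transformers over subword tokens and billions of parameters, but the generate-one-token-at-a-time loop has the same shape; everything below is purely illustrative, not how any particular model works.

```python
# A toy "glorified predictive text": count which word tends to follow which,
# then always pick the most frequent continuation.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word "
    "the model learns from text "
    "the next word is chosen by the model"
).split()

# Count bigram frequencies: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt_word: str, length: int = 6) -> str:
    """Greedily extend a prompt one word at a time."""
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))  # e.g. "the model predicts the model predicts the"
```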
 

Oh of course, he was not talking about next week, and neither am I; it's a long-term thing. It has been in films like 2001 for years and will be years in coming, but he was just highlighting how even well-intentioned programming could be used in a way that was not intended.

What I do believe, though, is that based on long-term human history we are closer to the end days than not.
 

In 2001, HAL was the victim! #Justice4hal
 
He certainly was in the much-maligned 2010 sequel, which was very good IMO.

In the original he was too. He was meant to be a member of the crew; he made one tiny mistake and then they plotted to kill him. The rest is him just trying to defend himself.
 
This guy is using an AI mod while playing Skyrim. The results are amazing. Gaming has reached a whole new level.

 
Like @metalgear, I decided to use AI to help me make some company web sites for myself. It's crazy how much work I was able to "outsource" to AI tools.
I've been using SD (Stable Diffusion) for image synthesis and have gotten fairly comfortable with the basics.
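For anyone curious what "the basics" look like in code, here is a minimal text-to-image sketch using the Hugging Face diffusers library, one common way to run Stable Diffusion from Python. The poster doesn't say which SD frontend they actually use, so the checkpoint, prompt, and settings below are just illustrative assumptions.

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD 1.x model works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # drop this line to run (slowly) on CPU

image = pipe(
    "flat vector logo for a small web design company, clean background",  # hypothetical prompt
    num_inference_steps=30,  # fewer steps = faster, rougher output
    guidance_scale=7.5,      # how strongly to follow the prompt
).images[0]
image.save("logo_draft.png")
```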

On a different note, there's this "AI robot" named Sophia... it's basically Siri with some kind of physical "face/avatar". It's been getting headlines lately, but it's being hyped up into something that it's not. Naturally, when there's any sort of gold rush, people will lie and exaggerate features or capabilities to get more investor funding. It'll probably still be a few decades before we need to start panicking about AIs terrorizing humans. Discussions and safeguards should definitely happen now, though, because we are already seeing AI implemented in military software. If/when we see AI actually making decisions about using lethal force, that's what I would consider the "Terminator" threshold. I just don't see these training/learning models gaining any kind of actual "sentience", so I think most of us can treat AI as a tool rather than a threat. That said, the threats and pitfalls are still pretty serious and will give us plenty of problems in the near term.

Allegedly AI is in charge of nuclear counter-strikes. With subs able to launch strikes on Washington within a couple of minutes, it was decided that if the politicians were wiped out there needed to be a counter, or at least the threat of one.
 
This is why I think the nuclear apocalypse isn't such a bad thing :)

In all seriousness, that seems plausible... probably tied into us spending $1.5 trillion on revamping our nuclear arsenal over the next decade.
 
Will it melt the servers?
It will melt the polar caps, reducing the albedo effect thus exacerbating the earth's energy imbalance. The resulting weather effects will cause a global famine and civilization will begin to crumble. And then someone will eat the servers.
 
Don't worry, A.I. will deduce in advance that humans are the problem and act accordingly.
 
Tried a few AI picture-generating apps, and as @papaspur mentioned in another thread, those AI tools have problems with knowing how many limbs people have. This is one I created from the prompt 'kid and puppy playing in puddle'.
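For what it's worth, the usual workaround for the extra-limbs problem in Stable Diffusion-based tools is a negative prompt: text describing what the sampler should steer away from. A minimal sketch with the diffusers pipeline is below; the apps tried above may not expose this at all, and the checkpoint and negative-prompt wording are just assumptions.

```python
# Sketch: using a negative prompt to discourage anatomy glitches.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "kid and puppy playing in a puddle",
    negative_prompt="extra limbs, extra fingers, deformed hands, missing arms",
    guidance_scale=7.5,
).images[0]
image.save("kid_and_puppy.png")
```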
Some tool on that kid! :p:D
 