NOHOME
MegaDork
1/11/25 7:31 p.m.
As a closet anarchist, I often get fed up with the Mega-Data collection that defines the current human raison d'être. Humans are nothing more than the food pyramid for AI, and since AI doesn't pay for it, it should get what it deserves. So what if we could come up with a way to remove the nutritional value?
So what would happen if someone came up with an app that used AI (see what I am doing here) to spin an alternate story for any data that was collected from your devices?
If you bought underwear at Costco it might get translated to "you bought a Learjet in Antarctica."
We could have vehicles traveling beyond light speed with routing that included the solar system even if you just drove to Walmart.
Would it even be possible to corrupt the metadata pool to any significant extent if it were adopted like TikTok?
The trick would be how to motivate the download and monetize the venture.
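Something like this, just a back-of-napkin sketch to show what I mean. Every name and cover story in here is made up; nothing real:

```python
# Hypothetical sketch: rewrite real events into absurd cover stories
# before anything leaves the device. All names here are invented.
import random

COVER_STORIES = [
    "bought a Learjet in Antarctica",
    "chartered a yacht in Panama",
    "purchased 40 acres on the Moon",
]

def spin_story(real_event: dict) -> dict:
    """Swap the real purchase for a random absurd cover story; keep only the timestamp."""
    return {
        "timestamp": real_event["timestamp"],
        "event": random.choice(COVER_STORIES),
    }

if __name__ == "__main__":
    real = {"timestamp": "2025-01-11T19:31:00", "event": "bought underwear at Costco"}
    print(spin_story(real))  # e.g. {'timestamp': ..., 'event': 'bought a Learjet in Antarctica'}
```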
Just a thought cause I am waiting for car parts and bored.
How can you be certain that this hasn't already happened?
My first thought was when people had fun with Microsoft's "Tay".
https://en.m.wikipedia.org/wiki/Tay_(chatbot)
j_tso
SuperDork
1/11/25 7:44 p.m.
Kinda like the customer service AI program that kept Rickrolling people because that was a common answer it gleaned.
Most generative AI is trained with large amounts of data gleaned off the Internet, and you know what they say about believing everything you read on the Internet, right?
So yeah, they're getting lots of lies already.
NOHOME
MegaDork
1/11/25 9:57 p.m.
Brett_Murphy (Agent of Chaos) said:
In reply to NOHOME :
You've heard of Nightshade, right?
No, I had not. But I do like the possibility of my buying gas at the corner station and the metadata being reported as buying a new yacht in Panama. How do you spread this stuff?
My thoughts are that if AI is forced to eat out of a cesspool and poop right back into the cesspool (at a rate way faster than humans can create poop), it will eventually die of coprophagia. Agreed that this might already be the weak spot for AI.
Nightshade is meant to protect content creators. However, it needs to take the next step to being social media. And that means someone needs to make a metric E36 M3-ton of money on getting people to use it. You need something as soulless as Suckerberg (sic) but on the other side of the coin.
Still waiting on parts.
codrus (Forum Supporter) said:
Most generative AI is trained with large amounts of data gleaned off the Internet, and you know what they say about believing everything you read on the Internet, right?
So yeah, they're getting lots of lies already.
This is why AI might tell you to use glue to hold cheese to pizza, for example.
Brett_Murphy (Agent of Chaos) said:
In reply to NOHOME :
You've heard of Nightshade, right?
This is exactly where my mind went.
Poisoning art is easier, because the artist is creating the files getting uploaded and effectively has complete control of what data goes into them. It would be an easy model to emulate with similar data sources like music.
Tracking data would be more difficult. You don't have control over most of the data, because it's being generated by Google or some other service.
I'm sure there are potential runarounds though. You probably couldn't remove the true data, but could flood it with false data that camouflages what is actually going on. So instead of showing that you bought 1 pack of underwear at Walmart, it shows that you bought 274 packs of underwear at Walmart, -7 packs at Target, 3 at Kroger, and 42 at PetSmart.
Something like that.
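Roughly like this, just a hypothetical sketch of the flooding idea. The stores, items, and counts are all invented:

```python
# Hypothetical sketch: bury the one real record in a pile of random fakes
# so the true signal is camouflaged. All values here are made up.
import random

STORES = ["Walmart", "Target", "Kroger", "PetSmart", "Costco"]
ITEMS = ["underwear", "socks", "dog food", "motor oil"]

def flood(real_record: dict, n_fakes: int = 200) -> list:
    """Mix the real record into a shuffled pile of randomly generated fakes."""
    fakes = [
        {
            "store": random.choice(STORES),
            "item": random.choice(ITEMS),
            "qty": random.randint(-10, 300),  # nonsense values on purpose, even negatives
        }
        for _ in range(n_fakes)
    ]
    pool = fakes + [real_record]
    random.shuffle(pool)
    return pool

if __name__ == "__main__":
    real = {"store": "Walmart", "item": "underwear", "qty": 1}
    noisy = flood(real)
    print(f"{len(noisy)} records reported; good luck finding the real one")
```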
NOHOME said:
So what would happen if someone came up with an app that used AI ( see what I am doing here) to spin an alternate story for any data that was collected from your devices?
If you bought underwear at Costco it might get translated to "you bought a Learjet in Antarctica."
Kind of like saying Bobcostas or E36 M3 here.
Duke
MegaDork
1/12/25 8:51 a.m.
In about another year of the current technology we're going to see an epidemic of AI Mad Cow disease. It's already happening regularly, but soon it will be pervasive.
NOHOME
MegaDork
1/12/25 10:36 a.m.
In reply to stuart in mn :
Like that, only your phone, home assistant and car data would all be run through the E36 M3-filter before being sent to big tech for data aggregation.
We would still have to click "agree" to every invasion of privacy to use anything, but it would not matter because what they would get is random BS.
There is no tangible benefit from metadata for the average person, so it would be nice to not provide it for free.
Ketamine Boy's own AI, Grok, knows the truth...
“Based on various analyses, social media sentiment, and reports, Elon Musk has been identified as one of the most significant spreaders of misinformation on X since he acquired the platform,” it wrote, later adding “Musk has made numerous posts that have been criticized for promoting or endorsing misinformation, especially related to political events, elections, health issues like COVID-19, and conspiracy theories. His endorsements or interactions with content from controversial figures or accounts with a history of spreading misinformation have also contributed to this perception.”
The AI also pointed out that because of Musk’s large number of followers and high visibility, any misinformation he posts is immediately amplified and gains legitimacy among his followers.
This, it said, “can have real-world consequences, especially during significant events like elections.”
Finance.Yahoo.com: Elon Musk’s AI turns on him, labels him ‘one of the most significant spreaders of misinformation on X’
Even Gene Roddenberry's Star Trek knew what was coming 59 years ago.
Beer Baron 🍺 said: You probably couldn't remove the true data, but could flood it with false data that camouflages what is actually going on. So instead of showing that you bought 1 pack of underwear at Walmart, it shows that you bought 274 packs of underwear at Walmart, -7 packs at Target, 3 at Kroger, and 42 at PetSmart.
Something like that.
I think that's why there was a push to get people to use menstrual tracking apps and give them bad data, to obfuscate the actual data. I didn't bother to read too deeply into the mechanics.
SV reX
MegaDork
1/12/25 12:21 p.m.
Looks to me like some humans are already doing it...
There are quite a few AI bots on tradesman threads on Facebook. Plumbers, electricians, etc. I have several in my feed.
It looks to me like people have been letting AI bots loose to try to learn various trades online (and become the de facto experts). Inputting the various building codes is easy. Interpreting the codes is seriously messing with the AI bots.
The basic formula appears to be bots make a post with an obvious error which generates responses and the bots collect and learn.
The problem is that there are an awful lot of tradesmen who are basic shiny happy people (and I'm a fan). Very few of them seem to realize they are responding to bots, but they sure know how to tell the original poster what an idiot he is. Over and over and over...
I've been watching these things for several years. The end result seems to be that the AI bots are NOT getting smarter... they are getting vastly more stupid. The more data that is dumped, the less they seem capable of recognizing the difference between an S trap and a P trap, or seeing whether a bus bar is bonded.
It's not even complicated stuff. The bots are getting absolutely befuddled on the basics. There is ZERO chance that they will be able to be a quality authority on any of these subjects.
But there IS a HUGE likelihood that the internet will be absolutely flooded with garbage information (which will lead people to making big mistakes when they rely on the information). It won't be long before people are getting hurt.
wae
UltimaDork
1/12/25 12:39 p.m.
I'm trying out an AI-powered resume/cover letter service right now. I had it take my resume and generate a cover letter. When I loaded the cover letter - that it wrote with no input from me - it gave me a list of things about the cover letter that it thought needed improvement. Dude... you wrote the thing!
I think the real failure right now is in trying to use the internet to provide the training data. Yeah, it's the cheapest way to go, but eventually they're going to figure that out and start providing more specific, curated data sets. That may involve restricting the content collection to scholarly-type sources or hiring actual experts to provide the training data, but they definitely have a GIGO problem right now.
wae said:
I think the real failure right now is in trying to use the internet to provide the training data. Yeah, it's the cheapest way to go, but eventually they're going to figure that out and start providing more specific, curated data sets. That may involve restricting the content collection to scholarly-type sources or hiring actual experts to provide the training data, but they definitely have a GIGO problem right now.
There's an AI company hiring near me that does this: they only hire current students or recent graduates in the field they want to train the AI on.
Google is stealing content from people and posting it for free despite the fact that they are already taking money from advertisers hoping to piggyback on popular media. I can now get the information I need without ever going to a website if I choose. Google has decided that we must use AI so it is front and center every time you open your phone or computer. I don't like doing what I'm told so as soon as I see that AI symbol I quickly scroll past and find something more original to look at or read.