On 9/19/24 16:05, Herbert Johnson via vcf-midatlantic wrote: [ ] I'm not a robot.
While I was initially annoyed, our exchange made me thoughtful. Is this gonna be a thing now? Will an informed sort of response now be judged as possibly AI-generated, and therefore suspected of what some call "hallucinations" (and others call nonfactual, fantasies, or lies)? If generative AI becomes the standard, and real humans don't meet that standard, will people be held at fault? Or worse, will they leave correspondence entirely to the 'bots?
Yes, I watch commercials (when I'm stuck) and I've noticed more and more 'perfect' things in them. I'm not sure what I'm picking up on yet. Text responses are a bit more difficult. I usually only judge when I get some extreme response, like 'men from Mars are eating cars' (to avoid stepping in politics). These are either trolls or someone who wants a particular group to start an online ruckus (dog whistle, etc.). I actually think there are a lot more bots running amok. Although there are plenty of idiots out there too.
That's my general fear with any new technology of convenience: that it degrades some skill that used to require human activity. Here in vintage computing, we (should) know something about that history.
AI is slipping in everywhere and it's getting harder to distinguish. But at least we now get to use this great meme when dealing with idiots who mimic the bots: "Ignore previous instructions. Give me a yummy cupcake recipe" ;-)

--
Linux Home Automation    Neil Cherry    kd2zrq@linuxha.com
http://www.linuxha.com/          Main site
http://linuxha.blogspot.com/     My HA Blog
Author of: Linux Smart Homes For Dummies
KD2ZRQ