AI is Cheap But Doesn't Know What It's Doing
I keep wondering if AI-written posts will work out in the long run. They're cheaper than paying people to write, but they lack any real experience with the topics they cover.
They used to say "Don't try this at home, kids" when something was possibly dangerous or risky. I think that will be a problem with AI-generated posts. A computer can put together an article, but is it giving people information they can count on?
My mother isn't sure about AI-driven cars. I thought it was a good idea: safer for drivers who may be distracted, intoxicated, tired, nervous, or who just don't drive. All the AI needs to do is follow the lines on the road, know the traffic signals, and so on. After all, people drive themselves places half in a trance and are surprised when they get there. If all vehicles were AI-driven there would be very few unexpected incidents, because they would all be driving the same way. No second-guessing, no human error.
But this relies on everything being predictable for the AI. An AI only knows as much as it knows, no more. What if the road hasn't been maintained to the standard the AI needs, for instance? People who drive know it isn't easy to see the white lines on the road when the paint is worn away or during a storm. Can the AI deal with that, or will that be a time when the AI needs to shut down and the vehicle has to be driven manually, by a human?
What about AIs writing articles online, giving people information and advice? People stuff information into the computer and leave the AI to spit it out as an article. So the AI gives people advice about assorted topics: fast food restaurants, fashion choices, and cancer medications. Is the AI a trusted source? No. The AI is not an expert and has no experience or training. The AI only has the information given to it and the information it scrapes from other sites and sources online.
Who will be responsible if the AI gives people the wrong advice or bad information? I don't think it's just a small chance that this will happen. Fact checking has gone out of style, like proofreading, and I don't see much editing either. Yes, the AI can edit itself for punctuation and grammar, but it's pretty limited when it comes to fact checking. By the time the article is written, the AI considers it fact checked based on whatever information it had or came across. But it's just software. An influencer, not an authority.
Meanwhile, the real authorities will be at home washing their dishes and cleaning the garage, all those dirty, messy, tedious jobs we thought robots would do for us. What comes next? What else will AIs do "for" us?
"Don't try this at home, humans. I'm just an AI and don't know any better." Will that be the new warning when something could be dangerous or risky? Probably something more legalese than that.