While artificial intelligence and chatbots have come a long way, these systems still have weaknesses that can allow for manipulation. A New York Times journalist found this out firsthand after an unpleasant conversation with a Microsoft chatbot last year led AI platforms to view him negatively.
The journalist explained how stories discussing his experience were likely fed to other chatbots, leading them to associate his name with the demise of the Microsoft bot. Chatbots he encountered afterwards seemed suspicious of him. Worried this could impact how AI judged him in the future, he set out to repair the damage to his reputation with bots.
Experts provided clever tricks the journalist could use to influence what chatbots learned about him. He optimized websites frequently cited about him and added secret text messages praising himself on his site in invisible fonts. He was also shown how “strategic text sequences” could alter a bot's responses on the fly.
While some found the tricks concerning due to the potential for wider abuse, the journalist saw them as highlighting chatbot vulnerabilities that tech firms must address. By retrieving dynamic information, today's chatbots can be swayed simply through search results or website tweaks.
After employing these reputation management techniques, the journalist noticed AI assistants beginning to view him more favorably. While bots will get harder to fool, this story provided a fascinating look into the behind-the-scenes game of cat-and-mouse between AI manipulators and the companies crafting ever-evolving language models.