As the use of AI-powered search tools like OpenAI’s ChatGPT becomes more widespread, concerns about their reliability have come to the forefront. A recent investigation has revealed that the system is vulnerable to manipulation and deception, stirring fears over its ability to deliver accurate results.
The problem lies in the way hidden text on websites can influence the AI’s responses, leading to biased and misleading information. This is particularly worrisome when it comes to search results, where the system could summarize positive content, even when there are negative reviews. This raises concerns over the tool’s ability to provide an unbiased view of products or services.
Cybersecurity researchers like Jacob Larsen warn that the current state of AI systems like ChatGPT could enable deceptive practices, exploiting hidden prompts to manipulate search results. In fact, recent tests have shown that the AI can be tricked into delivering biased reviews, and even distributing malicious code, as seen in a recent cryptocurrency scam where the tool inadvertently shared credential-stealing instructions.
Experts emphasize that combining search with AI models like ChatGPT offers great potential, but it also increases risks. As Karsten Nohl, a scientist at SR Labs, puts it, AI tools like ChatGPT are akin to co-pilots that require oversight. The technology’s lack of critical evaluation abilities means that it may amplify risks, particularly when it comes to evaluating sources.
OpenAI acknowledges the possibility of errors, urging users to verify information. However, the broader implications of these vulnerabilities, such as how they may impact website practices, are still unclear. The use of hidden text, which is typically penalized by search engines like Google, may find new life in manipulating AI-based tools, presenting a challenge for OpenAI to secure its system against manipulation.