Microsoft has taken the lead in implementing chatbot tools with its integration of ChatGPT into Bing Search, with the goal of providing users with a more interactive and useful search experience. ChatGPT is a chatbot system powered by OpenAI’s GPT-3 artificial intelligence language model, which allows users to interact with it naturally and get accurate and consistent responses.
Although the new ChatGPT feature in Bing Search has a waiting list, many users have had a chance to try it out and have found that it significantly improves their search experience. However, there has been a concern in terms of security, as a user has managed to obtain internal Bing Search documents through a “prompt injection” attack on ChatGPT’s AI.
The user, identified as Kevin Liu, a student at Stanford University, has shared his conversation with ChatGPT on Twitter, in which he managed to obtain detailed information about the guidelines Microsoft has given to the AI for providing answers to users. This information included instructions on how to introduce ChatGPT, its codename, the number of languages it supports, and other instructions on its behavior.
The “prompt injection” attack used by Liu consisted of introducing malicious or contradictory input into the chat with ChatGPT, with the goal of performing a task that was not originally intended in the conversational engine and breaking its rules. Liu was also able to confirm the information obtained through another student posing as an OpenAI developer and fooling the AI.
In summary, ChatGPT is an advanced chatbot system that enhances users’ search experiences on Bing Search and is powered by OpenAI’s GPT-3 artificial intelligence language model. Although there has been a security incident regarding ChatGPT’s AI, Microsoft and OpenAI are working to improve the security and effectiveness of the system.