[ChatGPT/Bing Chat] Risks of using interactive AI you should know about

Interactive AI is easy and free for anyone to use, but before using it for business or other purposes, you should know the risks involved.

Posted at: 2023.4.12

Risks of using interactive AI

Leakage of confidential and personal information

The biggest risk in using interactive AI such as ChatGPT and Bing Chat is the leakage of confidential and personal information.

Although the interface of an interactive AI may give the impression of "a conversation between an AI and a user in a closed space," in reality the conversation takes place through communication with the servers on which the AI runs, and the conversation data is retained by the operator.

It is not always clear how each operator retains or uses this conversation data, but OpenAI, which operates ChatGPT, states that "data posted on ChatGPT and other services may be used to improve the AI model."

So if, for example, you accidentally enter confidential information, the AI may learn from what you entered and reuse it in answers to other users.

Risk of using incorrect information

ChatGPT and Bing Chat are very advanced interactive AIs, but they do not always give correct answers.

One reason interactive AIs return incorrect information is that ChatGPT's GPT-3.5 answers from training data that only goes up to 2021. More importantly, current interactive AIs put more emphasis on "making the conversation work" than on the correctness of the answer: the responses are smooth and human-like, and they will answer almost anything.

It can therefore look like a "super AI that knows everything," but its actual function is to produce a "plausible" conversation (answer), and it cannot judge whether the information it gives is correct or not.

For example, if you keep replying "wrong" to a correct answer, the AI may back down and give you an incorrect answer, even though its original answer was correct.

Although AI services are tuned not to answer unethical questions or questions that involve human life and other high-stakes matters, users may still face risks they did not anticipate as a result of acting on misinformation.

Therefore, when using interactive AI as a source of information where accuracy is critical, due care is required, including checking the answers against professional literature and experts.

Risk of copyright infringement

Interactive AI learns from information available on the Internet, some of which is protected by copyright.

For example, ChatGPT can write programs, but it is unclear where the code it learned from was obtained and who holds the copyright to it. It is therefore possible that code ChatGPT returns reproduces code that was not open source, which would be a copyright violation.

Similarly, when you have the AI create lyrics or other text for you, there is a non-zero chance that the output copies someone else's copyrighted work.

If you only apply it to your own writing or programs, for example to summarize or test them, there is no problem, but it is important to understand that having ChatGPT and similar tools create something from scratch carries this risk, however easy it may be.

How to avoid risk

Do not allow it to be used with confidential or personal information

Currently, there is no way to protect confidential or personal information once it has been entered into hosted interactive AI services such as ChatGPT and Bing Chat. Therefore, the biggest risk-avoidance measure at present is to "not allow people who handle confidential or personal information to use these services."
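If an organization still wants to allow limited use, one complementary idea is to screen prompts before they are sent. The following is a minimal Python sketch of such a pre-submission filter; the regular expressions and the INTERNAL_KEYWORDS list are illustrative assumptions, not a complete data-loss-prevention solution.

# Minimal sketch of a pre-submission filter that redacts a few obvious
# sensitive patterns before a prompt is sent to an external chat AI.
import re

# Hypothetical list of terms a company treats as confidential.
INTERNAL_KEYWORDS = ["ProjectX", "customer_db", "release-roadmap"]

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\b\d{2,4}-\d{2,4}-\d{3,4}\b")

def redact(text: str) -> str:
    """Replace e-mail addresses, phone-number-like strings, and internal keywords with placeholders."""
    text = EMAIL_PATTERN.sub("[REDACTED_EMAIL]", text)
    text = PHONE_PATTERN.sub("[REDACTED_PHONE]", text)
    for keyword in INTERNAL_KEYWORDS:
        text = text.replace(keyword, "[REDACTED_KEYWORD]")
    return text

prompt = "Mail taro@example.com about ProjectX, tel 03-1234-5678."
print(redact(prompt))
# Prints: Mail [REDACTED_EMAIL] about [REDACTED_KEYWORD], tel [REDACTED_PHONE].

A filter like this only catches obvious patterns; it does not remove the underlying risk, which is why blocking access entirely is the measure many companies have chosen.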

The news that employees of Samsung had ChatGPT fix source code containing confidential information became a hot topic. Some companies have already taken measures to prohibit (block) access to ChatGPT and similar services from their own networks.

Since the appearance of ChatGPT and Bing Chat, development of stand-alone (self-hosted) interactive AI has been progressing, and running such an AI in your own closed environment is another way to avoid these risks.
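As a rough illustration of what "closed environment" use can look like, the short Python sketch below runs a small open text-generation model entirely on the local machine with the Hugging Face transformers library, so prompts never leave the machine once the model files are cached. The model name distilgpt2 is only an example, and the snippet assumes transformers and a backend such as PyTorch are installed.

# Minimal sketch: run a small open text-generation model locally so that
# prompts are not sent to an external service.
from transformers import pipeline

# "distilgpt2" is only an example of a small, freely downloadable model.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "List three risks of pasting confidential data into a hosted chat AI:"
result = generator(prompt, max_new_tokens=80, num_return_sequences=1)

# The prompt and the output stay on this machine; no external API is called
# once the model files are cached locally.
print(result[0]["generated_text"])

A small model like this will not match ChatGPT's quality, but the same pattern applies to larger self-hosted models if you have the hardware to run them.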

Learn how to use it correctly

Interactive AIs such as ChatGPT and Bing Chat may look as if they can do anything because of their high degree of freedom and polish, but they are by no means always correct or perfect.

However, there is no doubt that they are useful tools that can support users, so learning how to use them properly is the best first step toward avoiding these risks.

Whatever the case, it should be noted that freely introducing interactive AI into business settings still carries risk at present.
