Samsung Electronics workers inadvertently leaked confidential corporate data on at least three occasions while interacting with ChatGPT, the AI-powered chatbot developed by US AI research and deployment company OpenAI.
Samsung had previously banned the use of ChatGPT in its workplaces to avoid leaks of internal confidential information; however, less than three weeks ago the company granted its employees access to the chatbot.
According to a report from the South Korean business news outlet Economist, two of the leaks occurred when Samsung employees entered sensitive information, such as semiconductor equipment measurement data and source code, into ChatGPT, making it part of the AI’s learning database and accessible not only to Samsung but to anyone using the chatbot.
The third leak happened when a Samsung employee sent ChatGPT an excerpt from a corporate meeting and asked it to create meeting minutes.
Notably, OpenAI warns its users against sharing any sensitive information in their conversations, as the company is unable to delete specific prompts from a user’s history.
According to Economist, Samsung has taken measures to prevent further leaks, including warning employees about the information they provide to ChatGPT and limiting each prompt to 1,024 bytes.
Privacy concerns over ChatGPT have been mounting over the past few weeks following the company’s disclosure of a bug in the tool that allowed some users to see titles from another active user’s chat history and exposed payment data of 1.2% of ChatGPT Plus subscribers.
Last week, Italy’s data protection authority temporarily banned ChatGPT and launched a probe into the AI tool’s suspected breach of privacy laws. The watchdog alleges that the app has been illegally collecting user data and failing to protect minors.