To Prevent Data Leakage, Big Techs Are Restricting The Use of AI Chatbots For Their Staff

The UK published an AI white paper launched to drive responsible innovation and public trust, built on these five principles:

  • safety, security, and robustness;
  • transparency and explainability;
  • fairness;
  • accountability and governance;
  • contestability and redress.

As we can see, as AI becomes more present in our lives, especially at the speed at which this is happening, new concerns naturally arise. Security measures become necessary while developers work to reduce the dangers without compromising the evolution of what we already recognize as a big step toward the future.


ChatGPT accounts on Dark Web Marketplace

Time is running out as governments and technology communities around the world discuss AI policies. The main concern is keeping humanity safe from misinformation and all the risks it involves.

And the discussion is heating up now that fears related to data privacy are growing. Have you ever thought about the risks of sharing your information with ChatGPT, Bard, or other AI chatbots?

If you haven’t, then you may not yet know that technology giants have been taking serious measures to prevent information leakage.

In early May, Samsung notified its staff of a new internal policy restricting AI tools on devices running on its networks, after sensitive data was accidentally leaked to ChatGPT.

“The company is reviewing measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” said a Samsung spokesperson.

The spokesperson also explained that the company will temporarily restrict the use of generative AI on company devices until the security measures are ready.

Another giant that adopted a similar measure was Apple. Samsung’s rival is also concerned about confidential data leaking out, and its restrictions include ChatGPT.

Even you, Google?


Another factor that could generate sensitive data exposure is that, as AI chatbots become more and more popular, employees around the world are adopting them to optimize their routines, most of the time without any caution or supervision.

Yesterday, a Singapore-based global cybersecurity solutions leader reported finding more than 100,000 compromised ChatGPT accounts, with credentials saved in the logs of information-stealing malware. This stolen information has been traded on illicit dark web marketplaces since last year. The firm highlighted that, by default, ChatGPT stores the history of queries and AI responses, and this lack of essential care is exposing many companies and their employees.
