Google warns employees about sharing sensitive data with Bard

Update: 2023-06-16 14:30 IST

Google has reportedly warned employees about sharing sensitive information with AI chatbots such as ChatGPT and the company's own Bard. As Reuters reported, the warning is aimed at safeguarding sensitive information, since large language models (LLMs) such as Bard and ChatGPT can use that data for training and may leak it at a later stage. Sensitive information can also be viewed by the human reviewers who act as moderators. The report highlights that Google engineers are also cautioned against using code generated by AI chatbots.


The Google Bard FAQ notes that the company collects conversation history, location, feedback, and usage information when users interact with the chatbot. The page says: "This data helps us provide, improve, and develop Google's machine learning products, services, and technologies."

However, the report suggests that Google employees can still use Bard for other tasks. Google's warning somewhat contradicts its earlier stance on Bard: after the company launched the chatbot earlier this year to compete with ChatGPT, employees were asked to use it to test its strengths and weaknesses.

Google's warning to its employees echoes a security standard many corporations are adopting. Some companies have banned the use of publicly available AI chatbots. Samsung was one of the companies that allegedly prohibited the use of ChatGPT after some employees were caught sharing sensitive information.

In a statement, Google told the publication that the company wanted to be "transparent" about Bard's limitations. The company notes: "Bard may make undesired code suggestions, but it still helps programmers." The AI chatbot can also compose emails, review code, correct long essays, solve math problems, and generate images in seconds.

Speaking about security concerns with free-to-use AI chatbots, Cloudflare CEO Matthew Prince said that sharing private information with chatbots was like "unleashing a bunch of PhD students on all their private records."

Cloudflare, which offers cybersecurity services to businesses, markets a capability that lets companies tag certain data and restrict it from flowing externally. Microsoft is also working on a private ChatGPT chatbot for enterprise customers; its partnership with OpenAI allows the company to build and market products under the ChatGPT moniker. The private ChatGPT chatbot is said to be built on Microsoft's cloud networks. It is unclear whether Microsoft has imposed restrictions on the use of Bing Chat similar to those Google has placed on Bard.
