Google warns employees about sharing sensitive data with Bard
The Google Bard FAQ notes that the company collects conversation history, location, feedback, and usage information when interacting with the chatbot.
Google has reportedly warned employees about sharing sensitive information with AI chatbots such as ChatGPT and the company's own Bard. As Reuters reported, the warning is aimed at safeguarding sensitive information, since large language models (LLMs) like Bard and ChatGPT can absorb such data during training and leak it at a later stage. Sensitive information can also be viewed by the human reviewers who act as moderators. The report adds that Google engineers have been cautioned against directly using code generated by AI chatbots.
The Google Bard FAQ notes that the company collects conversation history, location, feedback, and usage information when users interact with the chatbot. The page says: "This data helps us provide, improve, and develop Google's machine learning products, services, and technologies."
However, the report suggests that Google employees can still use Bard for other tasks. Google's warning somewhat contradicts its earlier stance on Bard. After the software giant launched Bard earlier this year to compete with ChatGPT, employees were asked to use the AI chatbot to test its strengths and weaknesses.
Google's warning to its employees echoes a security standard many corporations are adopting. Some companies have banned the use of publicly available AI chatbots. Samsung was one of the companies that allegedly prohibited the use of ChatGPT after some employees were caught sharing sensitive information.
In a statement, Google told the publication that the company wanted to be "transparent" about Bard's limitations, noting that Bard may make unwanted code suggestions but still helps programmers. The AI chatbot can also compose emails, review code, correct long essays, solve math problems, and generate images in seconds.
Speaking about security concerns with free-to-use AI chatbots, Cloudflare CEO Matthew Prince said that sharing private information with chatbots was like "unleashing a bunch of PhD students on all their private records."
Cloudflare, which offers cybersecurity services to businesses, is marketing a capability that lets companies label certain data and restrict its flow outside the organisation. Microsoft is also working on a private version of ChatGPT for enterprise customers; its partnership with OpenAI allows Microsoft to build and market platforms under the ChatGPT moniker. The private chatbot is said to be built on Microsoft's cloud networks. It is unclear whether Microsoft has imposed restrictions on using Bing Chat similar to those Google has placed on Bard.
© 2024 Hyderabad Media House Limited/The Hans India. All rights reserved.