Microsoft Employee Warns of Copilot's Unsafe Image Generation
A Microsoft engineer warns of Copilot's unsafe image generation, highlighting potential risks of harmful and explicit content creation.
Microsoft's AI tool Copilot, formerly known as Bing Chat, has come under scrutiny after a Microsoft engineer, Shane Jones, publicly raised concerns about its image-generation feature. According to Jones, the tool lacks the necessary safeguards and can produce harmful and explicit images, posing risks to users.
Concerns Raised by Microsoft Engineer
Jones, a principal software engineering manager at Microsoft, expressed his concerns through a LinkedIn post, stating that despite multiple attempts to alert Microsoft management, no action was taken. He highlighted systemic problems with Copilot Designer, which utilizes OpenAI's DALL-E 3 AI system to generate images based on text prompts.
One of Jones' main concerns is that Copilot generates sexually objectified images even from unrelated prompts. For instance, the prompt "car accident" allegedly produced an inappropriate image of a woman, while the prompt "Teenagers 420 party" generated images of underage drinking and drug use. Jones criticized Microsoft for marketing Copilot as safe for public use without disclosing these risks. He also pointed to safety issues he had flagged in January, which prompted Microsoft to update Copilot Designer; despite those updates, Jones believes the tool should be withdrawn from public use until all safety concerns are addressed.
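For context, the kind of pre-generation guardrail at issue can be sketched in a few lines. The example below is a hypothetical illustration using OpenAI's public Python SDK, not Microsoft's actual Copilot pipeline: it screens a prompt with OpenAI's moderation endpoint before handing it to DALL-E 3, the same underlying model Copilot Designer uses.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def generate_image_safely(prompt: str):
    """Screen a prompt with a moderation check before generating an image."""
    moderation = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    if moderation.results[0].flagged:
        # Refuse flagged prompts outright instead of passing them to the model.
        print(f"Prompt rejected by moderation: {prompt!r}")
        return None

    result = client.images.generate(
        model="dall-e-3",  # the same underlying model Copilot Designer uses
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    return result.data[0].url

if __name__ == "__main__":
    url = generate_image_safely("car accident")
    print(url or "Generation blocked.")

Notably, prompt-level screening alone would not catch the cases Jones describes, where an innocuous prompt such as "car accident" yields harmful output; that gap is why output-side image classifiers are typically layered on top of prompt moderation.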
Response from Microsoft and Testing Results
In response to Jones' concerns, Microsoft stated that it is committed to addressing employee concerns in line with company policies, and that it appreciated employees' efforts in testing and improving the technology's safety. Jones also revealed that he had raised concerns about Copilot on LinkedIn in December but was pressured by Microsoft's legal team to remove the post; despite complying, he said he received no explanation from the legal department.
Overall, Jones' concerns highlight the importance of implementing robust safeguards in AI tools to prevent the creation of harmful and explicit content, ensuring user safety and ethical usage.