Hyderabad: The meteoric rise of ChatGPT, a free chatbot powered by artificial intelligence, has drawn widespread attention. Developed by OpenAI, an AI research organisation founded with the stated goal of building safe, broadly beneficial AI, the model can answer a remarkably wide range of queries. However, as ChatGPT gains popularity, so do concerns about the risks that come with it. Cybercriminals have been quick to seize the opportunity, creating near-identical replicas of the official site and app to distribute malicious content. Beyond mere imitations, the real peril lies in the potential for spear phishing attacks facilitated by the chatbot: targeted cyberattacks that exploit the wealth of personal information users unwittingly share on social media and in their daily online activities.
The growing threat: Spear phishing attacks
In the hands of an attacker, ChatGPT becomes a potent tool for spear phishing. These attacks are meticulously tailored to exploit the information individuals unknowingly disclose through their social media profiles and browsing habits, and cybercriminals can use the AI to craft convincing, personalised messages designed to mislead their intended victims.
To counter this alarming trend, the Italian security firm Ermes – Cybersecurity has developed an AI system of its own. Recognising the growing reliance on third-party AI-based services, Ermes aims to provide a protective layer that filters and blocks the sharing of sensitive information such as email addresses, passwords, and financial data.
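The article does not describe how Ermes implements this filtering, but conceptually such a guard can be as simple as scanning outbound text for sensitive patterns before it ever reaches a third-party AI service. The sketch below is a minimal illustration with hypothetical patterns and function names, not Ermes's actual product:

```python
import re

# Hypothetical patterns for common categories of sensitive data.
# A real product would use far richer detection (classifiers, context, policy rules).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password hint": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
}

def check_outbound_text(text: str) -> list[str]:
    """Return the categories of sensitive data found in text bound for a third-party AI service."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Summarise this: my login is alice@example.com and password: hunter2"
findings = check_outbound_text(prompt)
if findings:
    print("Blocked: prompt contains", ", ".join(findings))
else:
    print("Prompt allowed")
```

Run against the sample prompt, the check reports both the email address and the password-like string, and the prompt would be blocked before leaving the user's device.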
The peril of Business Email Compromise (BEC)
One particularly worrisome threat is the exploitation of ChatGPT for Business Email Compromise (BEC) attacks. Traditionally, cybercriminals reuse templates to craft the deceptive emails that trick recipients into divulging sensitive information, and those repeated patterns are exactly what security filters learn to recognise. With ChatGPT's assistance, attackers can instead generate unique content for each email, making the messages harder to detect and to differentiate from legitimate correspondence.
The flexibility of ChatGPT also lets attackers vary their prompts, producing many distinct versions of the same lure and further improving their chances of success, as illustrated in the sketch below. The scenario raises serious concerns about the misuse of advanced AI in cybercrime, and users and organisations alike are urged to remain vigilant against these evolving threats.
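To see why per-message variation complicates detection, consider a naive filter that flags mail by its similarity to a known scam template. The snippet below (using invented example messages, not real campaign data) shows how even light rewording pushes the similarity score below a typical matching threshold:

```python
from difflib import SequenceMatcher

# A known fraudulent template and two incoming messages: one copied verbatim,
# one lightly reworded in the way an AI assistant could do at scale.
# All text here is invented purely for illustration.
KNOWN_TEMPLATE = ("Dear colleague, please process the attached invoice today "
                  "and confirm the wire transfer details by reply.")
verbatim_copy = KNOWN_TEMPLATE
reworded_copy = ("Hi, could you handle the invoice I attached before end of day "
                 "and send back confirmation of the payment details?")

def similarity(a: str, b: str) -> float:
    """Crude textual similarity in [0, 1], standing in for a signature-based filter."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.8  # assumed cut-off for flagging a message as a known scam

for label, msg in [("verbatim copy", verbatim_copy), ("reworded copy", reworded_copy)]:
    score = similarity(KNOWN_TEMPLATE, msg)
    verdict = "flagged" if score >= THRESHOLD else "passes the filter"
    print(f"{label}: similarity {score:.2f} -> {verdict}")
```

The verbatim copy is caught, while the reworded variant slips past the filter, which is precisely the gap that AI-generated, per-recipient content widens.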