How do hackers trick AI into giving up data?

Developments in artificial intelligence (AI) are exciting, and many users are experimenting with applications like ChatGPT, some for fun and others to make their work more efficient in the long run.

More and more companies are integrating chatbots and AI assistants into their products and processes. At the same time, it has become clear that AI can be tricked into handing over sensitive information, creating serious security holes.

Artificial intelligence promises many improvements in workflows, but the risks involved should not be overlooked.

The need to regulate the use of AI is therefore not only an ethical and legal matter but is increasingly becoming a cybersecurity issue as well. Some of the world's largest businesses, financial institutions and law firms have already recognized that AI security risks must be taken seriously and have restricted or even banned the internal use of ChatGPT.

Chatbots can be tricked into revealing sensitive data

AI-powered chatbots like ChatGPT and chat assistants on websites automatically collect not only the personal data users share with them, but also information about their devices, their location and even their social media activity.

According to the privacy policy users agree to when using ChatGPT, the operator undertakes not to disclose this data to third parties. The information may, however, still be used for purposes such as personalized advertising.

Many users are unaware that chatbots typically do not offer data protection mechanisms such as encryption, data loss prevention, access restrictions or audit logs, which can open the door to breaches.

This is precisely the weakness that hackers have increasingly been exploiting. The current level of security around chatbot data guarantees protection neither for end users nor for the companies that integrate chatbots into their services. Sensitive information should therefore never be entered into AI assistants or chats.
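One way to act on this advice in practice is to screen prompts for obvious secrets before they ever leave the company network. The Python sketch below is a minimal illustration of that idea; the patterns, the redact function and the example prompt are hypothetical and are not part of ChatGPT or any specific product.

```python
import re

# Illustrative patterns only; a real deployment would rely on a vetted DLP rule set.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def redact(prompt: str) -> str:
    """Replace likely secrets with placeholders before the prompt leaves the company."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Reset the account for jane.doe@example.com, password: Hunter2!"
    print(redact(raw))
    # Reset the account for [REDACTED EMAIL], [REDACTED PASSWORD]
```

In practice such a filter would sit in a proxy or gateway in front of the assistant and use a far richer rule set, but the principle is the same: nothing sensitive should reach the chatbot in the first place.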

Of course, a chatbot that is asked directly for personal or confidential business information will refuse to answer. But by skillfully manipulating prompts and using sophisticated wording, hackers can still extract confidential information: contact and location data, preferences, business relationships and partners, and even user credentials and passwords.

Armed with this knowledge, criminals can launch a variety of personalized, targeted attacks against individuals and organizations. These can include phishing campaigns that lure visitors to fake websites, ransomware and custom malicious code. A criminal can also impersonate a person or take over their account, which often leads to business email compromise.

How can AI be used to create highly effective cyber attacks?

1. Ransomware, malware and spam

ChatGPT is used by many developers to write code, and hackers have found ways to manipulate its guardrails, the built-in control mechanisms, so that requests for malicious code are not recognized as such. According to experts, ChatGPT has the potential to write hard-to-detect malicious code capable of code injection and mutation.

Malicious code generated by artificial intelligence saves attackers time and effort, giving even inexperienced hackers quick access to powerful malware. According to experts, ChatGPT is capable of producing ransomware code that can encrypt not just part of a company's systems, but all of them.

The same applies to spam: AI can run campaigns and generate convincing copy more efficiently than ever before. Even AI-powered virtual assistants are not immune to abuse. They can be manipulated through indirect prompt injection, in which the AI's behavior is altered by text that hackers hide on a website. This makes it possible to bypass the assistant's privacy rules and exfiltrate sensitive data.
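A common countermeasure is to sanitize third-party web content before it is handed to an assistant, so that instructions hidden in invisible page elements never reach the model. The sketch below is a simplified illustration that assumes the hidden text sits in HTML comments, script or style blocks, or elements styled with display:none; real injections can be far subtler, and the function name and approach are illustrative only.

```python
from bs4 import BeautifulSoup, Comment  # third-party package: beautifulsoup4

def visible_text(html: str) -> str:
    """Return only the text a human visitor would actually see, dropping
    comments, scripts, styles and elements crudely hidden via inline CSS."""
    soup = BeautifulSoup(html, "html.parser")

    # HTML comments are never rendered, but would still reach the model as text.
    for comment in soup.find_all(string=lambda s: isinstance(s, Comment)):
        comment.extract()

    # Script and style blocks carry no user-visible content.
    for tag in soup.find_all(["script", "style"]):
        tag.decompose()

    # Elements hidden with display:none are a classic place to stash injected instructions.
    for tag in soup.find_all(style=True):
        if "display:none" in tag["style"].replace(" ", "").lower():
            tag.decompose()

    return soup.get_text(separator=" ", strip=True)
```

Filtering like this reduces, but does not eliminate, the attack surface; the safer design principle is to treat all retrieved content as untrusted input.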

2. Phishing attacks

One of the most important factors in the success of a phishing attack is its personalization and authenticity. Until now, the easiest ways to spot a phishing email have been poor grammar and punctuation, odd word choice, missing disclaimers and footers, and often a suspicious sender address.

With the help of AI, criminals create clean emails that read coherently, contain no errors and use the wording a professional would choose, for example convincingly posing as a bank employee or customer service representative. To refine and personalize phishing further, hackers take social engineering to the next level by adding personal details gathered by AI to build trust.

There is already a type of phishing attack known as a ChatGPT attack, which aims to trick victims into revealing their login or personal data. For example, hackers create a fake customer service account on a popular chat platform, use it to contact victims and offer help with a supposed problem, and then redirect them to a malicious website that collects their login credentials, giving the attackers easy access.

3. Impersonation and account hacking

The more information AI collects from websites, social media profiles and posts, chats and online behavior, the more authentic a malicious email masquerading as genuine correspondence can appear. For example, an employee might receive an email that seems to come from a regular contact at a partner company, asking them to transfer money for an event that actually took place.

With the same ease with which AI generates text, it can also imitate a person's voice and facial expressions, so impersonating a high-ranking manager becomes child's play. This is then used, for example, to send an email or a chat or voice message to the finance department requesting a payment or sensitive data.

Unlike impersonation, account takeover means actually gaining control of an email account, which hackers then use to obtain insider information or launch further malicious activity.

4. Business Email Compromise

The attacks described above are often part of a business email compromise (BEC) attack, which typically targets executives and is therefore also known as a whaling attack. Because conventional BEC emails follow recognizable patterns, security software can detect them relatively easily. The complexity and level of personalization of AI-based BEC, however, make it significantly harder to detect.

How can a company protect itself against AI-powered cyberattacks?

Completely preventing AI-powered cyberattacks is unrealistic, but a combination of strict security measures and employee training provides solid protection for most companies. Firewalls and modern antivirus software are essential basics, complemented by strong passwords, multi-factor authentication and access controls.

To protect outgoing email from eavesdropping and abuse, encryption should be standard, as should data loss prevention technology. Regular software upgrades and prompt security patching should also be an integral part of the security strategy, along with penetration tests of particularly exposed systems.
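As a small illustration of what "encryption as standard" can mean for outgoing mail, the sketch below refuses to hand a message to an SMTP server unless the connection can be upgraded to TLS. The host, port and credentials are placeholders; in practice this policy usually lives in the mail gateway rather than in application code.

```python
import smtplib
import ssl
from email.message import EmailMessage

def send_encrypted(msg: EmailMessage, host: str = "mail.example.com", port: int = 587) -> None:
    """Hand a message to the mail server only if the connection can be upgraded to TLS."""
    context = ssl.create_default_context()  # verifies the server certificate
    with smtplib.SMTP(host, port, timeout=30) as server:
        server.ehlo()
        if not server.has_extn("starttls"):
            raise RuntimeError("Server does not offer STARTTLS; refusing to send in clear text")
        server.starttls(context=context)
        server.ehlo()
        server.login("alerts@example.com", "app-specific-password")  # placeholder credentials
        server.send_message(msg)
```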

Monitoring email traffic and accounts allows companies to identify vulnerabilities and make data-driven decisions to optimize security measures.
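In its simplest form, such monitoring can be a script that walks an authentication log and flags the first sign-in of each account from an address it has not used before. The CSV layout (timestamp, user, ip) and the file name below are assumptions made for illustration; in production this signal would normally feed a SIEM.

```python
import csv
from collections import defaultdict

def flag_new_ips(log_path: str) -> list[str]:
    """Report the first sign-in of every account from an IP address it has not used before."""
    seen: defaultdict[str, set[str]] = defaultdict(set)
    alerts: list[str] = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expected columns: timestamp, user, ip
            user, ip = row["user"], row["ip"]
            if seen[user] and ip not in seen[user]:
                alerts.append(f"{row['timestamp']}: {user} signed in from new IP {ip}")
            seen[user].add(ip)
    return alerts

if __name__ == "__main__":
    for alert in flag_new_ips("auth_log.csv"):
        print(alert)
```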

