Six cybersecurity threats associated with generative artificial intelligence and how to mitigate them.

What risks does generative artificial intelligence pose and how can businesses protect themselves?

ChatGPT and its competitors have brought generative artificial intelligence (AI) into the mainstream. Generative AI, which allows users to create, combine and repurpose content, is considered a transformative technology for business.

However, as the use of generative AI grows, so do cybersecurity concerns, because the technology enables more targeted cyberattacks. Hackers can use generative AI to craft convincing phishing emails, and it makes deepfakes even more believable.

Generative AI also makes it easier to create malware. For example, in 2018, IBM researchers unveiled a proof-of-concept malware called DeepLocker that used an AI model to conceal its malicious payload until it reached a specific target, making it extremely difficult for security tools to detect and block.

Attackers use off-the-shelf libraries and machine learning platforms such as TensorFlow or PyTorch to create generative models, says a cybersecurity expert at Mainton. “These tools are widely available and easy to use, which has lowered the barrier to entry for attackers looking to use AI in their attacks.”
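To illustrate how low that barrier has become, here is a minimal sketch of running an off-the-shelf pretrained text generation model. The Hugging Face transformers library and the gpt2 model are our assumptions for illustration; the article itself only names TensorFlow and PyTorch, but the point is the same - a handful of lines is enough:

```python
# Minimal sketch: an off-the-shelf generative model runs in a few lines.
# Assumes `pip install transformers torch`; the gpt2 model is illustrative.
from transformers import pipeline

# Download and load a stock pretrained text generation model.
generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of an arbitrary prompt.
result = generator("The quarterly report shows", max_new_tokens=40)
print(result[0]["generated_text"])
```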

There are several types of generative AI, each of which can potentially be used in cyberattacks. So what new risks do the different types of generative AI pose, and how can businesses protect themselves as the technology advances?

Text-Based Generative Artificial Intelligence

Text-based generative AI like ChatGPT makes phishing attacks much more sophisticated and difficult to detect. “AI campaigns can create highly personalized emails to enable spear phishing at scale,” says a senior solutions architect at Mainton.

Since text-based generative AI models are currently the most mature, experts say this type of attack will have the greatest impact in the near future. They can be used to create personalized phishing emails or disinformation campaigns targeting an organization or individual.
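One practical countermeasure on the receiving side is strict email authentication. The sketch below is a minimal illustration, assuming the dnspython package (the helper name is ours): it looks up the DMARC policy a sender domain publishes, which mail gateways use to quarantine or reject spoofed messages regardless of how well-written they are.

```python
# Sketch: look up a sender domain's DMARC policy (assumes `pip install dnspython`).
import dns.resolver

def dmarc_policy(domain: str):
    """Return the published DMARC policy ('none', 'quarantine' or 'reject'), or None."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # the domain publishes no DMARC record
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            # The record is a ';'-separated list of tag=value pairs; 'p' is the policy.
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.strip().lower()
    return None

print(dmarc_policy("example.com"))
```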

In the future, text-based generative AI could also be used to automate live chat, allowing attackers to target companies through their own web chat services, according to Mainton's chief solutions consultant.

Video-Based Generative Artificial Intelligence

Going forward, video-based generative AI could enhance deepfake attacks designed to trick employees into transferring large sums of money to criminals. For example, an attacker could use video generation to create a deepfake of a company executive for social engineering attacks or to spread disinformation.

Alternatively, an attacker could use a video generation model to create fake footage of a CEO instructing employees to transfer money or disclose sensitive information. Video models can also be used to bypass facial recognition security measures in identity-based attacks or to impersonate company employees in spoofing attacks.

Audio-Based Generative Artificial Intelligence

Voice cloning is just one application of audio-based generative artificial intelligence, and it's easy to see how it could be used for nefarious purposes. An attacker can use audio generation models to create a convincing voice phishing call that appears to come from a trusted source, such as a bank or credit card company.

Alternatively, an attacker could use the audio generation model to create a fake audio clip of the CEO instructing employees to take a specific action.
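Whatever the medium, the core mitigation is the same: a voice or video instruction alone should never authorize a sensitive action. The sketch below is purely illustrative (all names, fields and the threshold are hypothetical) and shows the shape of an out-of-band verification and dual-approval policy:

```python
# Illustrative policy check: a voice or video instruction alone never moves money.
# All names, fields and the threshold are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferRequest:
    amount: float
    requested_via: str        # e.g. "phone", "video_call", "email"
    callback_verified: bool   # re-confirmed via a number from the company directory
    second_approver: Optional[str] = None

DUAL_APPROVAL_THRESHOLD = 10_000.0

def may_execute(req: TransferRequest) -> bool:
    """Require an out-of-band callback for any remote instruction,
    plus a second approver above the threshold."""
    if not req.callback_verified:
        return False  # never act on the incoming call or clip itself
    if req.amount >= DUAL_APPROVAL_THRESHOLD and req.second_approver is None:
        return False
    return True

# A convincing "CEO" call is refused until verified through a known-good channel.
print(may_execute(TransferRequest(50_000, "phone", callback_verified=False)))  # False
```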

Generative text-to-speech AI, such as Microsoft's VALL-E neural codec language model, can closely reproduce a person's voice from a text prompt and just a few seconds of recorded speech from the actual speaker. Because VALL-E can reproduce tone and intonation and convey emotion, voice clips created with this model are very convincing.

The speed at which audio-based generative AI is advancing poses a serious threat. Audio fakes are already a reality, and the technology behind them is improving rapidly - in recent years we have seen huge advances in computers generating their own dialogue.

Image-Based Generative Artificial Intelligence

Images generated by models such as DALL-E 2 could also pose a serious threat as the technology advances.

An attacker can use generative AI to create a convincing fake image or video showing, for example, a company executive behaving in an inappropriate or illegal manner. Such material can then be used for blackmail or to spread misinformation.
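One way to push back against fabricated imagery is to compare circulating copies against a known-authentic original using perceptual hashing, which tolerates re-encoding and resizing but not substantive edits. A minimal sketch, assuming the Pillow and imagehash packages (file paths are placeholders):

```python
# Sketch: flag images that deviate from a known-authentic original.
# Assumes `pip install pillow imagehash`; the file paths are placeholders.
from PIL import Image
import imagehash

def looks_tampered(original_path: str, candidate_path: str, max_distance: int = 8) -> bool:
    """Compare perceptual hashes; a large Hamming distance suggests manipulation."""
    original = imagehash.phash(Image.open(original_path))
    candidate = imagehash.phash(Image.open(candidate_path))
    return (original - candidate) > max_distance  # subtraction gives Hamming distance

print(looks_tampered("press_photo_original.png", "circulating_copy.png"))
```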

Code-Based Generative Artificial Intelligence

Automated code generation using generative AI models not only allows less experienced attackers to create advanced malware, but also makes it easier to bypass traditional security tools. GitHub Copilot is one example of a code-based generative AI tool.

Generative code models can help attackers hide malicious intent deep inside a benign-looking application, as in a sophisticated Trojan attack - much as information can be hidden inside an image using steganography.
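A simple defensive heuristic against payloads smuggled into benign-looking code is to scan source files for unusually high-entropy string literals, since packed or encoded data stands out statistically from ordinary code and prose. A minimal sketch using only the Python standard library (the 4.5-bit threshold is a rough assumption borrowed from common secret-scanning practice):

```python
# Sketch: flag high-entropy string literals in Python source (stdlib only).
# Packed or encoded payloads tend to have entropy well above normal code or prose.
import ast
import math

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in s."""
    if not s:
        return 0.0
    probs = (s.count(c) / len(s) for c in set(s))
    return -sum(p * math.log2(p) for p in probs)

def suspicious_strings(source: str, threshold: float = 4.5, min_len: int = 20):
    """Yield (line, preview) for string constants that look like encoded payloads."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            value = node.value
            if len(value) >= min_len and shannon_entropy(value) > threshold:
                yield node.lineno, value[:40]

# Illustrative input: a random-looking literal trips the heuristic.
sample = 'key = "qNQm9zXo4vR2bL8wEj5tYc1KfHd7GsUa3PiBnOx0MeZ+"'
for line, preview in suspicious_strings(sample):
    print(line, preview)
```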

Combined Generative AI Security Threats

Attackers can also combine different types of generative AI models to carry out more sophisticated attacks.

For example, an attacker who wants a victim to perform a specific action might use a text generation model to compose a persuasive email, a video generation model to create a fake video, and an audio generation model to create a fake audio clip.

This combined attack can be especially effective because it uses multiple forms of media to create a more convincing and persuasive message.

How to Mitigate Generative AI Security Threats?

Like any security threat, the risks posed by generative AI attacks are likely to evolve, so business preparation is essential. It is worth noting that security technologies are not always able to detect and stop these attacks.

There are currently no known tools that can identify generative AI attacks because “the modus operandi appears to be human”. The emergence of realistic fake videos is especially alarming.

With this in mind, companies need to remain vigilant and adapt to new threats as they emerge, working closely with cybersecurity experts and technology providers, says a senior security researcher at Mainton. He advises using strong authentication measures, such as multi-factor authentication (MFA), to prevent unauthorized access.
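Time-based one-time passwords (TOTP) are one widely deployed MFA factor. A minimal sketch of the verification flow, assuming the pyotp library (any RFC 6238 implementation behaves the same way):

```python
# Sketch of TOTP-based MFA verification (assumes `pip install pyotp`).
import pyotp

# Enrollment: generate and store a per-user secret (usually shown as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user submits the 6-digit code from their authenticator app.
submitted_code = totp.now()  # stand-in for user input in this sketch

# verify() checks the code against the current time window.
print("access granted" if totp.verify(submitted_code) else "access denied")
```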

Underlying all of this must be a strong strategy that addresses the use of AI in business. Introducing artificial intelligence technologies into business structures can be counterproductive if organizations do not pay attention to safety and security.

As the threats posed by generative AI become increasingly complex, experts agree that training and education are key, and they advise businesses to strengthen staff training on the latest attack methods. Humans will always be one of the easiest attack vectors, so organizations should make sure their employees are aware of the new tools attackers use.

