4 types of AI threat causing global disruption

Informa Tech’s AI Summit USA recently released a collaborative report entitled How Gen AI Is Revolutionising Threat Detection in Cybersecurity.

It explores major developments in both AI-powered threats and AI-powered security – because the two are developing side by side, each trying to outdo the other and gain an advantage. If cyber criminals gain that advantage, the consequences are potentially devastating; so this is a critical time for cybersecurity professionals and industry leaders to step up to the challenge.

Here, we’ll cover four types of AI threats that are causing disruption around the world right now. 

1. Social engineering attacks are here to stay

Phishing attacks targeted organisations and individuals around the world with significant success even before GenAI arrived on the scene. 

The FBI’s 2023 Internet Crime Report revealed that the total cost of cybercrime in the US rose to $12.5 billion in 2023, with 880,418 complaints logged. A staggering 298,878 (34%) of these were specifically related to phishing. 

According to Santander bank, 91% of cyber attacks start with a phishing email. 

And with GenAI, phishing campaigns can reach more victims and succeed more often. In part, this is because the collection and analysis of vast amounts of personal data allow GenAI tools to generate personalised phishing emails that are more effective at deceiving the recipient.

2. Deepfakes make false information feel real 

A 2023 global survey by iProov found that 71% of people don’t know what deepfakes are. In spite of this, a surprising 57% said they think they could recognise a deepfake if they saw one.

Deepfake content is on the rise – and it’s highly effective at tricking recipients into believing they’re seeing or hearing a genuine recording.

In 2022, Brazilian crypto exchange BlueBenx fell victim to a costly deepfake scam – criminals impersonated Patrick Hillmann, the CCO of Binance, and used his likeness on a Zoom call to persuade BlueBenx to send $200,000 and 25 million BNX tokens to their accounts.

AI-powered deepfakes aren’t just happening on Zoom. Scammers have used deepfake YouTube videos to distribute stealer malware (including Raccoon, RedLine, and Vidar); and deepfake audio, often built from recordings of voices the victim trusts, is increasingly used to build credibility over the telephone.

3. Automated malware aids antivirus evasion 

Threat actors are using AI to generate new malware variants very quickly. They use AI to analyse existing malware code and create slight variants that are different enough to evade the signature-based detection used by antivirus software.
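
As a rough illustration of why small code changes defeat exact-match signatures, the sketch below hashes two near-identical payloads (the byte strings and names are purely hypothetical, not real malware): a one-character difference produces a completely different hash, so a database of known-bad hashes no longer flags the variant.

```python
import hashlib


def sha256(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


# Stand-in for a hash-based antivirus signature database (hypothetical bytes,
# not real malware).
original_sample = b"MZ" + b"payload-v1" * 100
known_bad_hashes = {sha256(original_sample)}

# A "variant" that differs from the original by a single character.
variant_sample = b"MZ" + b"payload-v2" * 100

print("original flagged:", sha256(original_sample) in known_bad_hashes)  # True
print("variant flagged: ", sha256(variant_sample) in known_bad_hashes)   # False

# The hashes share nothing, even though the files are almost identical.
print("original:", sha256(original_sample)[:16], "...")
print("variant: ", sha256(variant_sample)[:16], "...")
```

Real antivirus engines use far more than exact hashes, but the same brittleness applies to any signature keyed on specific byte patterns – which is what makes rapid, AI-assisted variant generation so effective.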

Cyber criminals are also using AI to observe and analyse how malware behaves in a sandbox, and to use that information to develop techniques for avoiding detection in those environments.

4. Threat actors weaponise AI systems  

There’s growing potential for cyber criminals to manipulate AI-powered systems themselves – turning AI against its users to exploit or harm victims. Vulnerable systems could include autonomous vehicles, chatbots, and critical national infrastructure, so the scope for serious harm is considerable.

In a recent report titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, researchers from the US National Institute of Standards and Technology (NIST) examined four types of attack that must be considered when deploying AI systems:

  • Evasion attacks: these attempt to alter an input after an AI system is deployed in order to change how the system responds to it – for example, adding markings to road signs so autonomous vehicles misinterpret them (a minimal code sketch of this idea follows the list).
  • Poisoning attacks: these involve introducing corrupted data during the GenAI training phase, so the system becomes biased or produces incorrect outcomes.
  • Privacy attacks: occurring during AI deployment, these are attempts to gather sensitive information about either the AI itself or the data it was trained on, so that information can be misused.
  • Abuse attacks: these differ from poisoning attacks because the threat actor feeds the AI incorrect information from a legitimate (but compromised) source – for example, by inserting false information into a web page or online document.
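
The evasion category is the easiest to illustrate in code. Below is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression model; the weights, input features, and perturbation size are all invented for illustration, but the principle – a small, targeted nudge to the input flips the model’s decision – is the same one behind tampering with road signs to confuse an autonomous vehicle.

```python
import numpy as np


def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))


# Hypothetical "trained" logistic-regression classifier: weights, bias, and
# input values are invented for illustration only.
w = np.array([1.2, -0.8, 0.5, 2.0])
b = -0.3


def predict(x: np.ndarray) -> float:
    """Probability that the input belongs to class 1 (e.g. 'stop sign')."""
    return float(sigmoid(float(w @ x) + b))


# A clean input the model classifies correctly as class 1.
x = np.array([0.6, 0.2, 0.3, 0.5])
print("clean score:    ", round(predict(x), 3))  # ~0.80, above the 0.5 threshold

# Fast Gradient Sign Method (FGSM): nudge every feature by epsilon in the
# direction that increases the loss for the true label. For logistic
# regression with true label y=1, the gradient of the cross-entropy loss
# with respect to x is (p - 1) * w, so its sign is simply -sign(w).
epsilon = 0.35
x_adv = x + epsilon * -np.sign(w)

print("perturbed score:", round(predict(x_adv), 3))             # ~0.46, now misclassified
print("max change to any feature:", float(np.max(np.abs(x_adv - x))))  # 0.35
```

Production models are far larger, but the attack scales: adversaries only need gradient information (or enough queries to estimate it) to craft inputs the model confidently gets wrong.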
Join us at MEA 2024 and discover how to improve your organisation’s cyber resilience.
