November 27, 2023 by Cybersixgill

2024 Predictions: AI Will Be Used as an Attack Tool and Target

In the second installment of our blog series discussing 2024 predictions, we explore how threat actors will put artificial intelligence (AI) to adversarial use, from automating large-scale cyberattacks and creating human-like phishing email campaigns to developing malicious content targeting companies, employees, and customers.

While AI’s potential is exciting, the technology has also become a game-changer for cybercriminals, enabling them to mount attacks faster and at a grander scale. Its misuse spans industries, leaving organizations vulnerable to increasingly sophisticated attacks such as speech synthesis impersonating people and companies, spam emails built from information pulled from social media, and even exploitation of AI systems themselves.

In our recently released Cybersecurity in 2024: Predicting the Next Generation of Threats and Strategies, we reveal how AI will shape the industry, influence attackers, and change security strategies. In the coming year, our experts predict that AI will be used as both an attack tool and a target: black hat hackers will use AI to improve the effectiveness of their attacks, while legitimate AI deployments become a prominent attack vector. For instance, AI will help attackers compromise users’ credentials, which can then be sold in underground markets. Additionally, attacks on AI models themselves, such as data poisoning and vulnerability exploitation, will gain momentum.
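To make the data-poisoning threat concrete, below is a minimal sketch of a label-flipping attack against a toy classifier. The synthetic dataset, logistic-regression model, and scikit-learn tooling are illustrative assumptions rather than anything described in the report; the point is simply that corrupting even a modest fraction of training labels can measurably degrade a model trained on that data.

```python
# Minimal label-flipping data-poisoning sketch (illustrative assumptions:
# a synthetic dataset and a scikit-learn logistic regression model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on a copy of the data with `flip_fraction` of labels flipped,
    then score on the clean, untouched test set."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the attacker flips these labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"labels flipped: {frac:>4.0%}  test accuracy: "
          f"{accuracy_with_poisoning(frac):.3f}")
```

Running the sketch shows test accuracy falling as the flipped fraction grows, which is why defenses focus on vetting and monitoring the provenance of training data.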

Of particular concern is the rise of AI-enabled social engineering techniques such as pretexting. Why? Generative AI can quickly and convincingly mimic the writing styles of legitimate organizations and individuals, making phishing emails seem more credible. Threat actors can also now pretext in multiple languages, targeting a larger pool of victims across an expanding attack surface.

Across the globe, governments, technology companies, and industry thought leaders are growing increasingly concerned over the uncertainty and risks presented by AI’s many unknowns. For instance, Europol, the European Union’s law enforcement agency, released a report earlier this year on the impact of large language models (LLMs) such as ChatGPT on law enforcement. The report stated that ChatGPT makes it possible for those with limited English proficiency to realistically impersonate native English-speaking organizations and individuals.

As AI models become more sophisticated and the call for regulation gathers momentum, technology companies and governments must work together to minimize AI’s risks. One example of this much-needed collaboration took place at this year’s DEF CON, which hosted the largest red-teaming exercise yet conducted against a group of AI models. Supported by the White House and technology leaders including OpenAI, Google, and Meta, the event had hackers use generative AI to make LLMs produce discriminatory statements, false information, and more. Additionally, in November 2023, the Federal Trade Commission (FTC) announced the Voice Cloning Challenge to promote the development of approaches for preventing, monitoring, and evaluating the malicious use of voice cloning technology. The results of this type of collaboration can significantly enhance AI safety and help solve AI security challenges.

Want to learn more about Cybersixgill’s insights and predictions for 2024 to keep your assets and stakeholders safe? Download Cybersecurity in 2024: Predicting the Next Generation of Threats and Strategies.
