Phil Hausmann, CIC, CRIS | June 16, 2023 | 12 min read

How Cybercriminals Are Weaponizing Artificial Intelligence

The past few years have seen artificial intelligence (AI) surge in popularity among both businesses and individuals. Such technology encompasses machines, computer systems, and other devices that can simulate human intelligence processes. In other words, this technology can perform a variety of cognitive functions typically associated with the human mind, such as observing, learning, reasoning, interacting with its surroundings, problem-solving, and engaging in creative activities. Applications of AI technology are widespread, but some of the most common include computer vision solutions (e.g., drones), natural language processing systems (e.g., chatbots), and predictive and prescriptive analytics engines (e.g., mobile applications).

While this technology can certainly offer benefits in the realm of cybersecurity—streamlining threat detection capabilities, analyzing vast amounts of data, and automating incident response protocols—it also has the potential to be weaponized by cybercriminals. In particular, cybercriminals have begun leveraging AI technology to seek out their targets more easily, launch attacks at greater speeds and in larger volumes, and wreak further havoc amid these attacks.

As such, it’s crucial for businesses to understand the cyber risks associated with this technology and implement strategies to minimize these concerns.

Ways Cybercriminals Can Leverage AI Technology

AI technology can help cybercriminals conduct a range of damaging activities, including the following:

  • Creating and distributing malware—In the past, only the most sophisticated cybercriminals were capable of writing harmful code and deploying malware attacks. However, AI chatbots are now able to generate illicit code in a matter of seconds, permitting cybercriminals with varying levels of technical expertise to launch malware attacks with ease. Although current AI technology tends to produce relatively basic (and often bug-ridden) code, its capabilities will likely continue to advance over time, thus posing more substantial cyberthreats. In addition to writing harmful code, some AI tools can also generate deceptive YouTube videos claiming to be tutorials on how to download certain versions of popular software (e.g., Adobe and Autodesk products), using this content to distribute malware to targets’ devices. Cybercriminals may create their own YouTube accounts to disperse these malicious videos or hack into other popular accounts to post such content. To convince targets of these videos’ authenticity, cybercriminals may further utilize AI technology to add fake likes and comments.
  • Cracking credentials—Many cybercriminals rely on brute-force techniques (systematically guessing candidate passwords until one works) to reveal targets’ passwords and steal their credentials, then use the compromised accounts for fraudulent purposes. These techniques vary in effectiveness and efficiency, but by leveraging AI technology, cybercriminals can bolster their password-cracking success rates, uncovering targets’ credentials at record speeds. In fact, a recent cybersecurity report found that some AI tools are capable of cracking more than half (51%) of common passwords in under a minute and over two-thirds (71%) of such credentials in less than a day. The first sketch following this list illustrates why password length and character variety are the main defenses against this kind of guessing.
  • Deploying social engineering scams—Social engineering consists of cybercriminals using fraudulent forms of communication (e.g., emails, texts, and phone calls) to trick targets into unknowingly sharing sensitive information or downloading harmful software. It consistently ranks among the most prevalent cyberattack methods. Unfortunately, AI technology could make these scams increasingly common by giving cybercriminals the ability to formulate persuasive phishing messages with minimal effort. It can also clean up grammar and spelling errors in human-written copy to make fraudulent messages appear more convincing. According to the latest research from international cybersecurity company Darktrace, social engineering scams involving sophisticated linguistic techniques have already risen by 135%, suggesting an increase in AI-generated communications.
  • Identifying digital vulnerabilities—When hacking into targets’ networks or systems, cybercriminals usually look for software vulnerabilities they can exploit, such as unpatched code or outdated security programs. While various tools can already help identify these vulnerabilities, AI technology could permit cybercriminals to detect a wider range of software flaws, thereby providing additional avenues and entry points for launching attacks. The second sketch following this list shows the defensive counterpart: flagging software that has fallen behind known patches.
  • Reviewing stolen data—Upon stealing sensitive information and confidential records from targets, cybercriminals generally have to sift through this data to determine their next steps—whether that’s selling the information on the dark web, posting it publicly, or demanding a ransom payment in exchange for restoration. This can be a tedious process, especially with larger databases. With AI technology, cybercriminals can analyze this data much faster, allowing them to make quick decisions and shorten the total time it takes to execute their attacks. In turn, targets have less time to identify and defend against such attacks.
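
To make the password-cracking math above concrete, here is a minimal sketch, in Python, of how quickly exhaustive guessing exhausts different password designs. The guess rate is an assumed, illustrative figure; real rates depend heavily on the attacker’s hardware and on how any stolen password hashes were generated.

```python
# Worst-case brute-force time for different password designs.
# RATE is an assumed figure for illustration only.

def keyspace(charset_size: int, length: int) -> int:
    """Total number of candidates a brute-force attack must try."""
    return charset_size ** length

def days_to_exhaust(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case days to try every candidate at a fixed guess rate."""
    return keyspace(charset_size, length) / guesses_per_second / 86_400

RATE = 1e10  # assumed: 10 billion guesses per second

for description, charset_size, length in [
    ("8 characters, lowercase only", 26, 8),
    ("8 characters, 94-symbol mix", 94, 8),
    ("12 characters, 94-symbol mix", 94, 12),
]:
    print(f"{description}: ~{days_to_exhaust(charset_size, length, RATE):,.4f} days")
```

Note that this models pure exhaustive search; AI-assisted tools such as those in the report cited above guess likely human patterns first, which is why common passwords fall even faster than these worst-case figures suggest.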
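
On the defensive side of the vulnerability point above, the core idea of a patch check can be sketched in a few lines. The software inventory and minimum patched versions below are hypothetical placeholders; a real program would pull them from an asset inventory and a vulnerability feed.

```python
# Flag installed software that falls below a known-patched version.
# All product names and version numbers here are hypothetical examples.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '1.25.3' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

installed = {          # hypothetical asset inventory
    "exampleserver": "1.1.1",
    "examplelib": "2.25.3",
}
minimum_patched = {    # hypothetical "fixed in" versions from advisories
    "exampleserver": "3.0.13",
    "examplelib": "2.25.3",
}

for name, version in installed.items():
    floor = minimum_patched.get(name)
    if floor and parse_version(version) < parse_version(floor):
        print(f"FLAG: {name} {version} predates patched version {floor}")
```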


Tips to Protect Against Weaponized AI Technology

Businesses should consider the following measures to mitigate their risk of experiencing cyberattacks and related losses from weaponized AI technology:

  • Uphold proper cyber hygiene. Such hygiene refers to habitual practices that promote the safe handling of critical workplace information and connected devices. These practices can help keep networks and data protected from various AI-driven cyberthreats. Here are some key components of cyber hygiene for businesses to keep in mind:
    • Requiring employees to use strong passwords (those containing at least 12 characters and a mix of uppercase and lowercase letters, symbols, and numbers) and leverage multifactor authentication across workplace accounts; a simple password-policy check appears in the first sketch after these tips
    • Backing up essential business data in a separate and secure location (e.g., an external hard drive or the cloud) on a regular basis
    • Equipping workplace networks and systems with firewalls, antivirus programs, and other security software
    • Providing employees with routine cybersecurity training to educate them on the latest digital exposures, attack prevention measures, and response protocols
  • Engage in network monitoring. Network monitoring involves utilizing automated threat detection technology to continuously scan a business’s digital ecosystem for possible weaknesses or suspicious activities. Such technology typically sends alerts when security issues arise, allowing businesses to detect and respond to incidents as quickly as possible. Since time is of the essence when handling AI-related threats, network monitoring is a vital practice; the second sketch after these tips shows the core alerting idea in miniature.
  • Have a plan. Creating cyber incident response plans can help businesses ensure they have necessary protocols in place when cyberattacks occur, thus keeping related damages at a minimum. These plans should be well-documented and practiced regularly and should address multiple cyberattack scenarios (including those stemming from AI technology).
  • Purchase coverage. Lastly, it’s imperative for businesses to secure adequate insurance and financially safeguard themselves from losses that may arise from the weaponization of AI technology. It’s best for businesses to consult trusted insurance professionals to discuss specific coverage needs.
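
As a concrete illustration of the password guidance in the first tip, here is a minimal sketch of a policy check. The 12-character threshold and character classes mirror the bullet above; organizations should adjust both to their own standards.

```python
import string

def policy_failures(password: str) -> list[str]:
    """Return the rules a candidate password fails (an empty list means it passes)."""
    failures = []
    if len(password) < 12:
        failures.append("shorter than 12 characters")
    if not any(c.isupper() for c in password):
        failures.append("no uppercase letter")
    if not any(c.islower() for c in password):
        failures.append("no lowercase letter")
    if not any(c.isdigit() for c in password):
        failures.append("no number")
    if not any(c in string.punctuation for c in password):
        failures.append("no symbol")
    return failures

print(policy_failures("Summer2023"))             # fails: length, symbol
print(policy_failures("c0rrect-H0rse-Battery!")) # passes: []
```

A check like this belongs alongside, not in place of, multifactor authentication: even a strong password can be phished, which is why the tip pairs the two.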
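
To show the alerting idea behind network monitoring in miniature, the sketch below watches an authentication log for bursts of failed logins from a single source. The log path, line format (Linux-style sshd entries), and alert threshold are all assumptions; commercial monitoring platforms ingest far more signal than this.

```python
import re
from collections import Counter

# Assumed Linux-style sshd log line, e.g.:
#   "Failed password for admin from 203.0.113.7 port 22 ssh2"
FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5  # assumed alert threshold

def find_suspicious_sources(lines) -> list[str]:
    """Return IP addresses with THRESHOLD or more failed login attempts."""
    attempts = Counter()
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            attempts[match.group(1)] += 1
    return [ip for ip, count in attempts.items() if count >= THRESHOLD]

with open("/var/log/auth.log") as log:  # path varies by system
    for ip in find_suspicious_sources(log):
        print(f"ALERT: repeated failed logins from {ip}")
```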

Looking forward, AI technology is likely to contribute to rising cyberattack frequency and severity. By staying informed on the latest AI-related developments and taking steps to protect against its weaponization, businesses can maintain secure operations and minimize associated cyberthreats.

As a benefit to our clients, we are pleased to offer Cyber JumpStart by Arctic Wolf – a secure portal to identify security gaps, improve security posture, and ultimately pursue insurance terms that best suit your organization. Click here to learn more about the Cyber JumpStart program and how it can protect your business from cyber threats. 


Phil Hausmann, CIC, CRIS

Phil joined Hausmann Group in 2010 as a Property & Casualty Consultant and is a member of the Construction Industry Group. He is a graduate of Marquette University and has been immersed in the insurance industry ever since graduation. He spent the first four years of his career on the service side of insurance in the Excess and Surplus lines market. Phil believes in a hands-on approach to risk management. He prides himself on getting to know his clients and their business goals so thoroughly that he becomes an integral part of their team. Outside of the office, Phil is active in the community as a board member of the YMCA of Dane County. He and his wife, Ellie, have two young boys that keep them on the go. Together, they enjoy traveling across the globe and exploring new cultures.
