
AI and Cyberattacks: Fears Real and Imagined

Updated 07/19/2023



"It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded." Stephen Hawking

In terms of computing, AI, or artificial intelligence, is a machine's ability to adapt, problem-solve, improvise, and learn. Simple enough, but for the past several months we've heard everything from business leaders eager to harness the power of AI to people who feel that humanity will not survive this new technology.

It's true that AI poses real threats to you and your organization, and those threats are unfolding in real time. You should be concerned. The broader philosophical and economic implications also deserve wide discussion, but they are beyond the scope of this article.

This article will address what we see as the most pressing threat to date, along with suggestions on how to protect yourself and your organization.

Weak, General, and Super AI

You have likely been using AI for years now. Common examples of everyday AI include assistants such as Alexa, Siri, and Google Assistant. These possess Artificial Narrow Intelligence (ANI), or "weak AI," because they were designed to carry out a specific task and lack general intelligence.

Artificial General Intelligence (AGI), or "strong AI," is what most people picture when they think of AI, but as of this writing, AGI is still a few years away. AGI would be creative, flexible, and might even exhibit self-awareness or consciousness. An AGI would be a machine that can "…reason, plan, learn, understand natural language, and solve problems in a way that is similar to human beings." (citation).

Artificial Super Intelligence (ASI) is theoretical and does not yet exist. The idea is that ASI would surpass human intelligence and would even have an inner life characterized by emotions, beliefs, and desires.

AI: Threats That Are Real

Unfortunately, as with every new technology that emerges, threat actors are finding ways to exploit it. This time, hackers are using AI to improve their phishing and vishing techniques.

As a quick recap, phishing is the use of a fraudulent email or message to steal personal or otherwise confidential information or to install malware on a device.

Vishing is similar, except the fraudster tries to obtain sensitive information from a victim over a voice call. Cheap, easy, and effective AI tools for creating authentic-sounding copies of real people's voices are already publicly available, and criminals are using them.

With phishing, a threat actor crafts an email using publicly available information found online and pretends that it comes from a legitimate source. Because many of these criminals operate from foreign countries, their emails often contain misspelled words, incorrect grammar, and typos, all red flags for spotting a fake email.

However, hackers are now using AI tools to gather that information and write the emails for them, making these common errors a thing of the past.

New impersonation technologies are also making it easier to execute a successful vishing attack.

Voice synthesis software analyzes a person's voice, learns their tone, pitch, and accent, and can then produce speech that sounds almost identical to the original.

A victim of an attack may be unable to tell the difference between their CEO and a threat actor and can be persuaded, for example, to set up a fraudulent wire transfer or to reveal confidential information.

Even the FBI has released advisories warning that deepfake audio and video are on the rise and that users of virtual meeting platforms should be wary.

Cybersecurity companies already use AI-enabled software for endpoint security; they are now considering how best to use AI to fight AI, particularly against the rapid deployment of zero-day malware and other types of attacks.

How to Protect Yourself and Your Organization

Because these are new ways of delivering old attack techniques, the mitigation strategies have largely stayed the same.

Train your employees to remain vigilant and skeptical of unsolicited phone calls and emails, especially those requesting sensitive information.

Verify authenticity through another channel, such as a passphrase that only you and the other person would know.

When a request for payment is made, even from a known vendor, always verify the request by calling the number that you have on file. NEVER respond to the email directly or call the number in the email.

Training workers about how to spot phishing and vishing attempts and other cybersecurity awareness topics should remain a priority.

References:

  1. https://www.forbes.com/sites/emilsayegh/2023/04/11/almost-human-the-threat-of-ai-powered-phishing-attacks/?sh=6a2b27b63bc9
  2. https://darktrace.com/blog/off-the-hook-how-ai-catches-phishing-emails-even-if-we-take-the-bait
  3. https://securityboulevard.com/2023/04/ai-impersonation-and-vishing-an-overview-and-preventative-measures/
  4. https://www.techtarget.com/searchsecurity/news/365532243/Vishing-attacks-increasing-but-AIs-role-still-unclear
  5. https://www.zdnet.com/article/what-is-ai-heres-everything-you-need-to-know-about-artificial-intelligence/
  6. https://cybernews.com/editorial/three-types-artificial-intelligence-explained/
  7. https://www.bbc.com/future/article/20230717-what-you-should-know-about-artificial-intelligence-from-a-z
  8. https://www.dignited.com/108411/artificial-intelligence-ai-vs-artificial-general-intelligence-agi/