AI and Cyber Security

AI is advancing at a pace that doubles roughly every six to ten months, creating new cyber security challenges as it goes. From ChatGPT and Bard to facial recognition, cyber security professionals have their hands full, and there aren’t enough hands.

Given the seemingly negative outlook on AI, cyber security, Canada’s job shortage, and the future in general, it’s easy to view new technology with a pessimistic eye. While businesses should exercise caution around these new developments, it isn’t all bad news. Below, we dig into the challenges and opportunities AI presents for cyber security.


ChatGPT and generative learning models can be used for the worst

According to Forbes, generative AI tools like ChatGPT, Bard, and other forms of language modelling could affect how information is presented and made available by search engines. How? By automating the crafting of convincing but misleading text to use in influence operations.

And that’s just the beginning.

99 problems with ChatGPT

Look, if you’ve read our other articles, you’ve heard us rag on ChatGPT. It goes without saying that AI can be helpful, from meal planning to spitting out ten different subject lines for that email you’re stuck writing. However, the architecture of this and other AI models poses significant security concerns.

While OpenAI maintains that safety has been built into its AI tools and that its founders will do everything in their power to prevent misuse of the product, the tech industry and cyber security experts are worried. ChatGPT is unwittingly handing power to people with little coding knowledge and a strong desire to cause harm: low-level hackers can use the tool to generate basic code accurate enough to pull off a minor attack.

Earlier this year, BlackBerry released a survey showing that 74% of the IT decision-makers surveyed are concerned about ChatGPT’s potential threat to cyber security. Of those, 51% believe a successful cyber attack will be credited to ChatGPT within the year. While opinions differ on what exactly this will look like, many respondents ranked ChatGPT’s ability to write believable phishing emails, and its potential to spread misinformation, as the number one global concern.

On top of that, the Canadian Centre for Cyber Security released a publication highlighting some of the risks of generative AI like ChatGPT:

  • Creating realistic content, making it harder to identify phishing emails or scams. Threat actors can also write targeted spear phishing attacks with a higher level of sophistication.
  • Users may unintentionally share private information that malicious actors can then use.
  • Cyber criminals can bypass generative AI restrictions to create malware.

Other risks include:

  • Creating malware for use in targeted cyber attacks.
  • Spreading disinformation in fraudulent campaigns against individuals and organizations.
  • Deliberately or accidentally introducing buggy code in software development.
  • Injecting malicious code into datasets, increasing the likelihood of large-scale supply-chain attacks.
  • Stealing corporate data faster and in large quantities.
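Part of what makes AI-written phishing so dangerous is that it evades the simple pattern matching many filters still rely on. The sketch below is a hypothetical, deliberately naive keyword filter (the phrase list and function are our own illustration, not any vendor’s product): a canned template trips it, while a fluent, personalized lure sails straight through.

```python
# Hypothetical sketch: a naive keyword-based phishing filter.
# Real mail gateways use far richer signals (headers, URL reputation, ML models).

SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here immediately",
    "password will expire",
]

def looks_like_phishing(body: str) -> bool:
    """Flag an email body if it contains any canned phishing phrase."""
    text = body.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

# A classic template trips the filter...
print(looks_like_phishing("URGENT ACTION REQUIRED: verify your account now"))  # True

# ...but a fluent, AI-written rewrite of the same lure does not.
print(looks_like_phishing(
    "Hi Dana, finance flagged a mismatch on invoice 4417. "
    "Could you confirm the updated banking details before Friday?"
))  # False
```

The second message carries the same malicious intent, but because the wording is natural and specific, keyword rules have nothing to latch onto, which is exactly the gap generative AI exploits.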

Facial recognition, AI, and navigating murky waters

No matter how you feel about it, facial recognition technology (FRT) is becoming increasingly integrated into our everyday lives, from the mundane task of unlocking your iPhone to its sometimes dubious use in police surveillance. The global facial recognition market is forecasted to reach US$12.67 billion by 2028.

While some uses of FRT may seem entirely private, like Google categorizing your photos or a company storing security camera footage, much of the facial data captured could end up broadcast to the world.

Images are now captured that aren’t just stored locally but are potentially shared publicly. Not only that, but malicious actors can easily scrape facial information from databases and do whatever they want with it. For example, China is leveraging FRT to judge citizens’ behaviour and adjust each person’s social credit score. Some of the main concerns with the fast development of FRT are:

  • Lack of consent.
  • Unencrypted faces. Faces are becoming easier to capture at longer distances, and unlike a password, facial data can’t simply be changed or revoked once it leaks.
  • Lack of transparency. Using FRT without an individual’s consent is a huge privacy concern.
  • Technical vulnerabilities. Attackers can create masks (aka spoofs) from digital imagery alone, and captured facial data provides easy raw material for anyone looking to use deepfake technology.
  • Inaccuracy. FRT can misidentify someone, leading to wrongful arrests and other negative outcomes. Racial minorities are more likely to be misidentified, adding more strain to an already vulnerable group.

Fortunately, some governing bodies are trying to curb FRT’s encroaching powers, adding accountability and privacy to the technology. Pittsburgh and the state of Virginia require prior legislative approval to deploy FRT, and both Massachusetts and Utah require law enforcement to submit a written request before conducting a facial recognition search.

It all comes down to consent, privacy, and transparency. Companies should be frank when enrolling customers in FRT for verification purposes, and enterprises should give consumers detailed notice about how FRT templates were developed and how their data will be used, shared, or destroyed.

Above all, the implementation of FRT needs to come with hefty cyber security safeguards. For that, we need more experts.


Cyber security talent gap creates vulnerabilities

Cyber crime in Canada is reaching crisis levels; the government estimates it causes more than $3 billion in damage each year. Any company is vulnerable, and businesses big and small are scrambling to find qualified professionals to strengthen their digital defences.

With the demand for cyber security professionals doubling yearly, the immediate need calls for a better approach to training cyber talent. “Education and skilling-focused programs and tools like Explore and Career Ready are helping to equip and empower the next generation of digital leaders with the essential skills our economy needs to thrive in the future,” says Kevin Magee, Chief Security Officer of Microsoft Canada. Programs like Lighthouse Labs’ Cyber Security Program equip individuals for the workforce in a matter of months, helping to close the cyber security talent gap. We think Kevin would approve.


AI: a partner, not an adversary, for cyber security

Another way to fill in the missing cyber security puzzle pieces is to leverage the artificial intelligence already available. While the industry awaits more qualified professionals, AI can shoulder some of the load.

Computers are really good at the one thing most humans hate: doing the same menial tasks repeatedly and consistently. That frees humans to handle the more complicated pattern matching that demands nuance and curiosity. Handing automation tasks over to AI could also create new positions focused on oversight and decision-making, helping to close the talent gap.
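As a concrete (and entirely hypothetical) illustration of the kind of menial task worth automating, the sketch below counts failed logins per source IP in auth logs and flags likely brute-force attempts. The log format and `flag_brute_force` function are our own invention for illustration, not a real product’s API; the point is that a machine happily rereads thousands of log lines so an analyst doesn’t have to.

```python
# Hypothetical sketch: automating a repetitive SOC triage task.
from collections import Counter

def flag_brute_force(log_lines, threshold=3):
    """Return source IPs with `threshold` or more failed login attempts."""
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            # Assume each log line ends with "from <ip>" (illustrative format).
            ip = line.rsplit("from ", 1)[-1].strip()
            failures[ip] += 1
    return sorted(ip for ip, n in failures.items() if n >= threshold)

logs = [
    "2024-05-01 09:00:01 FAILED LOGIN user=admin from 203.0.113.7",
    "2024-05-01 09:00:02 FAILED LOGIN user=admin from 203.0.113.7",
    "2024-05-01 09:00:03 FAILED LOGIN user=root from 203.0.113.7",
    "2024-05-01 09:05:00 LOGIN OK user=jsmith from 198.51.100.4",
    "2024-05-01 09:06:00 FAILED LOGIN user=jsmith from 198.51.100.4",
]
print(flag_brute_force(logs))  # ['203.0.113.7']
```

The script surfaces only the repeat offender, and a human steps in for the judgment call: block the IP, escalate, or dismiss it as noise.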

While AI may seem like a negative for jobseekers, cyber security has always been more than just data crunching; the field has always required a broad range of skills, some of which can now be relegated to robots. Skills that remain firmly in the human sphere include:

  • Expert problem-solving skills. Cyber security is constantly changing, so creative thinking is critical.
  • Communication skills. You’ll work alongside technical and non-technical teams, so you’ll need to translate complex, jargon-filled ideas into plain language.
  • Researching. The best of the best stay on top of developments and trends - including AI changes.
  • Business know-how. Depending on your role, you might need to explain recommendations that include business needs.
  • Technical knowledge and attention to detail. You must be able to install and maintain computer systems. Your technical understanding is something that AI can’t replace.

All in all, cyber security hopefuls and current employees shouldn’t be worried about AI stealing their jobs; rather, their focus should be on stopping malicious actors from using AI for their own gain. Beyond that, learning which tasks AI can automate can save you time that you can use to perform more complex technical and problem-solving tasks.

At Lighthouse Labs, we exist to make tech-enabled change an opportunity for all. We believe in the power of artificial intelligence (AI) to help people and make our world a better place. We recognize that the responsible and ethical development and use of AI is paramount to its success in driving progress.