October is Cybersecurity Awareness Month, when we are all reminded to update the antivirus software on our devices, use strong passwords and multifactor authentication, and be extra careful about email phishing scams.

However, one area where cybersecurity seems to be lacking is a general understanding of the security and privacy risks associated with using AI on the job.

Survey Shows AI Training Is Lacking While AI Fears Run High

New research from the National Cybersecurity Alliance finds a surprising — and troubling — lack of awareness among surveyed workers regarding AI pitfalls.

  • 55% of surveyed participants who use AI on the job said they have not received any training on AI's risks.
  • Meanwhile, 65% of respondents expressed worry about some type of AI-related cybercrime.
  • Yet despite that perceived threat, 38% of employees (almost four in ten) admitted to sharing confidential work information with an AI tool without their employer knowing about it.
  • The highest incidences of unauthorized sharing occurred among younger workers: Gen Z (46%) and Millennials (43%).

“Whenever I talk to people about AI, they don’t understand that the [AI] models are still learning and they don’t understand that they’re contributing to that, whether they know it or not,” explained Lisa Plaggemier, executive director of the NCA, during a Zoom call.

October is the busiest month of the year for Plaggemier, who delivers dozens of talks to organizations to raise awareness about cybersecurity and AI use.

“I think that the average citizen still thinks about these AI tools as if they’re Google’s search function. We think about what we’re getting out, but we don’t think about what we’re giving up, what we’re putting in. I don’t think a lot of people understand that when they put information into an AI, that it actually goes into the learning, the training model data lake.”

Training Alone Is Not Enough; Effective Training Is Key

Plaggemier said that while many financial and high-tech organizations have policies and procedures in place, the overwhelming majority of businesses do not.

“I’ve seen financial services that might be completely locked down. If it’s a tech company, they might announce AI tools that they decided are safe for use in their environment. Then there’s a bunch of companies that are somewhere in the middle, and there’s still a bunch of organizations that haven’t figured out their AI policy at all,” she said.

She noted that the NCA offers talks and trainings to help trigger discussions around AI and cybersecurity, but sometimes that’s not enough.

“Even companies that are doing training, I don’t think are locking enough down. I talked to somebody who works for a large organization in the Fortune 100. He had just joined that company, and they had completed their cybersecurity training — and it was really explicit about AI. And then he walked in and found a bunch of developers entering all their code in an AI model — in direct violation of the policy and training they had gone through. Even sophisticated technical employees don’t always connect the dots,” Plaggemier stated.

AI Training In The Workplace Starts With Leadership

She noted that individual workers need to adhere to the AI policies and procedures their employer has put in place, but businesses need to establish those guidelines first. “I really think that the onus is on the employer: figure out what your policies are, and figure out how you are going to take advantage of this technology and still protect yourself from the risks at the same time,” concluded Plaggemier.
