Almost overnight, generative artificial intelligence has become the new darling of business and consumer computing. Tools that use GenAI can access your data and then use that access to make tasks easier and faster, and to support decision making and to gain insight into those tasks.
But according to Adir Gruss, co-founder and CTO of Aim Security, that access can also come with some significant security risks. Gruss said that those risks are directly related to the way GenAI functions.
“GenAI has democratized the use of AI and initiated a wave of consumer applications that allow the user to generate content, ask knowledge questions and more, almost like talking with another human being,” explained Gruss. “Unlike AI, GenAI can use any type or format of data and can generate any type of content. It’s useful – but also very unpredictable.”
Gruss said that those characteristics make it easy for attackers because GenAI technology creates enough flexibility to find vulnerabilities.
Gruss predicts that GenAI will become even more widespread than it already is. “As a consumer, GenAI tools learn off of your data and adapt to your preferences, providing an entirely new and wholly personalized user experience whether in content consumption, product recommendation and services.”
Gruss said that GenAI will introduce, “significant and highly unique security challenges, particularly concerning personal privacy, security, and a range of ethical issues.” He said that the GenAI models create new attack vectors that are unique to GenAI models. They include prompt injection which can bypass built-in safety measures, such as, “When an attacker manipulates the output of an LLM (large language model) or GenAI chatbot to gain unauthorized access or to bypass security guardrails,” he explained.
Gruss said that there are also important privacy risks as well. He said that recent research has shown that some of Chat GPT’s training data can be extracted, which means that if you used your data, it can be exposed. He said that this risk is exacerbated by GenAI’s ability to create detailed profiles of users based on their interactions and preferences.
Complicating the whole equation are legal issues. “It’s important to note that some GenAI outputs might be governed by a restrictive copyright license, such as the General Public License (GPL), which could have implications for how the output can be used or distributed,” Gruss said.
Unfortunately, most current security approaches aren’t really built to handle GenAI, which means that in the near term, users of products based on generative AI will have to take steps on their own. Those steps include limiting what information is provided for training GenAI so that it can’t be extracted, as well as making sure that some sensitive data isn’t available for use by applications that use GenAI.
Gruss said that the risks involved with GenAI are enhanced because of the way GenAI products are marketed. “The saying ‘If the product is free, you are the product’ is particularly relevant in the context of AI,” he said.
“People learn quickly how AI can help them, often for free. That’s the good,” Gruss said. “But what everyone also needs to know is that many GenAI services keep your data on file, which may include sensitive and personal details you innocently entered into the GenAI tool. That’s potentially the bad.”
Gruss also said that GenAI tools have other unique challenges, including plagiarism. “GenAI output should always be consumed and used with caution, ensuring that it is not unethical or illegal, with extra validation when used for commercial purposes,” he said. The plagiarism happens when a GenAI application uses information from its training data, which may have come from protected sources. Recently, The New York Times updated its terms of service to prohibit AI using its content.
What this means to users at all levels is that while GenAI can make accessing your data easier and more effective, it carries with it a new set of challenges and risks that must be taken into account before it can be used.