Artificial Intelligence (AI) has rapidly gained popularity in recent years, revolutionizing customer service and user interactions. However, the rise of AI-powered technologies also introduces potential risks and challenges to cybersecurity. This article explores those risks and offers insights into how they can be mitigated from an enterprise perspective.
But first, let's clarify what Artificial Intelligence (AI) is all about, as several interconnected technologies come to mind; although these concepts are related, they differ in scope and methods.
At the top, Artificial Intelligence (AI) refers to the broad field of computer science that aims to create intelligent machines capable of mimicking human cognitive abilities. It encompasses various techniques, including Machine Learning (ML) and Deep Learning (DL). ML is a subset of AI that focuses on algorithms and statistical models enabling computers to learn from data and make predictions or decisions without being explicitly programmed. DL is a further subset of ML that specifically employs artificial neural networks, inspired by the structure and function of the human brain, to learn and extract complex patterns and representations from vast amounts of data, leading to advanced capabilities such as image and speech recognition. In essence, AI is the overarching concept, ML is a technique within AI that enables learning from data, and DL is a specific form of ML that uses deep neural networks for sophisticated pattern recognition.
The arrival of a new, AI-powered chatbot has changed the whole landscape: ChatGPT.
An AI chatbot (also called an AI writer) is a type of AI-powered program that can generate written content from a user's input prompt. AI chatbots are capable of writing anything from a rap song to an essay upon a user's request. What each chatbot is specifically able to write about depends on its individual capabilities, including whether or not it is connected to a search engine.
Launched back in November 2022, ChatGPT is a conversational AI chatbot developed by OpenAI and based on the Generative Pre-trained Transformer (GPT) language model, highly skilled not only at understanding but also at generating human-like text. It was built using Deep Learning (DL) techniques and trained on a vast amount of data from the internet, enabling it to comprehend and respond to a wide range of topics with coherent and contextually appropriate responses. As a cloud-based AI model, it can be accessed by users worldwide through the internet. By leveraging its powerful language capabilities, developers can create chatbots, virtual assistants, and other AI-powered solutions that enhance productivity, streamline customer support, and provide personalized experiences, as the short sketch below illustrates.
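As a small, hedged illustration of how developers build on such models, here is a minimal sketch of a chat request using OpenAI's Python client; the model name and the prompts are assumptions chosen for illustration, not a recommendation.

```python
# A minimal sketch of calling a hosted chat model through OpenAI's Python
# client (pip install openai). It reads the OPENAI_API_KEY environment
# variable; the model name below is an assumption and may change over time.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; substitute any available model
    messages=[
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "Summarize our return policy in two sentences."},
    ],
)
print(response.choices[0].message.content)
```

Note that every prompt sent this way leaves the organization's perimeter, which is exactly the data-exposure concern discussed below.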
With its unprecedented adoption by the public, currently holding the record for the fastest-growing user base of any technology, ChatGPT has largely contributed to the democratization of AI technologies, bringing their benefits to the masses and making them tangible and finally "real", rather than something exclusive, reserved for an "elite".
However, the use of AI chatbots like ChatGPT by employees can expose an organization to real data privacy and cybersecurity risks.
AI chatbots typically rely on vast amounts of data to provide accurate and personalized responses. However, the collection, storage, and processing of user data raise concerns regarding data privacy and confidentiality, as well as compliance with current data regulations. This includes asking employees for personal information or identifiers during interactions. There is a real risk that sensitive or personally identifiable information (PII) shared with a chatbot could be mishandled or accessed by unauthorized parties; organizations must understand that any data uploaded to an AI chatbot is simply outside of their control! Once "in the system", there is no visibility whatsoever as to where (in which country) that information will be stored, how it will be used, for what purpose, or in whose hands it might eventually end up.
Organizations whose employees use AI chatbots must ensure that this use complies with existing data protection and privacy regulations, for example the General Data Protection Regulation (GDPR) in the EU and the European Economic Area (EEA), or the California Consumer Privacy Act of 2018 (CCPA) in the state of California. Failure to comply with these regulations can result in legal and financial consequences.
Like any content available on the Internet, AI-generated outputs, based on the mix of licensed data, data created by human trainers, and publicly available data the models were trained on, are not necessarily accurate, objective, or unbiased. This means information coming out of such systems should not be trusted blindly but should be fact-checked before it is used.
Another issue is that not only might the generated information be inaccurate, it could also be deliberately corrupted: if an attacker manages to manipulate or inject malicious data into the training datasets from which AI algorithms learn, the result can be biased models or compromised security systems. This is achieved by providing deceptive inputs that cause the algorithms to produce unintended or even harmful outputs (think of self-driving cars that use AI technologies to make real-time decisions based on poisoned data…). A minimal sketch of this idea follows.
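To make the concept concrete, here is a minimal, self-contained sketch of one simple form of data poisoning, label flipping, using scikit-learn; the synthetic dataset and the 10% flip rate are illustrative assumptions, not drawn from any real incident.

```python
# Demonstrates label-flipping data poisoning on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 10% of the training samples
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=len(y_train) // 10, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Comparing the two scores shows how even modest, silent tampering
# with training data may degrade a deployed model.
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```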
Several reports confirm that some AI chatbots have been intentionally "tricked" into creating… malware! This means people with no prior coding expertise or development background could potentially create and launch malware, significantly increasing the risk that organizations will suffer a cyberattack. Chatbots could also be used for social engineering attacks, generating tailored, almost undetectable phishing emails, or creating fake social media profiles to interact with employees, all for criminal purposes.
There are currently many legal considerations and concerns surrounding AI chatbots, mainly regarding data privacy and Intellectual Property (IP), as well as liability and accountability.
It is worth asking: Is data handling compliant with international and local data regulations (e.g., GDPR in the EU and the EEA)? Who owns AI-generated content? Can this content be copyrighted? Who is liable when an AI chatbot causes harm or makes erroneous decisions?
The current lack of government oversight and regulation could allow malicious actors to use AI chatbots to do evil, directly or indirectly, without even being held legally accountable for it (at least for now).
While AI chatbots are designed to automate interactions, relying solely on automated systems without human oversight can pose risks. Chatbots may misinterpret user queries, respond inadequately to complex situations, or fail to detect and respond appropriately to security-related issues. Human intervention and oversight are necessary to handle exceptional scenarios, security breaches, or when sensitive information is involved.
Finally, an overreliance on AI technologies without proper safeguards or human oversight can lead to a false sense of security. If AI systems fail or are compromised, it can have severe consequences, highlighting the need for a balanced approach to cybersecurity.
Corporations can take several measures to mitigate the risks associated with AI technologies like AI chatbots and ensure data privacy.
Train employees on cybersecurity best practices, including AI-specific risks. Foster a culture of security awareness, emphasizing the importance of following secure procedures, identifying potential threats, and reporting suspicious activities.
Organizations must comply with relevant data protection and privacy regulations, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). Compliance involves obtaining user consent, providing access to personal data, and implementing appropriate security measures. Stay up to date with the regulations and compliance requirements of every jurisdiction in which the corporation operates, and implement appropriate measures to meet data protection, privacy, and security standards.
A Data Loss Prevention (DLP) solution can play a crucial role in protecting privacy and mitigating the risks associated with AI chatbots, allowing organizations to enforce data handling policies and minimize the risk of data leakage or unauthorized disclosure. Such solutions enable proactive monitoring, control, and response to privacy risks, giving organizations greater confidence in their chatbot interactions and in their compliance with privacy regulations. The sketch below gives a flavor of the kind of check such a solution performs.
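As a deliberately simplified illustration, the following sketch scans an outbound chatbot prompt for common PII patterns before it leaves the organization; the regular expressions and the blocking logic are hypothetical placeholders, not any specific vendor's product, and real DLP engines use far more sophisticated detection.

```python
import re

# Illustrative PII patterns (simplified; production DLP tools combine
# many more patterns with context-aware and ML-based classifiers)
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of PII patterns found in an outbound prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this: John's email is john.doe@example.com, SSN 123-45-6789."
findings = check_prompt(prompt)
if findings:
    # A real solution would block, redact, or route the prompt for review
    print(f"Blocked: prompt contains possible PII ({', '.join(findings)})")
else:
    print("Prompt forwarded to the chatbot")
```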
AI-powered tools should be integrated into the organization's existing cybersecurity infrastructure and practices, ensuring they align with established security protocols and complement other security measures effectively. If attackers are using AI to perpetrate improved cyberattacks, then make sure your organization is also using AI-powered tools to defend itself. Invest in cybersecurity and deploy an Extended Detection & Response (XDR) solution to help identify and address cybersecurity threats across your whole organization and all its components: endpoints, network environment, applications and workloads, cloud-based data storage, etc. With AI making it possible to analyze immense amounts of data in almost no time, such solutions can detect abnormal behaviors and threat-related patterns and take mitigation actions to stop possible threats before they materialize, as sketched below.
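To illustrate the kind of behavioral analysis such tools automate, here is a minimal anomaly detection sketch using scikit-learn's IsolationForest on made-up login telemetry; the features and values are invented for illustration, and a real XDR platform works on far richer signals across the whole environment.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up login telemetry: [hour of day, MB downloaded, failed login attempts]
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(13, 2, 500),   # logins clustered around business hours
    rng.normal(50, 15, 500),  # typical download volume
    rng.poisson(0.2, 500),    # rare failed attempts
])

# Learn what "normal" behavior looks like
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# A 3 a.m. login with a large download and many failed attempts
suspicious = np.array([[3, 900, 12]])
print(model.predict(suspicious))  # -1 flags the event as anomalous
```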
By implementing these strategies, corporations can significantly enhance their cybersecurity posture and reduce the risks associated with AI technologies. However, it's important to note that cybersecurity is an ongoing effort, and organizations should continuously adapt and evolve their practices and review their security posture to keep pace with evolving threats and technologies.
AI chatbots offer significant benefits in enhancing user experiences and streamlining customer interactions. However, it is essential to acknowledge and address the cybersecurity, legal, and privacy risks associated with these technologies. Your employees are your first line of defense and it’s important that they are informed and educated about AI and the risks that accompany it; this way, your team can implement appropriate security measures and mitigate potential threats to data privacy, user identity, and overall system integrity. With a proactive approach to cybersecurity, organizations can leverage the transformative potential of AI chatbots while safeguarding user trust and sensitive information.
Please do not hesitate to reach out to the team at ISEC7 with any questions about AI and how it affects your organization's cybersecurity. We would be happy to provide an objective assessment of what can address your organization's needs and of the risk mitigation required to enhance your current digital workplace.
Find out more about ISEC7's Services and Solutions.