IT Security Considerations on the Use of ChatGPT and AI LLM Engines


These IT Cyber Security guidelines cover the use of ChatGPT and other AI chat engines pending an updated University-wide policy.

*Note: In September 2023 the Secretariat published Guidance to Staff on the use of Artificial Intelligence, which should be consulted for overarching AI guidance; in December 2023, Generative AI guidance for taught students followed. This article predates both and is based specifically on the need for advice on security and data protection.*

The increasing use, availability and popularity of Artificial Intelligence (AI) engines such as ChatGPT, DALL-E and Bard brings significant interest and opportunity to academia. With these opportunities come threats and security implications that must be taken into consideration.

The University will review and provide updated policies as this emerging field grows, but in the meantime colleagues should be aware that existing policies continue to govern the use of such technologies.

As general guidance, prompts/queries submitted must comply with the following:

This guidance is intended to cover all Large Language Models (LLMs) and AI chatbots, including ChatGPT, GPT-n/x, Bard, LLaMA, BLOOM and similar services.

Users should be aware that all queries/prompts submitted to an AI engine are visible to the organisation providing the LLM.
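To illustrate this point, the sketch below shows how a typical chat request is assembled before being sent over the network. The endpoint URL, model name and payload shape here are purely illustrative assumptions (real providers each use their own API schema), but the underlying point holds for all of them: the full text of the prompt travels in the request body, so whatever is typed or pasted is readable by, and may be retained by, the service operator.

```python
import json

# Illustrative only: a hypothetical chat-completion endpoint and model name.
# Real providers use different URLs and schemas, but all receive the prompt verbatim.
ENDPOINT = "https://api.example-llm-provider.com/v1/chat"
prompt = "Summarise this draft exam paper: ..."  # imagine sensitive content here

payload = {
    "model": "example-model",
    "messages": [{"role": "user", "content": prompt}],
}

# The serialised request body contains the user's prompt in full;
# this is what leaves the University and is visible to the provider.
body = json.dumps(payload)
assert prompt in body
```

Because the prompt is transmitted in full, the safe working assumption is that anything submitted could be stored and reviewed by the provider, so confidential or personal data should not be included.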

Users should be aware that AI models are designed to return responses that appear convincing rather than responses that are necessarily correct, and all output should be viewed from this position. LLMs have in many cases been trained on a large corpus of material, so the provenance of text and data reproduced by the models may be obscure, carrying a risk of copyright or other IP infringement. All output must be independently validated.

These guidelines apply to cloud-hosted models; self-hosted models will need their own security assessment.

On 4 July 2023 the Russell Group published a joint statement containing a set of principles on the use of generative AI tools in education.

These principles state:

  1. Universities will support students and staff to become AI-literate.
  2. Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
  3. Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
  4. Universities will ensure academic rigour and integrity are upheld.
  5. Universities will work collaboratively to share best practice as the technology and its application in education evolves.

Users are directed to the linked document "Russell Group principles on the use of generative AI tools in education" for additional context and details to support these broad principles.

References:

https://www.ncsc.gov.uk/blog-post/chatgpt-and-large-language-models-whats-the-risk - ChatGPT and large language models: what's the risk?

https://doi.org/10.48550/arXiv.2012.07805 - Extracting Training Data from Large Language Models

https://doi.org/10.48550/arXiv.2304.09655 - How Secure is Code Generated by ChatGPT?

https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf - Russell Group principles on the use of generative AI tools in education

https://russellgroup.ac.uk/news/new-principles-on-use-of-ai-in-education/ - New principles on use of AI in education