IT Cyber Security guidance on the use of ChatGPT and other AI chat engines, pending an updated University-wide policy.
*In September 2023, Secretariat published Guidance to Staff on the Use of Artificial Intelligence, which should be consulted for overarching AI guidance. This article predates that publication and addresses specifically the need for advice on security and data protection.
The increasing use, availability and popularity of Artificial Intelligence (AI) engines such as ChatGPT, DALL-E and Bard bring significant interest and opportunity to academia. With these opportunities come threats and security implications that must be taken into consideration.
The University will review and provide updated policies as this emerging field grows, but in the meantime colleagues should be aware of the existing policies that govern the use of such technologies:
As general guidance, prompts/queries submitted must comply with the following:
This guidance is intended to cover all Large Language Models (LLMs) and AI chatbots, including ChatGPT, GPT-n/x, Bard, LLaMA, BLOOM, etc.
Users should be aware that all queries/prompts submitted to an AI engine are visible to the organisation providing the LLM.
Users should also bear in mind that AI models are designed to return responses that appear convincing, and all output should be viewed from this position. LLMs have in many cases been trained on a large corpus of material, so the provenance of material and data reproduced by the models may be obscure, carrying a risk of copyright or other IP infringement. All output must be independently validated.
These guidelines apply to cloud-hosted models; self-hosted models will need their own security assessment.
On 4 July 2023, the Russell Group published a joint statement containing a set of principles on the use of generative AI tools in education.
These principles state:
Readers are directed to the linked document "Russell Group principles on the use of generative AI tools in education" for the additional context and detail that support these broad principles.
https://doi.org/10.48550/arXiv.2012.07805 - Extracting Training Data from Large Language Models
https://doi.org/10.48550/arXiv.2304.09655 - How Secure is Code Generated by ChatGPT?
https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf - Russell Group principles on the use of generative AI tools in education
https://russellgroup.ac.uk/news/new-principles-on-use-of-ai-in-education/ - New principles on use of AI in education