IT Cyber Security guidance on the use of ChatGPT and other AI chat engines, pending an updated University-wide policy.
The increasing use, availability and popularity of Artificial Intelligence (AI) engines such as ChatGPT, DALL-E and Bard bring significant interest and opportunity to academia. These opportunities, however, come with threats and security implications that must be taken into consideration.
The University will review and publish updated policies as this emerging field develops, but in the meantime colleagues should be aware of the existing policies that govern the use of such technologies.
As general guidance, any prompts/queries submitted must comply with the following:
This guidance is intended to cover all Large Language Models (LLMs) and AI chatbots, including ChatGPT, GPT-n/x, Bard, LLaMA, BLOOM, etc.
Users should take into consideration that all queries/prompts made to an AI engine are visible to the organisation providing the LLM (see the illustrative sketch following these points).
Users should take into consideration that AI models are designed to return responses that appear convincing, and all output should be viewed with this in mind. LLMs have in many cases been trained on a large corpus of material, and as such the provenance of the material and data reproduced by the models may be obscure, carrying the risk of copyright or other IP infringement. All output must be independently validated.
These guidelines apply to cloud-hosted models; self-hosted models will require their own security assessment.
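As an illustration of the point above that prompts are visible to the provider, the sketch below shows a typical chat-completion request. The endpoint and model shown are OpenAI's public HTTP API; the key and prompt are placeholders, and other providers follow the same pattern. The prompt text travels verbatim in the request body to the provider's servers, where it may be logged or retained under the provider's terms.

    # Illustrative sketch only: the endpoint and model are OpenAI's public
    # chat API; the key is a placeholder. Other providers work the same way.
    import requests

    API_KEY = "sk-..."  # a real key also identifies the user/institution

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            # The prompt is transmitted verbatim: anything typed here
            # (data, code, personal details) is visible to the provider.
            "messages": [
                {"role": "user", "content": "Summarise this internal report: ..."}
            ],
        },
        timeout=30,
    )
    print(response.json()["choices"][0]["message"]["content"])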
https://www.ncsc.gov.uk/blog-post/chatgpt-and-large-language-models-whats-the-risk - NCSC: ChatGPT and large language models: what's the risk?
https://doi.org/10.48550/arXiv.2012.07805 - Extracting Training Data from Large Language Models