| 2 minute read

Did ChatGPT write this article for me?

The rapidly increasing use, and sophistication, of ChatGPT and other large language models (LLMs) has prompted even the CEO of OpenAI (ChatGPT's creator) to call for greater regulation of artificial intelligence. While governments around the world grapple with how best to address the challenges posed by increasingly capable LLMs, what should employers be doing in the meantime?

One thing we think employers should be considering now is introducing a policy on the use of ChatGPT and other LLMs. But before asking ChatGPT to write your company policy for you, we recommend reading the points below.

LLMs, the best known of which is currently ChatGPT, are a form of artificial intelligence trained on large amounts of data (in ChatGPT's case including books, journals and news articles). This training allows them to learn patterns and connections between words and phrases in order to generate plausible responses to questions, requests or prompts.

There are inherent risks that come with using LLMs, some of which we are not yet even aware of. For example:

  • Unless users specifically opt out, ChatGPT will learn from their input data and may use it when responding to future questions, requests or prompts, meaning confidential information could be regurgitated to other users.
  • ChatGPT is known to "hallucinate", the term used to describe it generating inaccurate or irrelevant content.
  • Given the vast amount of data used to train LLMs, there is a risk that using their generated content will infringe third-party copyright.
  • If employees regularly use LLMs in carrying out their roles, line managers will struggle to effectively appraise employees’ genuine capabilities.

In order to address these risks, employers may decide to prohibit employees' use of ChatGPT and other LLMs entirely. However, an outright ban is unlikely to be sustainable in the long term and will deprive employers of the benefits of LLMs. A more balanced approach would be to introduce rules and restrictions around the use of LLMs. We set out some potential options for employers below:

  • Prohibiting employees from inputting employer confidential information into any LLM.
  • Ensuring employees verify generated content (and perhaps requiring them to keep a record of sources used for verification purposes).
  • Requiring employees to get manager approval before publishing content externally if it was created using an LLM.
  • Making sure employees opt out of allowing ChatGPT to use their input as training data.
  • Monitoring employees’ use of LLMs (employers intending to go down this route should ensure that it is expressly stated that such monitoring may take place in the appropriate policy).

(...and no, ChatGPT did not write this article for me.)

Out of the 43 per cent of professionals who use generative AI for work, around 70 per cent claimed that they use ChatGPT and other tools without disclosing this to their bosses.


artificial intelligence, employment