2 minute read

Do algorithms dream of electric sheep?

If this title has caught your attention, then I expect that - like me - you are a fan of Ridley Scott's epic 1982 film, Blade Runner. Scott left the audience guessing whether the film's protagonist, Rick Deckard, was human or a replicant with implanted memories just like those he was tasked with hunting down and eliminating.

Forty years on, in 2022, it's fair to say we are living in a world in which machines are replicating what humans can do in increasingly sophisticated ways. Pre-trained natural language processing transformer models, such as OpenAI's GPT-3 and Microsoft/NVIDIA’s MT-NLG, can produce text which appears very much as though it has been authored by a human. Would-be artists can play around with publicly accessible tools such as Midjourney or Stable Diffusion to render images that very few of us could produce by our own hand and eye (for example, the image accompanying this post was rendered by Stable Diffusion following a prompt based on the title).

These models are doing some very heavy lifting and have considerable commercial utility - writing copy, creating blog posts and so on. However, they are not (yet) Rick Deckard. Why?

  • They are heavily dependent on the vast quantity, and quality, of data required to train them. 
  • They are also dependent on significant human input, whether in selecting the training data, defining the objectives to be achieved, setting the parameters, choosing the prompts that guide the model towards those objectives, or refining the output it renders.
  • Small human refinements to the prompts (instructions such as: “/imagine prompt: elephant, on a bicycle, in the evening sky, in the style of photorealism”) can render quite different results. The more refined and detailed your prompts, the closer the output will be to what you have imagined (the short sketch after this list illustrates the idea). 
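For readers curious what that prompting looks like in practice, here is a minimal sketch using the open-source diffusers library to run Stable Diffusion locally. The model checkpoint, prompt wording and output filenames are illustrative assumptions only, not a description of how the image accompanying this post was actually produced.

```python
# A minimal sketch of prompt-driven image generation with Stable Diffusion,
# using the Hugging Face diffusers library. The checkpoint name and prompts
# below are illustrative assumptions only.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (assumed here).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # running on CPU is possible but much slower

# A terse prompt and a more refined one: the extra human detail steers the
# model closer to what the prompter has imagined.
terse_prompt = "elephant on a bicycle"
refined_prompt = (
    "elephant, on a bicycle, in the evening sky, in the style of photorealism"
)

for name, prompt in [("terse", terse_prompt), ("refined", refined_prompt)]:
    image = pipe(prompt).images[0]   # PIL image rendered from the prompt
    image.save(f"elephant_{name}.png")
```

Comparing the two saved images makes the point in the bullets above: the model does the heavy lifting, but the human's choice of words shapes what it renders.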

The human input still really matters. That is important because copyright law is still generally rooted in the notion that a human author and intellectual creativity are required for copyright to arise. And copyright is important when it comes to considering who has the ability to control and exploit the outputs rendered by these models. It is therefore foreseeable that, as adoption of these models expands, more disputes will emerge about which humans are entitled to any copyright in the outputs they render. Perhaps more interestingly, the UK's 'outlier' provision for computer-generated works which have no human author (s.9(3) CDPA 1988) may get a fresh airing in court.

Disclosure: I have lost myself for hours using the beta version of Midjourney typing in random prompts about angry judges and space llamas.



Tags

artificial intelligence, brands designs copyright, it and digital, technology, value in data