On 5 June, the lead article in The Times reported that "AI systems could kill many humans within two years", a warning attributed to Matt Clifford, who is helping the UK Prime Minister, Rishi Sunak, set up the government's AI taskforce.
On the face of it, this was just another warning in a long line of recent announcements from politicians, regulators and "AI gurus" who appear to believe that this new technology - referred to simply as "Artificial Intelligence" - has become so powerful and complex that humanity, having lost control of it, now stands on the brink of extinction.
On 7 June, Brad Smith, vice chairman and president of Microsoft, was reported, also in The Times, to have described recent commentary as a "fear parade" and to have called upon commentators to "ratchet down the rhetoric".
What are we to make of such developments? Is this technology really "out of control", or is it simply a case of media hyperbole designed to attract eyeballs?
It is true that regulators around the world are looking to provide a legislative framework for the safe development and use of this new technology, and that has to be a sensible approach. But it is also true that "Artificial Intelligence" as a term is so broad that it obscures the many diverse and interrelated technologies it encompasses - think Large Language Models, Machine Learning, Neural Networks and Autonomous Systems, to name but a few. The result is that using the term in headlines to define what is 'dangerous' and therefore requires regulation is extremely misleading.
An autonomous weapons system, designed to pick its own targets and destroy them without human involvement, is an example of 'Artificial Intelligence', but it is distinctly different from a Spotify playlist, which also relies on artificial intelligence to operate.
If the EU is finding it difficult to define what "Artificial Intelligence" is within the new AI Act, then perhaps we can forgive government advisors for lumping all of these technologies together under one label - but perhaps that is missing the point.
Perhaps regulators and advisors would be better served by looking at the outputs and uses of the technology rather than grappling with an overly broad definition of what it is. An autonomous weapons system is infinitely more dangerous than a musical playlist, and notwithstanding the EU's attempts to distinguish "High Risk AI Systems" from others, we are still left with the same issue: broad definitions will catch less sinister areas of AI development and unnecessarily impose stifling regulation upon them.
Recent reports that OpenAI's Sam Altman and other US-based tech company CEOs operating in the generative AI space have criticised the EU's attempts to impose broad new responsibilities to control Large Language Models and related technology have highlighted the differences in proposed regulatory approaches across jurisdictions.
A sectoral approach to AI regulation, such as that adopted by the UK, operating under broad policy direction and guardrails, appears more sensible: it allows regulators to look at the use cases specific to each sector rather than battle with a single, horizontal piece of legislation that simply cannot cover all the bases.
An output-based approach of this kind also allows a clearer distinction to be drawn between 'good AI' and 'bad AI', with the obvious advantage of focusing regulators' minds on protecting society from harm rather than hindering 'good AI' from delivering the benefits it promises.
And perhaps it would also allow more accurate headlines to appear, along the lines of: "'AI designed to cause harm should be made illegal', says leading UK regulator".
We can but hope.