This blog posting represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.

If you have comments on this blog posting, please email me.

The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.

Click here to see the whole Opinion Blog.

To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").

Biden Likely To Back AI Weapons.

Posted on 6th March 2021

Show only this post
Show all posts in this thread (AI and Robotics).

This news piece on the BBC reports that President Biden is being pressured by the US National Security Commission on Artificial Intelligence to reject calls for a global ban on AI-powered autonomous weapons systems, and instead to approve and fund their development. It seems likely that he will agree.

As the BBC report points out, "The most senior AI scientists on the planet have warned ... about the consequences ...". It seems that their warnings are not being taken seriously.

I have written before about the risks of AI in general, and about AI weapons in particular. I am strongly opposed to both.

If you are unconvinced, I strongly recommend that you either watch the movie "Screamers", or read the book upon which it is based ("Second Variety" by Philip K. Dick). The story is very much to the point, very plausible and thoroughly frightening.

The dangers of AI are a regular theme in science fiction, and many readers will have seen one or more of the movies or TV series that revolve around these dangers: "Avengers: Age of Ultron", the various movies in the "Terminator" franchise, "Next" (2020), "Westworld", "The Matrix" and sequels, "Blade Runner", "2001: A Space Odyssey" and "Ex Machina" are just a few of the better known of this genre. I challenge anyone to watch all the above and not be worried about AI.

The important thing to remember about the risks of AI and AI weapons is that, once the human race crosses the threshold into real AI, it will be impossible to go back. Once we start an AI-based war, we are basically all doomed.

The UN cannot decide about Killer Robots!

Posted on 13th September 2018

Show only this post
Show all posts in this thread.

As a reminder to all of us about just how dangerous "killer robots" (AI weapons) are, there was this piece in Metro, reporting on a warning by Professor Jim Al-Khalili, incoming president of the British Science Association.

The good professor says "Until maybe a couple of years ago had I been asked what is the most pressing and important conversation we should be having about our future, I might have said climate change or one of the other big challenges facing humanity, such as terrorism, antimicrobial resistance, the threat of pandemics or world poverty. But today I am certain the most important conversation we should be having is about the future of AI. It will dominate what happens with all of these other issues for better or for worse." In short, AI is extremely dangerous, and the risks and benefits need to be openly discussed, otherwise untrustworthy governments and companies will utilise the technology without public accountability.

Imagine my surprise, then, when I read this report from the BBC, about how MEPs (Members of the European Parliament) passed a resolution calling for an international ban on so-called killer robots. What worries me is not that the European Parliament is against killer robots, but the shocking fact that the UN was not able to pass a similar resolution: "Last month, talks at the UN failed to reach consensus on the issue, with some countries saying the benefits of autonomous weapons should be explored."

I wonder which countries were against the ban. My guess is the permanent members of the UN Security Council: the USA, Russia, China, France and Britain. Britain has been working on battlefield AI for at least 30 years, and the other countries for a similar period. The end of that road is an unlivable planet and the extinction of the human race.

AI Learning From Video Games Is Inherently Dangerous

Posted on 24th May 2018

Show only this post
Show all posts in this thread.

This BBC report describes how the UK's Ministry of Defence is worried. Apparently "Robots that train themselves in battle tactics by playing video games could be used to mount cyber-attacks, the UK military fears".

I think there is much more to be worried about. AI is already being developed for use on the battlefield. If AI is teaching itself from video games (artificial realities where normal ethics are either non-existent or of lower than usual priority), then the robots engaging in the wars of the future will be the most ruthless and cruel soldiers that ever existed. This is not some vague risk in the future: "Researchers in Silicon Valley are using strategy games, such as Starcraft II, to teach systems how to solve complex problems on their own".
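
To make concrete what "learning from a game" means, here is a minimal sketch in Python of Q-learning, a standard reinforcement-learning technique. It is purely illustrative (it is nothing like DeepMind's actual StarCraft II system, and all the names and numbers are my own assumptions), but it shows the crucial point: the only thing the agent optimises is the score.

```python
# A toy "battle" game: states 0..4, actions 0 (hold) or 1 (attack).
# Attacking always earns points; nothing in the reward mentions ethics.
import random

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

q_table = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy game dynamics: the reward is raw score, with no ethical term."""
    reward = 1.0 if action == 1 else 0.0
    return (state + 1) % N_STATES, reward

for episode in range(500):
    state = 0
    for _ in range(20):
        # Epsilon-greedy: mostly exploit the best-known action.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = q_table[state].index(max(q_table[state]))
        next_state, reward = step(state, action)
        # Standard Q-learning update: the agent optimises score, and only score.
        best_next = max(q_table[next_state])
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                           - q_table[state][action])
        state = next_state

# After training, "attack" dominates in every state, because the reward
# function never penalised it.
print([row.index(max(row)) for row in q_table])
```

The agent learns to attack everywhere, not because it is malicious, but because nothing in its objective says otherwise.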

We don't allow our children unsupervised learning until they have developed some moral sense. We certainly can't expect that unsupervised learning by AI will be a safe and successful exercise.

Welcome to The Age of Ultron.

Another AI Nail In The Coffin Of The Human Race

Posted on 21st October 2017

Show only this post
Show all posts in this thread.

I am sure that the researchers at Google, described in this report on Business Insider, are very pleased with themselves. I am somewhat less than pleased.

In May this year, they developed an artificial intelligence (AI) designed to help them create other AIs. Now it has demonstrated its abilities by "building machine-learning software that’s more efficient and powerful than the best human-designed systems."
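
To make the principle concrete, here is a minimal sketch of one program searching over model designs and keeping the best performer it finds. It is purely illustrative: Google's system reportedly uses reinforcement learning to search over neural-network architectures, whereas this toy uses random search over polynomial fits, and all the names and numbers are my own assumptions.

```python
# Toy task: fit y = x**2 on a few points, searching over the "design"
# (here just a polynomial degree) instead of having a human choose it.
import random

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]

def evaluate(degree):
    """Crude 'training': least-squares fit of y ~ c * x**degree."""
    num = sum((x ** degree) * y for x, y in zip(xs, ys))
    den = sum((x ** degree) ** 2 for x in xs)
    c = num / den
    return sum((c * x ** degree - y) ** 2 for x, y in zip(xs, ys))

best_design, best_loss = None, float("inf")
for _ in range(100):
    degree = random.randint(1, 5)   # candidate "architecture"
    loss = evaluate(degree)
    if loss < best_loss:
        best_design, best_loss = degree, loss

# The search program, not a human, chose the winning design.
print(f"best degree: {best_design}, loss: {best_loss:.6f}")
```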

In my last post on this subject (here) I listed three things that AI must not be allowed to do, to avoid the AI apocalypse. The first two had already begun; this new work by Google is the third and last.

As I wrote on 23rd February 2017, basically, we are screwed.

Would it be too much to ask the people working on AI to finally show some moral compunction, and to apply some common sense?

Why The Laws Of Robotics Cannot Work

Posted on 23rd February 2017

Show only this post
Show all posts in this thread.

There has been a lot of news about AI (Artificial Intelligence) recently. There have been major advances in the technology, and many products and services using it are being rolled out: we have AI chat-bots, AI personal assistants, AI in translation tools, AI being used to stamp out fake news, and AI being developed for use on the battlefields of the future, to name but a few. This new breed of AI uses surprisingly few computing resources: it no longer needs a huge computer centre, and simpler AI programs will run even on portable devices such as mobile phones.

My position remains that AI is extremely dangerous. Smarter people than me, such as Professor Stephen Hawking and Elon Musk, have said that AI poses an existential threat to humanity. It is not difficult to imagine scenarios where AI goes out of control and poses a threat to our existence: sci-fi movies and literature are full of examples (see the other posts in this thread for some of these works).

I have argued in the past for researchers to take more seriously the laws of robotics, first proposed by Isaac Asimov in 1942. These laws are fairly simple:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I now realise, however, that this approach cannot work. Modern AI is not programmed in the conventional sense; it learns by being fed data. As a result, the creators of an AI system do not know how it represents the information it has learned, nor how it implements the rules and priorities it has been given. It is therefore not possible to program the laws of robotics into the AI at a basic level, because the programmers do not know the language in which it is thinking. It is, of course, possible to include the laws of robotics in the information that is taught to the AI, but we can never know what priority the AI will really give those laws, nor even what it understands by words such as "harm" and "human".
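
To illustrate the problem, here is a minimal sketch (entirely my own construction, with made-up numbers) of the only way a "law" can enter a learned system: as one more term in the training objective, whose priority is just a tunable weight.

```python
# The system learns a single behaviour parameter w.
# task_loss rewards aggressive behaviour (minimised at w = 1.0);
# harm_penalty encodes "do no harm" (minimised at w = 0.0).
LAW_WEIGHT = 0.5   # the "priority" of the law: just a number we picked

def grad(w):
    # Derivative of task_loss + LAW_WEIGHT * harm_penalty,
    # i.e. of (w - 1.0)**2 + LAW_WEIGHT * w**2.
    return 2.0 * (w - 1.0) + LAW_WEIGHT * 2.0 * w

w = 0.0
for _ in range(1000):
    w -= 0.1 * grad(w)   # ordinary gradient-descent training

# The trained behaviour settles at w ≈ 0.67: the "law" was traded off
# against the task, not obeyed absolutely, and its influence depends
# entirely on the weight we happened to choose.
print(round(w, 2))
```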

Realistically, the only way we can protect ourselves from the doomsday scenarios of out-of-control AI is by ensuring that AI never has:

  1. The ability to do physical harm; this means no AI on the battlefield, in self-driving vehicles, in medical equipment, or in a whole host of other applications. I seriously doubt that industry and government will be capable of such restraint, or trustworthy enough to exercise it.
  2. The ability to learn more, once deployed, which amounts to reprogramming itself (see the sketch after this list). Since such continued learning is already one of the unique selling propositions of some AI products, the ship has already sailed on that piece of restraint.
  3. The ability to create more AI. At least this is not yet happening (as far as I know), but I suspect that it is only a matter of time before AI developers start to use AI tools to create the next generation of AI products.
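
To illustrate point 2 above, here is a minimal sketch (with made-up data and parameter names) of online learning: every input the deployed system sees nudges its weights, so the program running in the field soon differs from the program that was shipped.

```python
weights = [0.5, -0.2]   # the model as it was shipped

def predict(x):
    return weights[0] * x[0] + weights[1] * x[1]

def online_update(x, target, lr=0.05):
    """One step of gradient descent, applied after deployment."""
    error = predict(x) - target
    for i in range(len(weights)):
        weights[i] -= lr * error * x[i]

# Every interaction in the field silently rewrites the model's parameters.
for x, target in [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0)] * 50:
    online_update(x, target)

print([round(w, 2) for w in weights])   # no longer what was deployed
```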

So, basically, we are screwed!

Robots For Sex

Posted on 15th September 2015

Show only this post
Show all posts in this thread.

I am totally bemused by this article from the BBC. A campaign has been started to ban the use of robots as sex toys.

It seems to me that the whole point of robots is to perform tasks that are either unpleasant or dangerous for humans, and sex as a profession seems to qualify on both counts.

Although I have serious concerns about robots for combat (as discussed here), and believe that controls are needed to ensure that Isaac Asimov's laws of robotics are incorporated in all AI machines, I really do not see any inherent harm in the use of robots as sex toys. Their use in such roles should reduce the number of (mostly) women trapped in such demeaning and dangerous (due to STDs and violence) work.

Would Dr Kathleen Richardson prefer that we continue with the way things are now; would she want her own daughters (I actually don't know if she has any) to work in the sex trade?

Her idea seems to be based on the fear that robots for sex "will contribute to detrimental relationships between men and women, adults and children, men and men and women and women". Well, Dr. Richardson, where is the proof? Studies on the impact of pornography suggest otherwise. My personal opinion is that people who have damaged relationship skills, for whatever reason, are probably less likely to be further damaged by playing out their fantasies with robots than with real people.

Game Over!

Posted on 18th August 2015

Show only this post
Show all posts in this thread.

I am really becoming convinced that the majority of the human race wants to commit mass suicide. This recent story from the BBC is the latest evidence.

In a previous post on this subject, I wrote a definition of the conditions under which AI and robots become dangerous. I said: "If you build autonomous (i.e. AI-based) robots, and give them the ability to change their design, and to replicate themselves, then without a system that ensures that the laws of robotics are always programmed into the machines without alteration, it is game over for the human race". Well, guess what researchers have now done: precisely that!

I am sure that AI will be applied by the people who will control it (more likely to be politicians than scientists) to solving thoroughly worthy problems: world peace, global warming, world hunger, pollution of our environment, etc. The trouble with all these problems is that the primary cause is clear: too many people. So the obvious solution will be to reduce the human population, possibly to zero.

Do you want to live in a world where humans are culled by machines to limit the damage that we do to the planet? I sure don't.

AI and Robotics: A Threat to Us All.

Posted on 4th December 2014

Show only this post
Show all posts in this thread.

There have been a lot of stories about, and a general rise in popular interest in, robotics and AI recently. There have been robotics competitions, and rethinking of the famous Turing Test of Artificial Intelligence. Recent space projects have involved at least semi-autonomous functioning, due to the impracticality of remotely controlling devices at vast distances.

I have always been concerned about this area of technology, but have been accused of paranoia by my friends and colleagues. Now Prof. Stephen Hawking has shared that he is also worried (as reported in this BBC story), and has said that "efforts to create thinking machines pose a threat to our very existence".

Isaac Asimov first defined the laws of robotics in the short story "Runaround" in 1942. The laws are intended as a safety feature, to ensure that robots do no harm to people, and do what they are told.

I have worked in the field of robotics (robot vehicles for military uses), and I know that Asimov's laws of robotics are generally completely ignored by researchers in robotics and AI. That is like building fast cars without brakes.

If anyone doesn't believe that robotics are being developed for the battlefield, check out this article in Popular Science.

If anyone finds the plot line of Terminator too fanciful, check out this BBC article about a project to connect robots to the Internet so that they can learn from public sources and each other. Sounds a lot like SkyNet to me.

I think the description that really puts the risks into perspective was written by Philip K. Dick: "Second Variety". It is a short story, also made into a movie called "Screamers". The message is clear. If you build autonomous (i.e. AI-based) robots, and give them the ability to change their design (already being experimented with in AI machines), and to replicate themselves (already being seriously considered by scientists and engineers), then without a system that ensures that the laws of robotics are always programmed into the machines without alteration, it is game over for the human race. Of course, it only takes one rogue government, terrorist group or company to not play by the rules, and the rules become useless.

Maybe you also think I am being paranoid, but Stephen Hawking is a very smart guy, and he is worried. You at least owe it to yourselves to read "Second Variety" or watch "Screamers" before you dismiss this.