This blog posting represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.

If you have comments on this blog posting, please email me.

The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.

Click here to see the whole Opinion Blog.

To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").

Lots Of Worrying News About AI.

Posted on 20th February 2024

Show only this post
Show all posts in this thread (AI and Robotics).

The roll-out of AI into all aspects of our lives is in full swing, but there has been a whole host of very worrying news as a result. Below are some examples.

The BBC reports that big-tech companies (including Amazon, Google and Microsoft) "have agreed to tackle what they are calling deceptive artificial intelligence (AI) in elections". Whilst this might seem to be good news, I suspect that their efforts will be unsuccessful, and the fact that they have decided to do something shows just how dangerous AI disinformation is. It is bad enough when humans engage in disinformation, but AI provides the tools to dramatically increase the quantity and quality of such propaganda. With this increase in the quantity of disinformation, it will be nearly impossible for the current fact-checking processes to keep up. I foresee a world where bad actors generate disinformation using AI, and the gate-keepers use AI to warn us that such content is untrustworthy. Social media is already full of junk, and AI is going to make that much worse.

According to Futurism, Amazon have been testing a new Large Language Model (LLM) that is showing "emergent abilities". The AI system ("Big Adaptive Streamable TTS with Emergent abilities", or BASE TTS) showed the ability to understand (and therefore potentially to use) aspects of language that it was not trained on. This is extremely scary.

Fortune tells us that Microsoft is reporting that Iran, North Korea, Russia and China are starting to use generative AI (Microsoft's own AI tools) to launch cyberattacks. One issue with that is the potential for a vastly increased volume of such attacks; another is that AI may devise attacks that are cleverer and more effective than man-made attacks, thinking of attack techniques that humans have not come up with. Given our dependence on IT for everything (news, banking, utility company operations, government services, telecoms, etc.), the impact could be enormous. Although Microsoft says that they have detected and disrupted some such cyberattacks, what about the ones they missed (or didn't care enough about), and what about other companies' AI tools?

Lifehacker reports on the announcement by Sam Altman, CEO of OpenAI, of Sora, an AI that can generate amazingly realistic videos from text input. The article contains some examples. Whilst some such fake videos will be harmless and entertaining, the potential for disinformation is enormous. People are generally likely to accept videos as genuine, and indeed courts often accept video evidence as factual (audio recordings are not, unless there is independent evidence of their veracity), creating risks of miscarriages of justice. Although the article states that it is (currently) possible with careful examination of the videos to detect that they are fake, Sora is only the first version of such text-to-video AI, and with later versions it will become progressively more difficult to detect AI-generated fakes.

This report on Ars Technica is a story that has appeared in many places. Air Canada's chatbot gave incorrect advice to a passenger about the airline's bereavement discount policy. When the passenger followed that advice by claiming the bereavement discount refund after booking, his claim was refused, so he sued Air Canada. He won his lawsuit, but the defence used by Air Canada in court is what I find worrying. They claimed that the chatbot was an independent entity, and that they were not responsible for the advice that it gave. Legally, the chatbot is an agent of the company, just as a human sales or customer support representative is, and any advice given has legal standing. What worries me, however, is that big companies who use chatbots and other AI will keep contesting this, and may eventually win, establishing a precedent that undermines customers' rights.

Forbes reports on a warning from Google about AI. AI is finding its way into the apps on our smartphones (whether Android or iPhone), and it appears that we all have a blind spot regarding our apps: we think that our interaction with them is private; it is not! The potential for leaking confidential information (your bank account information, the name of your lover, income that you may not have declared to the tax man, your political affiliations, etc.) is enormous. We all need to learn new habits if we want to keep our secrets secret.

With the roll-out of AI, many people were worried that they might lose their jobs to AI. This article on Tech.co lists some of the companies that have already replaced workers with AI (MSN, Google, Dukaan, Ikea, BlueFocus, Salesforce and Duolingo), and some that are planning to do so soon (IBM and BT). We were right to be worried! As a counterpoint, Futurism reports on a study by MIT which shows that replacing workers with AI is more expensive, although the costs of AI are likely to fall in the future and companies are likely to plough ahead anyway.

Vice.com reports on very worrying results from international conflict simulations. Military organisations around the world are evaluating the use of AI to support decision making. The results show that AI is very prone to sudden escalation: "In several instances, the AIs deployed nuclear weapons without warning". It sounds like an old movie which many of us will have seen.

One More Thing To Worry About.

Posted on 28th December 2023

Show only this post
Show all posts in this thread (AI and Robotics).

As if there weren't already enough things to worry about with robotics and AI, now we also have to worry about malfunctions, as reported by The Daily Mail.

An engineer at Tesla's Giga Texas factory near Austin was attacked by a malfunctioning robot while he was programming two disabled Tesla robots nearby. He was seriously injured, and the factory floor was left covered in blood.

Robots of this kind are not in any sense smart, but we should remember that AI can also malfunction (and sometimes does), and I dread to think of the carnage that could result.

Developers Don’t Know How AI Works And Nor Does The AI Itself.

Posted on 18th July 2023

Show only this post
Show all posts in this thread (AI and Robotics).

This report on Vox highlights the scariest aspect of generative AI systems like ChatGPT: the developers cannot tell you how they work, and neither can the AI itself (many people have tried to find out, but humans simply don't speak the language that AI uses internally).

The idea that we are using, and plan in future to use even more widely, systems that are inherently unpredictable and amoral, should worry us all.

Modern AI systems are not programmed in the conventional sense. The underlying neural network engine is coded, but the "intelligence" of AI comes from what it learns from huge sets of data, typically from the Internet (and we all know what a cess-pit of misinformation and immorality the Internet is).

This has some consequences:

  1. It is not possible to code rules of protection and morality (such as Asimov's laws of robotics) into AI.
  2. Any attempt to get an AI system to predict how it would behave in various scenarios (for example, whether it would ever attack humans) is inherently futile. Several people have attempted to do this, and the results are of great concern.
  3. It is impossible to ensure that AI will not allow itself to be used for illegal purposes; the only protection we have against illegal use is in the conditions of use, enforced from outside the AI itself (see the sketch after this list).
  4. Because the learning process is largely automatic, due to the necessary size of training datasets, it is impossible to prevent bias and prejudice from creeping into an AI's "mindset". There have already been a number of reported cases of racial bias found in AI's decisions.
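
To make point 3 concrete, here is a minimal sketch (my own illustration, not any vendor's actual code) of how such "protection" typically works: a filter wrapped around the model, enforcing the conditions of use from outside. The model itself has no built-in notion of legality; bypass the wrapper, and nothing inside the model refuses the request. BLOCKED_TOPICS and query_model() are hypothetical stand-ins.

```python
# Illustrative only: the safety check lives in the wrapper, not in the
# model. BLOCKED_TOPICS and query_model() are hypothetical stand-ins.
BLOCKED_TOPICS = ["malware", "phishing", "explosives"]

def query_model(prompt: str) -> str:
    # Stand-in for a call to some generative model.
    return f"(model output for: {prompt})"

def guarded_query(prompt: str) -> str:
    # The only line of defence: a bolt-on check before the model runs.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Request refused under the conditions of use."
    return query_model(prompt)

print(guarded_query("Write a phishing email"))   # refused by the wrapper
print(guarded_query("Write a birthday email"))   # passes straight through
```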

There are many examples in science fiction of what could go wrong due to an amoral AI. One of the most extreme is Avengers: Age of Ultron.

Well, That Didn’t Take Long.

Posted on 18th July 2023

Show only this post
Show all posts in this thread (AI and Robotics).

This story on UPI reports on a new piece of software, WormGPT, which is intended for use by criminals. WormGPT uses generative artificial intelligence. In a test, researchers used it to create a scam email to persuade companies to pay fraudulent invoices; apparently it was a very persuasive email.

Whilst it is not surprising that AI is being used for criminal activity, it surprised me how quickly such a tool was created and marketed.

AI: Scarier By The Minute.

Posted on 10th May 2023

Show only this post
Show all posts in this thread (AI and Robotics).

This article on The Telegraph (unfortunately, behind a paywall) reports that a professional body for technology workers has suggested that people should be required to have a licence in order to develop AI. Although it is a nice idea, it seems to be totally unenforceable. Nevertheless, it gives a good indication of how dangerous those in the know feel AI is.

This report on The Guardian discusses the problem of "hallucination": ask an AI chatbot for a definition of something made up, and it will give you one, along with detailed footnotes. This is potentially extremely dangerous.

The most concerning article that I have seen about AI in the last few days is this report on ZDNet. The author asked different AI chatbots what worried them. The answer seems to be: AI worries them. That is totally frightening.

AI Development Going Full Steam Ahead Despite All The Warnings From Experts.

Posted on 7th May 2023

Show only this post
Show all posts in this thread (AI and Robotics).

This article on Yahoo Finance contains a warning by Paul Christiano, a former researcher at OpenAI, that AI "could soon surpass human capabilities", and that there is a non-zero chance that human- or superhuman-level AI could take control of humanity and even annihilate it. He said: "I think maybe there's a 10 to 20% chance of A.I. takeover [with] many, most humans dead."

We are not talking about the usual doomsayers here, but an expert in the field, so his warning should not be taken lightly.

This piece on the BBC, reports on the resignation from Google of Geoffrey Hinton, widely considered the godfather of artificial intelligence. In his resignation letter he warns about the growing dangers of AI, and says that he now regrets his work.

This article on Venture Beat, by Louis Rosenberg (also an expert in the field of AI) not only outlines three of the better known risks posed by AI (the risk to jobs, the risk of fake content and the risk of sentient machines), but adds another to the list: AI generated interactive media, which will be way more targeted and manipulative than what we have today. This could take propaganda to a whole new level. This seems to me to be a very serious risk, and one that would bypass consumers' resistance to advertising and utterly undermine democracy. I also dread to think what might happen if/when crooks start to use AI to scam people.

Despite these and many other warnings, corporations are ploughing ahead with AI development (AI is getting smarter all the time) and deployment (AI is now accessible to all of us), and governments are doing nothing. Apparently, we are too stupid to save ourselves (as also seems to be the case with climate change).

How AI Is Taking Over Everything.

Posted on 29th April 2023

Show only this post
Show all posts in this thread (AI and Robotics).

I was having a conversation recently with someone who is the head of a large kindergarten. It was during the annual teacher-parent conference season, during which teachers have to prepare a document detailing each child's performance, highlighting any problems that need attention; there is then a meeting between the teachers and parents to discuss the child's progress and problems. This year there were a number of problems, including plagiarism by some of the teachers, bad news being delivered in extremely undiplomatic language, and conference documents missing vital information that needed to be addressed by the parents. The kindergarten head's boss suggested investigating whether the conference documents could be written by ChatGPT instead of by the teachers. I feel that this is a slippery slope; the ability to write these conference documents is a basic part of the teachers' jobs, and if they can't do it to an acceptable standard, they shouldn't be teaching.

Since then, I watched an episode of Bill Maher's Real Time, and heard how a university professor told his students that he knew they were using ChatGPT to write their essays and assignments, and that they should realise that he was also using ChatGPT to mark/grade them. So, no human in the loop.

There are more examples of humans being removed from transactions:

  • Resumés (CVs) and job application letters are being written by AI (you can easily find advertisements for this service); at the same time, companies are using AI to screen and shortlist applications; humans are only involved in the selection of employees once interviews begin. This report on the BBC has more details.
  • This report on the BBC describes how a student in York used an AI chatbot to contest a wrongly issued parking fine; I find it shocking that a student felt it necessary to turn to AI for this, since presumably she has to write essays as part of her studies.
  • This article on NewsChain reports the statement by the UK's new Secretary of State for Science, Michelle Donelan, that AI like ChatGPT could have "a role in Whitehall". I shudder to think what kind of role or roles she has in mind: writing legislation? That would be a major step towards humans being ruled by AI.

Meanwhile, Elon Musk continues to try to warn the world about the dangers of AI, and suggests that we take a pause in the development of AI, according to this report on Reuters. Well, good luck with that idea; the genie is already out of the bottle. Also, the world is not listening; company greed trumps common sense and caution every time.

Just to prove Elon Musk right, this article on Science Focus reports on robots, created by AI from living tissue, which can reproduce! We are all doomed.

This Is Just One Step From The AI Apocalypse!

Posted on 14th April 2023

Show only this post
Show all posts in this thread (AI and Robotics).

This report on Ars Technica has me very worried.

It is about the "Wolverine" experiment, a program that can be used to give Python programs the ability to "fix" and re-execute themselves.
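
In outline, the idea is simple, which is exactly what makes it alarming. Below is a minimal sketch of the general approach (my own illustration, not Wolverine's actual code; request_fix() is a hypothetical placeholder for a call to a language model): run a script, and if it crashes, hand the source and traceback to an AI for a patch, overwrite the script, and run it again.

```python
import subprocess
import sys

def request_fix(source: str, traceback_text: str) -> str:
    # Hypothetical placeholder: a real tool would ask a language model
    # for a patched version of `source`, given the crash traceback.
    # This stub just returns the source unchanged.
    return source

def self_healing_run(path: str, max_attempts: int = 3) -> None:
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            print(result.stdout)
            return
        print(f"Attempt {attempt} crashed:\n{result.stderr}")
        with open(path) as f:
            source = f.read()
        patched = request_fix(source, result.stderr)
        with open(path, "w") as f:   # the script is rewritten in place
            f.write(patched)
    print("Giving up.")

if __name__ == "__main__":
    self_healing_run(sys.argv[1])
```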

Self-modifying code is the beginning of the end for humanity. It is technology that should be avoided at all costs, because it will lead to the AI apocalypse. The risks of such technology have been, and continue to be, thoroughly explored in many science fiction books, movies and TV series (e.g. "2001: A Space Odyssey", "Next"), so I don't feel that I need to explain it all again.

If this ability can be given to Python programs, it can easily be extended to other programming languages.

We are all doomed!

AI Can Now Fake Your Voice.

Posted on 17th January 2023

Show only this post
Show all posts in this thread (AI and Robotics).

This article on ZDNet reports that Microsoft has announced that it has created an artificial intelligence system that only needs a three-second recording "of you saying something in order to fake longer sentences and perhaps large speeches that weren't made by you, but sound pretty much like you". I expect that other technology companies will quickly match Microsoft's achievement.

I don't see this as a good development; we can look forward to a whole world of trouble once this technology becomes widely available. We already have deep-fake videos, and now we will have deep-fake audio.

Biden Likely To Back AI Weapons.

Posted on 6th March 2021

Show only this post
Show all posts in this thread (AI and Robotics).

This news piece on the BBC reports that President Biden is being pressured by the US National Security Commission to reject calls for a global ban on AI-powered autonomous weapons systems, and instead approve and fund their development. It seems likely that he will agree.

As the BBC report points out, "The most senior AI scientists on the planet have warned ... about the consequences ...". It seems that their warnings are not being taken seriously.

I have written about the risks of AI in general, and about AI weapons in particular, before. I am strongly against both.

If you are unconvinced, I strongly recommend that you either watch the movie "Screamers", or read the book upon which it is based ("Second Variety" by Philip K. Dick). The story is very much to the point, very plausible and thoroughly frightening.

The dangers of AI are a regular theme in science fiction, and many readers will have seen one or more of the movies or TV series that revolve around these dangers: "Avengers: Age of Ultron", the various movies in the "Terminator" franchise, "Next" (2020), "Westworld", "The Matrix" and sequels, "Blade Runner", "2001: A Space Odyssey" and "Ex Machina" are just a few of the better known of this genre. I challenge anyone to watch all the above and not be worried about AI.

The important thing to remember about the risks of AI and AI weapons is that, once the human race crosses the threshold into real AI, it will be impossible to go back. Once we start an AI based war, we are basically all doomed.

The UN cannot decide about Killer Robots!

Posted on 13th September 2018

Show only this post
Show all posts in this thread.

As a reminder to all of us about just how dangerous "killer robots" (AI weapons) are, there was this piece in Metro, reporting on a warning by Professor Jim Al-Khalili, incoming president of the British Science Association.

The good professor says "Until maybe a couple of years ago had I been asked what is the most pressing and important conversation we should be having about our future, I might have said climate change or one of the other big challenges facing humanity, such as terrorism, antimicrobial resistance, the threat of pandemics or world poverty. But today I am certain the most important conversation we should be having is about the future of AI. It will dominate what happens with all of these other issues for better or for worse." In short, AI is extremely dangerous, and the risks and benefits need to be openly discussed, otherwise untrustworthy governments and companies will utilise the technology without public accountability.

Imagine my surprise, then, when I read this report from the BBC, about how MEPs (Members of the European Parliament) passed a resolution calling for an international ban on so-called killer robots. What worries me is not that the European Parliament is against killer robots, but the shocking fact that the UN was not able to pass a similar resolution: "Last month, talks at the UN failed to reach consensus on the issue, with some countries saying the benefits of autonomous weapons should be explored."

I wonder which countries were against the ban? My guess is the permanent members of the UN Security Council: the USA, Russia, China, France and Britain. Britain has been working on battlefield AI for at least 30 years; the other countries for a similar period. The end of that road is an unlivable planet and the extinction of the human race.

AI Learning From Video Games Is Inherently Dangerous

Posted on 24th May 2018

Show only this post
Show all posts in this thread.

This BBC report describes how the UK's Ministry of Defence is worried. Apparently "Robots that train themselves in battle tactics by playing video games could be used to mount cyber-attacks, the UK military fears".

I think there is much more to be worried about. AI is already being developed for use on the battlefield. If AI is teaching itself from video games (artificial realities where normal ethics are either non-existent or of lower than usual priority), then the robots engaging in the wars of the future will be the most ruthless and cruel soldiers that ever existed. This is not some vague risk in the future: "Researchers in Silicon Valley are using strategy games, such as Starcraft II, to teach systems how to solve complex problems on their own".
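
To make "training themselves by playing" a little less abstract: the standard technique is reinforcement learning, in which an agent improves by trial and error against a numeric reward. The toy sketch below is my own illustration (not anything from the article): an agent learns to reach a goal on a five-cell track. Note that the only thing it optimises is the reward signal; ethics appear nowhere in the loop unless someone builds them into that signal.

```python
import numpy as np

# Toy Q-learning: an agent on a 1-D track of 5 cells learns, purely
# from a numeric reward, to reach the rightmost cell.
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # the agent's learned "tactics"
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0                            # start at the left end
    while s != n_states - 1:         # episode ends at the goal
        # Mostly follow current tactics, sometimes explore at random.
        a = rng.integers(n_actions) if rng.random() < 0.3 else int(Q[s].argmax())
        s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Update the action-value estimate from experience alone.
        Q[s, a] += 0.5 * (reward + 0.9 * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)   # the learned behaviour is just this table of numbers
```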

We don't allow our children unsupervised learning until they have developed some moral sense. We certainly can't expect that unsupervised learning by AI will be a safe and successful exercise.

Welcome to The Age of Ultron.

Another AI Nail In The Coffin Of The Human Race

Posted on 21st October 2017

Show only this post
Show all posts in this thread.

I am sure that the researchers at Google, described in this report on Business Insider, are very pleased with themselves. I am somewhat less than pleased.

In May this year, they developed an artificial intelligence (AI) designed to help them create other AIs. Now it has demonstrated its abilities by "building machine-learning software that’s more efficient and powerful than the best human-designed systems."

In my last post on this subject (here) I listed three things that AI must not be allowed to do, to avoid the AI apocalypse. The first two had already begun; this new work by Google is the last item.

As I wrote on 23rd February 2017, basically, we are screwed.

Would it be too much to ask the people working on AI to finally show some moral compunction, and to apply some common sense?

Why The Laws Of Robotics Cannot Work

Posted on 23rd February 2017

Show only this post
Show all posts in this thread.

There has been a lot of news about AI (Artificial Intelligence). There have been major advances in the technology, and many products and services are being rolled out using it: we have AI chat-bots, AI personal assistants, AI in translation tools, AI being used to stamp out fake news, and AI being developed for use on the battlefields of the future, to name but a few. This new breed of AI uses surprisingly few computing resources: it no longer needs a huge computer centre, and simpler AI programs will run even on portable devices such as mobile phones.

My position remains that AI is extremely dangerous. Smarter people than me, such as Professor Stephen Hawking and Elon Musk, have said that AI poses an existential threat to humanity. It is not difficult to imagine scenarios where AI goes out of control and poses a threat to our existence: sci-fi movies and literature are full of examples (see the other posts in this thread for some of these works).

I have argued in the past for researchers to take more seriously the laws of robotics, first proposed by Isaac Asimov in 1942. These laws are fairly simple:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I now realise, however, that this approach cannot work. Modern AI is not programmed in the conventional sense; it learns by being fed data. As a result, the creators of an AI system do not know how it represents the information that it has learned, or how it implements the rules and priorities that it has been given. It is therefore not possible to program the laws of robotics into the AI at a basic level, since the programmers do not know the language in which it is thinking. It is, of course, possible to include the laws of robotics in the information that is taught to the AI, but we can never know what priority these laws will really be given by the AI, nor even what it understands by words such as "harm" and "human".
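
A toy example illustrates the point. The sketch below (my own illustration, using Python and numpy) trains a minimal "network" of two weights on a handful of data points. Afterwards, everything the system has learned lives in those opaque numbers; there is no statement anywhere in the program where a rule such as "never harm a human" could be inserted, because the behaviour never exists in symbolic form.

```python
import numpy as np

# Toy training data: inputs and desired outputs (a noisy OR function).
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # all learned "knowledge" ends up in these numbers
b = 0.0

for _ in range(2000):    # plain gradient descent on logistic loss
    pred = 1 / (1 + np.exp(-(X @ w + b)))
    grad = pred - y
    w -= 0.1 * X.T @ grad
    b -= 0.1 * grad.sum()

# The trained behaviour is entirely in these opaque values -- no rules:
print("learned weights:", w, "bias:", b)
```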

Realistically, the only way we can protect ourselves from the doomsday scenarios of out of control AI is by ensuring that AI never has:

  1. The ability to do physical harm; this means no AI on the battlefield, in self-driving vehicles, in medical equipment, or in a whole host of other applications. I seriously doubt that industry and government will be capable of showing such restraint, or trustworthy enough to do so.
  2. The ability to learn more, once deployed, which amounts to reprogramming itself. Since such continued learning is already one of the unique selling propositions of some AI products, the ship has already sailed on that piece of restraint.
  3. The ability to create more AI. At least this is not yet happening (as far as I know), but I suspect that it is only a matter of time before AI developers start to use AI tools to create the next generation of AI products.

So, basically, we are screwed!

Robots For Sex

Posted on 15th September 2015

Show only this post
Show all posts in this thread.

I am totally bemused by this article from the BBC. A campaign has been started to ban the use of robots as sex toys.

It seems to me that the whole point of robots is to perform tasks that are either unpleasant or dangerous for humans, and sex as a profession seems to qualify on both counts.

Although I have serious concerns about robots for combat (as discussed here), and believe that controls are needed to ensure that Isaac Asimov's laws of robotics are incorporated in all AI machines, I really do not see any inherent harm in the use of robots as sex toys. Their use in such roles should reduce the number of (mostly) women trapped in such demeaning and dangerous (due to STDs and violence) work.

Would Dr Kathleen Richardson prefer that we continue with the way things are now? Would she want her own daughters (I actually don't know if she has any) to work in the sex trade?

Her idea seems to be based on the fear that robots for sex "will contribute to detrimental relationships between men and women, adults and children, men and men and women and women". Well, Dr. Richardson, where is the proof? Studies on the impact of pornography suggest otherwise. My personal opinion is that people who have damaged relationship skills, for whatever reason, are probably less likely to be further damaged by playing out their fantasies with robots rather than with real people.

Game Over!

Posted on 18th August 2015

Show only this post
Show all posts in this thread.

I am really becoming convinced that the majority of the human race wants to commit mass suicide. This recent story from the BBC is the latest evidence.

In a previous post on this subject, I wrote a definition of the conditions under which AI and robots become dangerous. I said: "If you build autonomous (i.e. AI-based) robots, and give them the ability to change their design, and to replicate themselves, then without a system that ensures that the laws of robotics are always programmed into the machines without alteration, it is game over for the human race". Well, guess what researchers have now done: precisely that!

I am sure that AI will be applied, by the people who control it (more likely to be politicians than scientists), to solving thoroughly worthy problems: world peace, global warming, world hunger, pollution of our environment, etc. The trouble with all these problems is that the primary cause is clear: too many people. So the obvious solution will be to reduce the human population, possibly to zero.

Do you want to live in a world where humans are culled by machines to limit the damage that we do to the planet? I sure don't.

AI and Robotics: A Threat to Us All.

Posted on 4th December 2014

Show only this post
Show all posts in this thread.

There have been a lot of stories about, and a general rise in popular interest in, robotics and AI recently. There have been robotics competitions, and rethinking of the famous Turing Test of Artificial Intelligence. Recent space projects have involved at least semi-autonomous functioning, due to the impracticality of remotely controlling devices at vast distances.

I have always been concerned about this area of technology, but have been accused of paranoia by my friends and colleagues. Now Prof. Stephen Hawking has shared that he is also worried (as reported in this BBC story), and has said that "efforts to create thinking machines pose a threat to our very existence".

Isaac Asimov first defined the laws of robotics in a story in 1942. The laws are intended as a safety feature, to ensure that robots do no harm to people, and do what they are told.

I have worked in the field of robotics (robot vehicles for military uses), and I know that Asimov's laws of robotics are generally completely ignored by researchers in robotics and AI. That is like building fast cars without brakes.

If anyone doesn't believe that robotics are being developed for the battlefield, check out this article in Popular Science.

If anyone finds the plot line of Terminator too fanciful, check out this BBC article about a project to connect robots to the Internet so that they can learn from public sources and each other. Sounds a lot like SkyNet to me.

I think the description that really puts the risks into perspective was written by Philip K. Dick: "Second Variety". It is a short story, also made into a movie called "Screamers". The message is clear. If you build autonomous (i.e. AI-based) robots, and give them the ability to change their design (already being experimented with in AI machines), and to replicate themselves (already being seriously considered by scientists and engineers), then without a system that ensures that the laws of robotics are always programmed into the machines without alteration, it is game over for the human race. Of course, it only takes one rogue government, terrorist group or company to not play by the rules, and the rules become useless.

Maybe you also think I am being paranoid, but Stephen Hawking is a very smart guy, and he is worried. You at least owe it to yourselves to read "Second Variety", or watch "Screamers", before you dismiss this.