This blog posting represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.

If you have comments on this blog posting, please email me.

The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.

Click here to see the whole Opinion Blog.

To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").

Lots Of Worrying News About AI.

Posted on 20th February 2024

Show only this post
Show all posts in this thread (AI and Robotics).

The roll-out of AI into all aspects of our lives is in full swing, but it has brought a whole host of very worrying news. Below are some examples.

The BBC reports that big-tech companies (including Amazon, Google and Microsoft) "have agreed to tackle what they are calling deceptive artificial intelligence (AI) in elections". Whilst this might seem to be good news, I suspect that their efforts will be unsuccessful, and the fact that they have decided to do something shows just how dangerous AI disinformation is. It is bad enough when humans engage in disinformation, but AI provides the tools to dramatically increase the quantity and quality of such propaganda. With this increase in the quantity of disinformation, it will be nearly impossible for current fact-checking processes to keep up. I foresee a world where bad actors generate disinformation using AI, and the gate-keepers use AI to warn us that such content is untrustworthy. Social media is already full of junk, and AI is going to make that much worse.

According to Futurism, Amazon have been testing a new Large Language Model (LLM) that is showing "emergent abilities". The AI system ("Big Adaptive Streamable TTS with Emergent abilities", or BASE TTS) showed the ability to understand (and therefore potentially to use) aspects of language that it was not trained on. This is extremely scary.

Fortune tells us that Microsoft is reporting that Iran, North Korea, Russia and China are starting to use generative AI (Microsoft's own AI tools) to launch cyberattacks. One issue with that is the potential for a vastly increased volume of such attacks; another is that AI may devise attacks that are cleverer and more effective than man-made ones, using techniques that humans have not come up with. Given our dependence on IT for everything (news, banking, utility company operations, government services, telecoms, etc.), the impact could be enormous. Although Microsoft says that they have detected and disrupted some such cyberattacks, what about the ones they missed (or didn't care enough about), and what about other companies' AI tools?

Lifehacker reports on the announcement by Sam Altman, CEO of OpenAI, of Sora, an AI that can generate amazingly realistic videos from text input. The article contains some examples. Whilst some such fake videos will be harmless and entertaining, the potential for disinformation is enormous. People are generally likely to accept videos as genuine, and indeed courts often accept video evidence as factual (audio recordings are not, unless there is independent evidence of their veracity), creating risks of miscarriages of justice. Although the article states that it is (currently) possible, with careful examination of the videos, to detect that they are fake, this (Sora) is only the first version of such text-to-video AI, and with later versions it will become progressively more difficult to detect AI-generated fakes.

This report on Ars Technica covers a story that has appeared in many places. Air Canada's chatbot gave incorrect advice to a passenger about the airline's bereavement discount policy. When the passenger followed that advice by claiming the bereavement discount refund after booking, his claim was refused, so he sued Air Canada. He won his lawsuit, but the defence used by Air Canada in court is what I find worrying. They claimed that the chatbot was an independent entity, and that they were not responsible for the advice that it gave. Legally, the chatbot is an agent of the company, just as a human sales or customer support representative is, and any advice it gives has legal standing. What worries me, however, is that big companies which use chatbots and other AI will keep contesting this, and may eventually win, establishing a precedent that undermines customers' rights.

Forbes reports on a warning from Google about AI. AI is finding its way into the Apps on our smartphones (whether Android or iPhone), and it appears that we all have a blind spot regarding our Apps: we think that our interaction with them is private; it is not! The potential for leaking confidential information (your bank account information, the name of your lover, income that you may not have declared to the tax man, your political affiliations, etc.) is enormous. We all need to learn new habits if we want to keep our secrets secret.

With the roll-out of AI, many people were worried that they might lose their jobs to AI. This article on Tech.co lists some of the companies that have already replaced workers with AI (MSN, Google, Dukaan, Ikea, BlueFocus, Salesforce and Duolingo), and some that are planning to do so soon (IBM and BT). We were right to be worried! As a counterpoint, Futurism reports on a study by MIT which shows that replacing workers with AI is currently more expensive than keeping them, although the costs of AI are likely to fall in the future and companies are likely to plough ahead anyway.

Vice.com reports on very worrying results from international conflict simulations. Military organisations around the world are evaluating the use of AI to support decision-making. The results show that AI is very prone to sudden escalation: "In several instances, the AIs deployed nuclear weapons without warning". It sounds like an old movie that many of us will have seen.