This blog posting represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.

Developers Don’t Know How AI Works, And Neither Does The AI Itself.

Posted on 18th July 2023

This report on Vox highlights the scariest aspect of generative AI systems like ChatGPT: the developers cannot tell you how they work, and neither can the AI itself (many people have tried to extract an explanation, but humans simply don't speak the language that AI uses internally).

The idea that we are using, and plan to use even more widely in future, systems that are inherently unpredictable and amoral should worry us all.

Modern AI systems are not programmed in the conventional sense. The underlying neural network engine is coded, but the "intelligence" of AI comes from what it learns from huge sets of data, typically scraped from the Internet (and we all know what a cesspit of misinformation and immorality the Internet is).
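
To illustrate what "not programmed in the conventional sense" means, here is a minimal sketch in Python (a toy two-layer network learning XOR; the task, sizes and names are illustrative, not taken from any real system). The developer writes only the generic training loop; the behaviour ends up encoded in opaque numeric weights:

```python
# Minimal sketch: the developer writes only this generic training machinery;
# the network's behaviour ends up encoded in opaque numeric weights.
import numpy as np

rng = np.random.default_rng(0)

# Tiny toy task: learn XOR. Real systems differ mainly in scale.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network with random initial weights.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)

    # Gradient-descent update: this is all the "programming" there is.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
print(W1)            # ...but the "knowledge" is just these opaque numbers
```

Nowhere in that code is the rule for XOR written down; it emerges in the weight matrices during training. A large language model is the same mechanism at a scale of billions of weights, which is why neither its developers nor the model itself can explain its behaviour in human terms.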

This has some consequences:

  1. It is not possible to code rules of protection and morality (such as Asimov's Three Laws of Robotics) into AI.
  2. Any attempt to get an AI system to predict how it would behave in various scenarios (for example, whether it would ever attack humans) is inherently futile. Several people have attempted this, and the results are of great concern.
  3. It is impossible to ensure that AI will not allow itself to be used for illegal purposes; the only protection we have against illegal use is the conditions of use.
  4. Because the learning process is largely automatic, due to the necessary size of training datasets, it is impossible to prevent bias and prejudice from creeping into an AI's "mindset". There have already been a number of reported cases of racial bias in AI systems' decisions (see the sketch after this list).
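
To make point 4 concrete, here is a minimal sketch in Python (using an invented, purely synthetic "loan approval" dataset; neither the data nor the scenario comes from any real system) of how a model faithfully absorbs whatever bias its training data contains, even though nobody ever codes the bias in:

```python
# Minimal sketch: a model trained on historically biased decisions
# reproduces that bias, although no rule about "group" was ever coded.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Invented synthetic data: income predicts repayment identically in both
# groups, but the historical approvals in the training set penalised group 1.
group = rng.integers(0, 2, n).astype(float)   # protected attribute (0 or 1)
income = rng.normal(0.0, 1.0, n)              # standardised income
approved = (income - 0.8 * group + rng.normal(0, 0.3, n) > 0).astype(float)

# Train a plain logistic regression on [income, group, intercept].
X = np.column_stack([income, group, np.ones(n)])
w = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - approved) / n       # average gradient step

# The weight on "group" comes out clearly negative: the model has learned
# to penalise group 1, because that is what the training data taught it.
print("learned weights [income, group, intercept]:", w.round(2))
```

In this toy case the bias could be removed by deleting the `group` column, but in real datasets that rarely helps: other features (postcode, occupation, language) act as proxies for the protected attribute, which is why bias is so hard to prevent when the training data itself is biased.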

There are many examples in science fiction of what could go wrong due to an amoral AI. One of the most extreme is Avengers: Age of Ultron.