This blog post represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.
If you have comments on this blog post, please email me.
The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.
Click here to see the whole Opinion Blog.
To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").
Posted on 18th July 2023 | Thread: AI and Robotics.
This report on Vox highlights the scariest aspect of generative AI systems like ChatGPT: the developers cannot tell you how they work, and neither can the AI itself. Many people have tried to find out, but humans simply do not speak the language that AI uses internally. The idea that we are using, and plan to use even more widely, systems that are inherently unpredictable and amoral should worry us all. Modern AI systems are not programmed in the conventional sense: the underlying neural-network engine is coded, but the "intelligence" of an AI comes from what it learns from huge sets of data, typically drawn from the Internet (and we all know what a cesspit of misinformation and immorality the Internet is). This has some consequences:
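To illustrate the point about learning rather than programming, here is a minimal, purely illustrative sketch: a single artificial neuron taught the logical OR function. Nobody writes the rule "return a or b"; the behaviour emerges from numeric weights adjusted against training data. The code, names, and numbers below are my own toy example, not anything from the Vox report; real systems like ChatGPT have billions of such weights, which is why their internals resist human inspection.

```python
# Toy example: a single neuron learning logical OR via the
# classic perceptron rule. The resulting "program" is just a
# few numbers, not human-readable logic.

def step(x):
    # threshold activation: fire (1) if the weighted sum is non-negative
    return 1 if x >= 0 else 0

# training data for logical OR: inputs and the desired output
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights: the system starts knowing nothing
b = 0.0         # bias
lr = 0.1        # learning rate

# perceptron learning rule: nudge weights toward correct outputs
for _ in range(20):
    for (a, c), target in data:
        out = step(w[0] * a + w[1] * c + b)
        err = target - out
        w[0] += lr * err * a
        w[1] += lr * err * c
        b += lr * err

# the learned behaviour is encoded only in these opaque numbers
print(w, b)
print([step(w[0] * a + w[1] * c + b) for (a, c), _ in data])
```

After training, the neuron reproduces OR (outputs 0, 1, 1, 1), yet inspecting the weights alone tells you nothing obvious about *why*; scale that opacity up by nine orders of magnitude and you have the problem described above.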
There are many examples in science fiction of what could go wrong because of an amoral AI; one of the most extreme is Avengers: Age of Ultron.