Critical remarks on current generative AI (part I)

Gabriele Scheler
3 min read · Mar 26, 2024

An end point, not the beginning, of a technological development. Even though there is a difference between generative AI and classic AI (more accurately, machine learning), the differences are not really very big.
Both rely exclusively on statistical data analysis; crucially, no serious internal information processing takes place.
The model reproduces its data. It can do nothing else.

If you know the data very precisely and have enough of it at good quality, then the predictions can of course be very useful. The result is completely data-dependent: the worse the data, the worse the output.

Neural networks (classical AI) are very good for getting an overview; they are a tool in the field of machine learning. For generative AI, there is currently only one technology in use (transformers with attention heads, following the encoder-decoder principle), but that doesn't have to remain the case in the future.
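The attention mechanism mentioned above can be sketched minimally. This is a generic illustration of scaled dot-product attention (the core operation inside a transformer's attention heads), not the internals of any particular commercial model; all array sizes here are arbitrary toy values.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: shift by the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """One attention head: each output row is a weighted average of the
    rows of V, with weights derived from query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_k) similarity matrix
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

# toy example: 3 query tokens attend over 4 key/value tokens
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)
```

In this sense the network is purely statistical: the output is nothing but a data-dependent weighted mixture of what was put in.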

How does it work, then, when you define a prompt and an image is created in which the components appear and the whole thing more or less fits together? Well, we now have huge amounts of data from publicly accessible (not necessarily legally available) sources that can be stored in large networks with billions of parameters (OpenAI has 1.2 trillion, I read). These are large databases, but with completely different properties compared to previous databases.

The database contains a large number of links, derived from individual sentences…

Written by Gabriele Scheler

Computer scientist and AI researcher turned neuroscientist, supporting a non-profit foundation, Carl Correns Foundation for Mathematical Biology.
