Beyond current AI (part II): How do genAI models differ from human cognition?

The human thought process has, of course, been the subject of several sciences for a long time, and in philosophy you can certainly find a lot about it in Plato, over 2,000 years ago. If you ask only about implementation, cognitive models have existed since the 1970s, and some of them have continued to be developed, e.g. ACT-R, ART, semantic networks, WordNet, and knowledge graphs. A big problem for AI in the past was world knowledge, because it is so extensive and difficult to encode. There was CYC, which relied on explicit coding by humans, and then Wolfram Alpha for mathematics and general knowledge. In the meantime, however, all world knowledge has become available in machine-readable form on the internet. It is there, often given as knowledge-entity graphs or easily transformable into them. OpenAI was the first to get the money from Microsoft to read and process large amounts of it. Originally I think $10 billion, then probably…
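(As an aside, and not from the article itself: a minimal sketch of what "knowledge-entity graphs" typically look like, namely facts stored as subject-predicate-object triples that can be indexed and queried; the example entities and predicates are illustrative only.)

```python
# Illustrative sketch: world knowledge as subject-predicate-object triples,
# the basic unit of a knowledge graph. Entities/predicates are made up.
from collections import defaultdict

triples = [
    ("Plato", "instance_of", "philosopher"),
    ("Plato", "wrote", "Republic"),
    ("ACT-R", "instance_of", "cognitive architecture"),
    ("WordNet", "instance_of", "semantic network"),
]

# Index by subject so a simple query ("what do we know about X?")
# becomes a dictionary lookup.
by_subject = defaultdict(list)
for subj, pred, obj in triples:
    by_subject[subj].append((pred, obj))

print(by_subject["Plato"])
# [('instance_of', 'philosopher'), ('wrote', 'Republic')]
```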

Gabriele Scheler
2 min read · Mar 29, 2024
