It actually seems funny to write an answer to this question (as it's so unusual to find an article about this these days 🤔). AI is short for Artificial Intelligence, or intelligence created by humans. But what is intelligence then? Intelligence breaks down into the tasks that beings are capable of doing: thinking, memorising, remembering, deciding, reasoning, predicting, recognising, improving, inventing, reproducing, dreaming, assuming, surviving, feeling, hoping, coping. All these tasks ending in 'ing' remind us that they never end until life does (except for reproducing 😉). The thing that makes beings actually alive is knowing that they are. But are all beings intelligent? Not all of them carry out all those tasks. Being the smartest of all, we humans still don't know whether a mouse dreams or not (at least Jerry does 😏). But we do know that beings with a smaller brain, or fewer brain cells, cannot carry out complex tasks. I'm sure an amoeba can't recog...
Large Language Models, or LLMs, seem very new, as the term rose to popularity with ChatGPT (2022), but language models have in fact been around for much longer (since Claude Shannon's 1948 paper "A Mathematical Theory of Communication"). Although at that time language models were not large, the main idea is still the same. Shannon's paper introduced the method of n-grams, in which the probability of the next possible word, drawn from a vocabulary list, is calculated based on the neighbouring words in a sentence, taken n at a time (hence n-gram). It rests on the notion that the context (the neighbouring words) has some relation to the word in consideration. If you are aware of Convolutional Neural Networks, they apply the same idea, but in terms of image pixels. In the 1980s, a paper from mathematical biophysics introduced the concept of Recurrent Neural Networks (RNNs), in which a series of mathematical neurons connected sequentially predict an output for a sequent...
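To make the n-gram idea concrete, here is a minimal sketch of a bigram model (n = 2): we count, over a toy corpus, which word follows each context word, and turn those counts into next-word probabilities. The corpus and function names here are my own illustration, not anything from Shannon's paper.

```python
from collections import Counter, defaultdict

# A toy corpus, purely for illustration
corpus = "the cat sat on the mat the cat ate the fish".split()

n = 2  # bigram: condition on the 1 preceding word
counts = defaultdict(Counter)
for i in range(len(corpus) - n + 1):
    context = tuple(corpus[i:i + n - 1])   # the preceding word(s)
    word = corpus[i + n - 1]               # the word that follows
    counts[context][word] += 1

def next_word_probs(context):
    """Estimate P(word | context) by relative frequency of the counts."""
    c = counts[tuple(context)]
    total = sum(c.values())
    return {w: k / total for w, k in c.items()}

print(next_word_probs(["the"]))
# 'the' is followed by 'cat' twice, 'mat' once, 'fish' once,
# so the model gives {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Sampling repeatedly from these distributions already generates (very crude) text, which is exactly the spirit of Shannon's experiments; modern LLMs replace the frequency table with a neural network but still output a probability over the next token.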