ChatGPT and other Large Language Models (LLMs) work by taking an input prompt and predicting the next "token" — the most likely next piece of text — based on patterns learned from massive amounts of "pre-training" data.
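To make that concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and uses the openly available GPT-2 model as a stand-in, since ChatGPT's own model isn't downloadable:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a small, publicly available pre-trained model (stand-in for ChatGPT's model).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits   # one score per vocabulary entry, per position
next_token_logits = logits[0, -1]      # scores for whatever token would come next
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five tokens the model considers most likely to come next.
top_probs, top_ids = torch.topk(probs, 5)
for p, tok_id in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([tok_id])!r}  p={p:.3f}")
```

Running this prints the model's top candidate tokens and their probabilities; generating a full response is just repeating this step, appending the chosen token each time.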
Each token is represented as a high-dimensional vector — essentially a mathematical representation of language. Several mechanisms (most notably "attention") are then used to determine the context of the input and weigh the most likely desired output. Finally, the completed response is printed on screen.
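Here is a small sketch of that first step — turning text into tokens and tokens into vectors — again assuming the transformers library and GPT-2 as a stand-in:

```python
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

prompt = "Language models turn words into vectors"
token_ids = tokenizer.encode(prompt)
print(tokenizer.convert_ids_to_tokens(token_ids))   # the tokens the model actually sees

# Each token id selects one row of the model's learned embedding matrix,
# giving one high-dimensional vector (768 numbers for this particular model) per token.
vectors = model.wte.weight[token_ids]
print(vectors.shape)   # e.g. torch.Size([7, 768])
```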
LLMs are very good at guessing the next token in a sequence, but they cannot reason or "guess" at genuinely new information.