This article is the first in a multi-part series on the Prompting paradigm in NLP. The author introduces four major paradigms that have emerged in NLP over the years: Fully-Supervised Learning (Non-Neural Network), Fully-Supervised Learning (Neural Network), Pre-train and Fine-Tune, and Pre-train, Prompt, and Predict. The article focuses on the last of these, in which downstream tasks are reformulated as textual prompts, and introduces the concept of Prompt Engineering. Part 2 of the series will cover Prompting in more detail.
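To make the reformulation idea concrete, here is a minimal sketch (the template and label words are hypothetical, not taken from the article) of how a sentiment-classification input can be recast as a cloze-style textual prompt, with a "verbalizer" mapping the model's predicted word back to a task label:

```python
def to_prompt(text: str) -> str:
    """Wrap the input in a cloze template; a language model fills the [MASK] slot.
    The template here is an illustrative assumption, not the article's."""
    return f"{text} Overall, it was a [MASK] movie."

def map_answer(label_word: str) -> str:
    """Map the model's predicted word back to a task label (the 'verbalizer').
    The label words 'great'/'terrible' are illustrative assumptions."""
    verbalizer = {"great": "positive", "terrible": "negative"}
    return verbalizer.get(label_word, "unknown")

prompt = to_prompt("I loved the soundtrack and the acting.")
print(prompt)               # the reformulated input the model actually sees
print(map_answer("great"))  # "positive"
```

The key design choice prompting makes is that the task is adapted to the pre-trained model's native fill-in-the-blank interface, rather than fine-tuning the model toward a new task head.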

Source: Pre-train, Prompt, and Predict – Part 1 – Towards AI