Perplexity in Language Models

Evaluating NLP models using the weighted branching factor

Chiara Campagnola
11 min read · May 18, 2020


Perplexity is a useful metric for evaluating language models in Natural Language Processing (NLP). This article covers the two ways in which it is normally defined and the intuitions behind them.
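As a preview of what follows, here is a minimal sketch of one standard definition: perplexity as the exponential of the average negative log-probability a model assigns to the tokens in a text. The function name and the toy probabilities are illustrative, not from the article.

```python
import math

def perplexity(token_probs):
    """Perplexity as the exponentiated average negative log-probability
    of the observed tokens under the model (one common definition)."""
    n = len(token_probs)
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_log_prob)

# A model that assigns probability 0.25 to each of 4 tokens has
# perplexity 4: it is as "confused" as a uniform choice among 4 options.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Intuitively, the result behaves like a weighted branching factor: the effective number of equally likely options the model is choosing between at each step.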

Outline

  1. A quick recap of language models
