Perplexity in Language Models
Evaluating NLP models using the weighted branching factor
May 18, 2020
Perplexity is a widely used metric for evaluating language models in Natural Language Processing (NLP). This article covers the two ways in which it is normally defined and the intuition behind each.
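Before diving into the definitions, here is a minimal sketch of the core computation. Perplexity is the exponential of the average negative log-probability the model assigns to each token; the function name `perplexity` and the list of per-token probabilities are assumptions for illustration, not part of the article:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    that the model assigns to each token in the sequence."""
    n = len(token_probs)
    avg_nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every token behaves as if
# it were picking uniformly among 4 options, so perplexity is 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

This is why perplexity is often described as a "weighted branching factor": it measures, on average, how many equally likely choices the model is effectively deciding between at each step.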
Outline
- A quick recap of language models