> Model complexity is a key concept in statistics and machine learning, and is a core consideration in prediction problems—a higher complexity allows for a better fit to the training data, but may result in overfitting, whereas a lower complexity may lack the ability to capture sufficiently rich behavior, and hence lead to underfitting. There are numerous different ways to quantify the complexity of a prediction model. One such way is called the (effective) degrees of freedom (Efron, 1983, 1986; Hastie and Tibshirani, 1987) of a model, which is a classical concept in statistics, and will play a central role in our paper. This is often interpreted as the number of “free parameters” in the fitted model.[^1]
[^1]: Patil, P., Du, J.-H., & Tibshirani, R. J. (2024). Revisiting Optimism and Model Complexity in the Wake of Overparameterized Machine Learning. arXiv preprint arXiv:2410.01259.
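
For context (my note, not part of the quoted passage): the effective degrees of freedom is usually defined, following Efron (1986), as the scaled sum of covariances between each fitted value $\hat{y}_i$ and its corresponding response $y_i$, assuming $\operatorname{Var}(y_i) = \sigma^2$:

$$
\mathrm{df}(\hat{y}) \;=\; \frac{1}{\sigma^2} \sum_{i=1}^{n} \operatorname{Cov}(\hat{y}_i, y_i).
$$

For a linear smoother $\hat{y} = S y$ this reduces to $\operatorname{tr}(S)$, and for ordinary least squares with $p$ predictors it equals $p$, which matches the "free parameters" interpretation above.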
> [!info]- Last updated: February 3, 2025