Double descent in statistics and machine learning is the phenomenon where a model with a small number of parameters and a model with an extremely large number of parameters both achieve small test error, whereas a model whose number of parameters is roughly equal to the number of data points used to train it has a large test error.[2] This phenomenon has been considered surprising, as it contradicts assumptions about overfitting in classical machine learning.[3]
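The effect can be reproduced in a simple setting. The following is a minimal sketch (not drawn from any cited source) using minimum-norm least squares on random ReLU features; the target function, noise level, and sample sizes are illustrative assumptions. Test error typically peaks when the number of features p is near the number of training points, and falls again as p grows much larger.

```python
# Illustrative sketch of double descent with minimum-norm least squares
# on random ReLU features; all values below are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 5            # training/test sizes, input dimension

def target(X):
    return np.sin(X).sum(axis=1)             # arbitrary smooth target function

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = target(X_train) + 0.1 * rng.normal(size=n_train)   # noisy labels
y_test = target(X_test)

def features(X, W):
    return np.maximum(X @ W, 0.0)             # random ReLU feature map

for p in [10, 50, 90, 100, 110, 200, 1000]:   # number of parameters (random features)
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    Phi_train, Phi_test = features(X_train, W), features(X_test, W)
    # Minimum-norm least-squares fit; interpolates the training data once p >= n_train.
    beta = np.linalg.pinv(Phi_train) @ y_train
    test_mse = np.mean((Phi_test @ beta - y_test) ** 2)
    print(f"p = {p:5d}   test MSE = {test_mse:.3f}")
```

Averaging the test error over many random draws of the features and data makes the characteristic peak near p ≈ n_train, followed by a second descent for large p, easier to see.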