Single estimator versus bagging: bias-variance decomposition

This example illustrates and compares the bias-variance decomposition of the expected mean squared error of a single estimator against a bagging ensemble.

In regression, the expected mean squared error of an estimator can be decomposed in terms of bias, variance and noise. On average over datasets of the regression problem, the bias term measures the average amount by which the predictions of the estimator differ from the predictions of the best possible estimator for the problem (i.e., the Bayes model). The variance term measures the variability of the predictions of the estimator when fit over different random instances LS of the problem. Finally, the noise measures the irreducible part of the error, which is due to the variability in the data.

The upper left figure illustrates the predictions (in dark red) of a single decision tree trained over a random dataset LS (the blue dots) of a toy 1d regression problem. It also illustrates the predictions (in light red) of other single decision trees trained over other randomly drawn instances LS of the same problem.
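The decomposition described above can be estimated empirically by drawing many training sets LS, fitting an estimator on each, and comparing the spread of its predictions on a fixed test grid. The sketch below is a minimal, simplified stand-in for the full gallery example: the toy target function `f`, the noise level, and the number of repetitions are illustrative assumptions, not the exact values used in the original script.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor

def f(x):
    # A smooth toy 1d target (an assumed stand-in for the example's function).
    return np.exp(-x ** 2) + 1.5 * np.exp(-((x - 2) ** 2))

def generate(n_samples, rng):
    # Draw one random training set LS: inputs on [-5, 5], noisy targets.
    x = rng.uniform(-5, 5, n_samples)
    y = f(x) + rng.normal(0.0, 0.1, n_samples)
    return x.reshape(-1, 1), y

rng = np.random.RandomState(0)
x_test = np.linspace(-5, 5, 200).reshape(-1, 1)
y_true = f(x_test.ravel())

n_repeat = 50  # number of training sets LS used to estimate the decomposition
results = {}
for name, make_estimator in [
    ("tree", lambda: DecisionTreeRegressor()),
    ("bagging", lambda: BaggingRegressor(DecisionTreeRegressor())),
]:
    preds = np.empty((n_repeat, len(x_test)))
    for i in range(n_repeat):
        X, y = generate(50, rng)
        preds[i] = make_estimator().fit(X, y).predict(x_test)
    # bias^2: squared gap between the average prediction and the true function;
    # variance: spread of the predictions across the different sets LS.
    bias2 = ((preds.mean(axis=0) - y_true) ** 2).mean()
    variance = preds.var(axis=0).mean()
    results[name] = (bias2, variance)
    print(f"{name}: bias^2={bias2:.4f}  variance={variance:.4f}")
```

Averaging many trees fit on bootstrap resamples is exactly how bagging trades a small increase in bias for a large reduction in variance, so the printed variance for the bagging ensemble should come out well below that of the single tree.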