##### When you perform a regression, there are three statistics that the calculator might display in order to give you an idea of how well your regression model fits the data provided:

- The Pearson correlation coefficient (\(r\)). This measures the strength of the linear correlation between two data sets, so Desmos will display it in the special case that your model is linear and contains both slope and intercept parameters (e.g., \(y_1 \sim mx_1 + b\)).
- The coefficient of determination (\(R^{2}\)). This measures your regression model’s “goodness of fit.” Roughly speaking, \(R^{2}\) tells you what fraction of the variance in the dependent variable is explained by the model. Linearity is irrelevant for this measure, so Desmos will show \(R^{2}\) values for more general models of the form \(y_1 \sim f(x_1, x_2, \ldots)\).
- The root-mean-square error (RMSE). This is the most general way to quantify how well a model predicts your observed data, because RMSE can be calculated even when there is no principled way to distinguish between the dependent and independent variables. It is simply the square root of the average squared error. Desmos will show RMSE for any model of the form \(f(y_1) \sim g(x_1)\) or \(h(x_1, y_1) \sim 0\).
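To make the three measures concrete, here is a sketch of how each could be computed by hand for a linear fit \(y \sim mx + b\). The data values are made up for illustration, and the formulas are the standard textbook definitions rather than Desmos's internal implementation:

```python
import math

# Hypothetical data (not from the article): observed x and y values.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
mx = sum(x) / n
my = sum(y) / n

# Pearson correlation coefficient r: strength of linear correlation.
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
r = sxy / math.sqrt(sxx * syy)

# Least-squares fit y ≈ m*x + b.
m = sxy / sxx
b = my - m * mx
pred = [m * xi + b for xi in x]

# Coefficient of determination R²: fraction of variance explained,
# computed from the residual and total sums of squares.
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
r_squared = 1 - ss_res / syy

# RMSE: square root of the average squared error.
rmse = math.sqrt(ss_res / n)

# For a linear model with an intercept, R² coincides with r²,
# which is why r is only shown in that special case.
```

For this linear-with-intercept case, `r_squared` and `r ** 2` come out equal, which is exactly why \(r\) carries no extra information beyond \(R^{2}\) there.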

Note that the measures are listed in increasing order of generality, and Desmos will display the most specific measure possible: \(r\) in the special case of a linear model with intercept, \(R^{2}\) for a nonlinear model where the independent and dependent variables are clearly defined, and RMSE otherwise.

There is one case where Desmos will display both \(r\) and \(R^{2}\), and that is when you have a nonlinear model that turns out to have \(R^{2}\) exactly equal to \(r^{2}\). In other words, if your nonlinear model fits the data precisely as well as a linear model would, we will report the linear correlation as a convenience.
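The claim above that RMSE applies even when no variable is singled out as dependent can be illustrated with an implicit model of the form \(h(x_1, y_1) \sim 0\), such as a circle \(x_1^2 + y_1^2 - R^2 \sim 0\). The points and fitting steps below are a hypothetical hand calculation, not Desmos's internal method:

```python
import math

# Hypothetical points roughly on a unit circle.
pts = [(1.0, 0.1), (0.0, 1.05), (-0.95, 0.0), (0.0, -1.0)]

# Model: h(x, y) = x^2 + y^2 - R2 ~ 0, with one parameter R2.
# Least squares minimizes sum(h^2), which for this model makes
# R2 the mean of x^2 + y^2 over the data.
vals = [xi * xi + yi * yi for xi, yi in pts]
R2 = sum(vals) / len(vals)

# RMSE needs no designated dependent variable: it is simply the
# square root of the average squared value of h at the data points.
rmse = math.sqrt(sum((v - R2) ** 2 for v in vals) / len(vals))
```

Here neither \(x_1\) nor \(y_1\) plays the role of the dependent variable, so neither \(r\) nor \(R^{2}\) is defined, but the RMSE still measures how far the points sit from the fitted circle.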