Nonlinear Regressions

Some regressions can be solved exactly. These are called "linear" regressions and include any regression that is linear in each of its unknown parameters.

Models that are “nonlinear” in at least one of their parameters can’t be solved using the same deterministic methods, so the calculator must rely on numerical techniques to approximate parameter values. These techniques are not guaranteed to succeed in every case, potentially leading to a poor overall fit or parameter values outside a range that makes sense for your model.

Therefore, you may sometimes get surprising results with nonlinear regressions (e.g. sinusoidal or exponential regressions). This article will define those terms and offer some suggestions.

Definitions and Challenges

All regressions in Desmos use the method of least squares. Given a model with free parameters, the calculator attempts to find the parameter values that yield the smallest total squared error. A model falls into one of two categories: linear or nonlinear.
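In symbols, if the model's prediction for the \(i\)-th data point \(\left(x_{i},y_{i}\right)\) is \(f\left(x_{i}\right)\), where \(f\) depends on the free parameters, the quantity being minimized is the sum of squared errors (SSE):

\[\mathrm{SSE}=\sum_{i}\left(y_{i}-f\left(x_{i}\right)\right)^{2}.\]

The best-fit parameters are the values that make this sum as small as possible.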

Linear:

  • Depends linearly on all its parameters (e.g. \(y_{1}\sim mx_{1}+b\)). Note that "linear" in this context refers only to the parameters. For example, \(y_{1}\sim ax_{1}^{2}+bx_{1}+c\) is also linear since it is a linear combination of \(a\), \(b\), and \(c\). That the model is quadratic in \(x_{1}\) is irrelevant.
  • Has a closed-form solution, which means the calculator can find optimal parameter values (up to the limits of floating-point precision) deterministically in a single step (see the sketch below this list).
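As a sketch of why a closed form exists (this is standard least-squares algebra, not a claim about how Desmos implements it internally): for \(y_{1}\sim ax_{1}^{2}+bx_{1}+c\), collect the rows \(\left(x_{i}^{2},\,x_{i},\,1\right)\) into a matrix \(X\) and the observed values into a vector \(y\). The best-fit parameters \(\beta=\left(a,b,c\right)\) then satisfy the normal equations

\[X^{\mathsf{T}}X\,\beta=X^{\mathsf{T}}y,\]

a single linear system that can be solved directly, with no iteration and no initial guessing.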

Nonlinear:

  • Depends nonlinearly on at least one of its free parameters. For example, there is no way to express \(y_{1}\sim ax_{1}^{b}\) as a linear combination of \(a\) and \(b\) because \(b\) appears in the exponent.
  • Can only be fit by evaluating a series of approximations, adjusting the parameter values at each step until the sum of squared errors (SSE) is as small as possible (one such iterative step is sketched below this list).
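As an illustration of the iterative idea (a generic scheme such as gradient descent, shown purely for intuition and not necessarily the exact method Desmos uses), write \(\theta\) for the vector of free parameters. Each step nudges the current estimate \(\theta_{k}\) in the direction that decreases the SSE defined above:

\[\theta_{k+1}=\theta_{k}-\gamma\,\nabla_{\theta}\,\mathrm{SSE}\left(\theta_{k}\right),\]

where \(\gamma\) is a small step size. The process repeats until further steps no longer reduce the error appreciably.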

Nonlinear regressions face some special challenges:

It's difficult to know when the SSE is actually as small as possible. As the calculator "walks" the parameter values toward smaller and smaller error, it might end up at a local minimum that is not the global minimum.
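For example (a typical illustration rather than a specific Desmos case), in a sinusoidal model such as \(y_{1}\sim\sin\left(bx_{1}\right)\), the error

\[\mathrm{SSE}\left(b\right)=\sum_{i}\left(y_{i}-\sin\left(bx_{i}\right)\right)^{2}\]

tends to rise and fall repeatedly as the frequency \(b\) varies, so a downhill search that starts at an unlucky frequency can settle into one of many shallow dips rather than the deepest one.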

Before it can even begin this iterative process, the calculator must generate a preliminary guess for each parameter. Because the final result can be sensitive to initial conditions, unfavorable guesses might lead to suboptimal solutions (local minima), slow convergence, or both.

The global minimum error might not be unique. Different combinations of parameter values might yield exactly the same, lowest possible SSE, with no principled reason to prefer one solution over another.
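A simple illustration (not tied to any particular data set): in the model \(y_{1}\sim a\cdot b\cdot x_{1}\), only the product \(ab\) affects the predictions, so

\[\left(a,b\right)=\left(2,3\right)\quad\text{and}\quad\left(a,b\right)=\left(6,1\right)\]

produce identical fits and identical SSE, and nothing in the data can distinguish between them.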

Improving Your Results

Desmos employs various strategies to mitigate these difficulties. The calculator makes several initial guesses for each parameter, refines them many times, and returns the best resulting fit. The calculator also uses some heuristics to help choose among several solutions when it encounters a best fit that is not unique. For instance, it biases trigonometric models toward low frequencies and ignores negative bases in exponential models.

However, these strategies are not infallible, and there are also some steps you can take yourself.

To increase your probability of finding a good fit:

Place restrictions on parameter values. If you have a model like \(y_{1}\sim ax_{1}^{b}\) and think the exponent should be between 1 and 5, you can write \(y_{1}\sim ax_{1}^{b}\left\{1 \lt b \lt 5\right\}\). That way, the calculator won't even attempt to evaluate parameter values outside your preferred range.

Measure your data in units that make the parameters not too big or too small. For parameters without user restrictions, the calculator makes initial guesses between \(-5000\) and \(5000\), with more values near \(0\) than far from it. If your parameters fall outside that range, the calculator's initial guesses may be poor.
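As a made-up illustration (the numbers here are hypothetical): in the model \(y_{1}\sim ax_{1}^{b}\), dividing every \(y_{1}\) value by \(1000\) (for example, recording the data in grams instead of milligrams) divides the best-fit \(a\) by \(1000\) while leaving \(b\) unchanged,

\[y_{1}\sim ax_{1}^{b}\quad\Longrightarrow\quad\frac{y_{1}}{1000}\sim\frac{a}{1000}\,x_{1}^{b},\]

so a coefficient that would have been about 250,000 becomes about 250, comfortably inside the range of initial guesses.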

Rewrite nonlinear models in linear form if you can. For instance, \(y_{1}\sim a\left(x_{1}-h\right)^{2}+k\) and \(y_{1}\sim ax_{1}^{2}+bx_{1}+c\) describe the same relationship, just parameterized in two different ways. The first model is nonlinear (the parameters \(a\) and \(h\) are multiplied together once the square is expanded), but the second one isn't.
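To see the correspondence, expand the vertex form:

\[a\left(x_{1}-h\right)^{2}+k=ax_{1}^{2}-2ahx_{1}+\left(ah^{2}+k\right),\]

so after fitting the linear version you can recover the vertex-form parameters as \(h=-\frac{b}{2a}\) and \(k=c-\frac{b^{2}}{4a}\).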

Even with the calculator and the user working together, nonlinear regressions simply aren't mathematically guaranteed to succeed in the same way as their linear counterparts. Because Desmos allows you to use any conceivable relation between lists of data as a regression model, you may encounter cases that fail to yield good results. If that happens, feel free to contact support@desmos.com so that we can continue to improve.


Learn More

Please write in with any questions or feedback to support@desmos.com.