'Singular Gradient Matrix' Errors and Nonlinear Regression in Axum.

Last Modified: 3rd Jan 2013
Category: Mathematics and Simulation > Axum
Platform: All
Version: 7
Article Ref.: 12246

I cannot get the Nonlinear Regression to work correctly. I keep getting error messages like 'Singular Gradient Matrix'.

Example of a "singular gradient matrix" error:

Problem in nls(EB ~ E - (a * T^2)/(1 + b * T), d..: singular gradient matrix
Use traceback() to see the call stack

The message

'Error in nls(..): Singular gradient matrix'

is returned by the 'Statistics -> Nonlinear -> Regression' dialog box, and by the nls() function, when the gradient (Jacobian) matrix of the problem is singular or nearly singular (non-invertible). This error message is a side effect of the underlying nonlinear least-squares algorithm, and whether it occurs usually depends on your chosen starting values.

A discussion of the theory behind the "singular gradient matrix" error message follows below. The Nonlinear Regression dialog relies on the function nls() for its main algorithm, and it is nls() that the discussion refers to. As explained below, the "singular gradient" error can usually be eliminated by choosing different initial values for the model parameters. If you have already tried this, it may be that your nonlinear model is singular at many points surrounding the optimal parameter values. If your model is linear in some of its parameters, you might try ticking the 'Partial Linear Algorithm' box on the Options tab of the Nonlinear Regression dialog. You might also find the 'Graph -> 2D Plots -> Fit - Curvefit' plotting functions helpful for visualising the fit to your dataset.
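
For example, starting values can be supplied to nls() directly from the command line. The following is a minimal sketch for the model in the error message above, assuming a hypothetical data frame d with columns EB and T; the starting values shown are illustrative only and should come from inspecting your own data.

# Hypothetical starting values; adjust to suit your data.
fit <- nls(EB ~ E - (a * T^2)/(1 + b * T), data = d,
           start = list(E = 1, a = 0.01, b = 0.1))
summary(fit)

# If this fails with a singular gradient, try starting values of a
# different order of magnitude, e.g.
fit <- nls(EB ~ E - (a * T^2)/(1 + b * T), data = d,
           start = list(E = 10, a = 0.001, b = 1))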

The "singular gradient matrix" message in nls()

nls() uses a method known as the Gauss-Newton method to minimise sums of squares of nonlinear functions. In the case of nls(), the functions are the residuals of a nonlinear model, evaluated at the current parameter values for a given response vector.

If r(x) is the vector of residuals (response - fitted values), then at each iteration the Gauss-Newton method moves from the current parameter values, x, along a direction d that is the solution to

(1) J(x)^T J(x) d = - J(x)^T r(x) (I'm using ^T to denote the transpose)

where J(x) is the Jacobian matrix of r(x). The next iterate is then x + a d, for some positive scalar a. The right-hand side of (1) is the negative gradient of the sum of squared residuals (up to a factor of two), and if Newton's method were being used, then the next iterate would be in a direction d that solves

(2) H(x) d = - J(x)^T r(x)

where H(x) is the Hessian matrix of the sum of squares of the residuals r(x). H(x) has the form J(x)^T J(x) + B(x), where B(x) vanishes at points where r(x) = 0, so what Gauss-Newton is doing is using J(x)^T J(x) to approximate the Hessian matrix of the sum of squares.

Now (1) is just the normal equations for the least-squares problem

(3) J(x) d = -r(x)

and least-squares techniques are typically used to obtain d for reasons of numerical stability.
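
To make the algebra concrete, here is a sketch of a single Gauss-Newton step for the example model EB ~ E - (a * T^2)/(1 + b * T). The data frame d, the helper functions and the starting values are all hypothetical; the step solves (3) as a linear least-squares problem via lsfit() rather than forming J(x)^T J(x) explicitly.

# Residuals r(x) and their Jacobian J(x) for the example model,
# with parameter vector x = (E, a, b) and hypothetical data frame d.
resid.fn <- function(x)
    d$EB - (x[1] - (x[2] * d$T^2)/(1 + x[3] * d$T))

jac.fn <- function(x)
    cbind(-1,                                   # dr/dE
          d$T^2/(1 + x[3] * d$T),               # dr/da
          -(x[2] * d$T^3)/(1 + x[3] * d$T)^2)   # dr/db

gauss.newton.step <- function(x)
{
    r <- resid.fn(x)
    J <- jac.fn(x)
    d.step <- lsfit(J, -r, intercept = FALSE)$coef  # solves (3): J d = -r
    x + d.step   # full step; in practice scaled by some positive a
}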

Assuming there are more residuals than parameters, the Gauss-Newton method is guaranteed to converge to a local minimum of the sum of squares provided J(x) has full rank at each iterate. However, when J(x) is rank-deficient or nearly rank-deficient, the solution to (1) (and to (3)) is no longer unique. You can still solve for the minimum-norm solution, which is unique, but doing so can cause computational problems because it involves determining the numerical rank. Moreover, even if you were able to compute the exact minimum-norm solution, the Gauss-Newton method would not be guaranteed to converge to a local minimum.
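
One way to check whether this is happening (a hedged diagnostic, reusing the hypothetical jac.fn() sketched above) is to examine the singular values of the Jacobian at your starting values; a smallest-to-largest ratio close to machine precision indicates that J(x) is nearly rank-deficient there.

sv <- svd(jac.fn(c(1, 0.01, 0.1)))$d   # singular values of J(x)
sv[length(sv)]/sv[1]                   # near zero => nearly rank-deficient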

What nls() means when it gives you the message "singular gradient matrix" is that the Jacobian matrix J(x) is nearly rank-deficient. (I don't know why they call it a "gradient" matrix; as far as I know that terminology is not widely used.) When this happens, nls() simply gives up, and in Statistical Models in S the advice is to try different initial estimates.

There are nonlinear least-squares problems with well-defined solutions that have rank-deficient or nearly rank-deficient Jacobians at many points. In fact, this situation occurs not infrequently at the solution itself, in which case a different starting estimate will not help you with nls(). However, there are more robust nonlinear least-squares methods that routinely handle rank-deficient or nearly rank-deficient Jacobians. In S-PLUS, one such method is used by the function nlregb(). You can also solve the problem as an unconstrained minimisation (in S-PLUS, using ms() or nlminb()).
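
As a sketch of the unconstrained-minimisation route, for the same hypothetical model and data frame d, you can hand nlminb() the sum of squared residuals directly. The starting values are again illustrative; the fitted parameter vector is returned in the result (named 'parameters' in S-PLUS, 'par' in R).

ssq <- function(p)   # sum of squared residuals as a scalar objective
    sum((d$EB - (p[1] - (p[2] * d$T^2)/(1 + p[3] * d$T)))^2)

out <- nlminb(start = c(1, 0.01, 0.1), objective = ssq)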

 
 