## Friday, June 20, 2008

### Looking Back

Several methods have been proposed to calculate the LP coefficients; the fact that no clear winner has emerged demonstrates that it's a tricky task. My choice was casual. I remember being in the library (it was probably 1991), browsing the latest issue of a chemistry journal (I don't remember which), where I probably read the expression "Linear Prediction" for the first time in my life. What I remember with certainty are the names of the authors: Gesmar and Led. They were discussing the application of a novel algorithm ("Fast Linear Prediction"). Curious to see it in action, I wrote to the duo asking for the source code. They replied that the code was for sale, and the price turned out to be higher than my monthly salary. What they were selling, in practice, was the FORTRAN source code of a program written for the VAX. A program with a single routine. I had no FORTRAN compiler, no VAX machine and no idea of a practical application of LP that could justify the expense. Since the algorithm had actually been invented by someone else, a mathematician called George Cybenko, I wrote to him instead. Unfortunately, at the time he was recovering from a broken leg (skiing). When he returned to work, months later, he sent a fax. From that moment on he began collaborating with me and, fax after fax, helped me translate his algorithm into my language (in that period I had a Pascal compiler). I spent a lot of time first understanding the algorithm, then writing it, and finally optimizing it for speed on my computer. Enough reasons to remain affectionate and faithful to it for years; yet now I am almost convinced it represents a second choice.
Do you remember the synthetic spectrum I used to demonstrate the properties of zero-filling? It contains 10 exponentially decaying sinusoids, with identical line-widths, not-too-different intensities and spread-out frequencies. I used Cybenko's algorithm to calculate 10 prediction coefficients; then I told the computer to prolong the FID using those coefficients. The synthetic FID is perfect: the part to the right of the vertical red line has been predicted correctly. So far, so good. Yet, if I repeat the calculation with 11 coefficients instead of 10, it's a disaster:
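The forward-prediction step can be sketched as follows. This is not Cybenko's fast algorithm (nor my Pascal code), just a plain least-squares solution of the same prediction equations on a made-up noiseless FID; every name and numeric value here is illustrative.

```python
import numpy as np

# Synthetic FID: 10 exponentially decaying sinusoids (arbitrary example
# values), 512 points in total.
t = np.arange(512)
freqs = np.linspace(0.05, 0.45, 10)          # cycles/point, spread out
fid = sum(np.exp((2j * np.pi * f - 0.005) * t) for f in freqs)

# Forward LP: each point is a linear combination of the m preceding points.
# Build the overdetermined system A @ a = b from the first n points only.
n, m = 256, 10
A = np.array([fid[i:i + m] for i in range(n - m)])
b = fid[m:n]
a, *_ = np.linalg.lstsq(A, b, rcond=None)

# Prolong the truncated FID with the coefficients.
ext = list(fid[:n])
for _ in range(len(t) - n):
    ext.append(np.dot(a, ext[-m:]))
ext = np.array(ext)

# With exactly 10 coefficients and 10 noiseless signals, the predicted
# half matches the true FID to machine precision.
print(np.max(np.abs(ext - fid)))
```

With m equal to the number of signals the recursion is exact; the instability described next appears when m exceeds it.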

This time there is no need for the red line! What's the cause of the disaster? The lack of noise! When there is a minimal amount of noise, it neutralizes the excessive number of coefficients. When the number of coefficients grows further (e.g. > 40), the disaster is almost guaranteed and no amount of noise can prevent it. I avoid using this algorithm with more than 24 coefficients. The alternative is to use a generic, therefore slower, algorithm to solve the same system of equations. A popular choice, probably the most popular method of Linear Algebra, is the singular value decomposition. With SVD I get the same perfect results whether I calculate 10, 11 or even 100 coefficients. There's more: I also get the ordered list of the so-called singular values. For example, if I ask the computer to compute 20 LP coefficients, it can also give me the following singular values:
```
 1   227399.55548
 2   212850.09660
 3   208405.44529
 4   171681.01548
 5   154987.64513
 6   120658.53137
 7   104391.19112
 8    99881.44088
 9    83773.77349
10    64064.17610
11        0.00547
12        0.00540
13        0.00536
14        0.00535
15        0.00532
16        0.00527
17        0.00519
18        0.00512
19        0.00503
20        0.00500
```

I have not explained what a singular value is because I am ignorant on the subject. Even an ignorant person, however, understands that SVD offers the welcome bonus of counting the number of signals contained in an unknown FID. A useful option of SVD is that the computer, after detecting that the nature of the problem can be better described by a smaller number of values, can ignore the rest. This option is ON by default. I supposed that this self-regulation was the cause of the stability of SVD, so I verified what happens when the option is disabled. Surprise: nothing changes! (And nothing remains for me to speculate about.)
From this single, unrepresentative test, it's clear that SVD is far better, but I haven't the slightest idea why. (For completeness: SVD can cut down the number of singular values, but not the number of LP coefficients; this is something you need to do on your own).
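The same experiment can be sketched with numpy, whose SVD routines wrap LAPACK. This is only an illustration on a made-up FID, not my actual test; the threshold 1e-6 is an arbitrary choice for "negligible".

```python
import numpy as np

# Noiseless synthetic FID of 10 decaying sinusoids (example values).
t = np.arange(512)
freqs = np.linspace(0.05, 0.45, 10)
fid = sum(np.exp((2j * np.pi * f - 0.005) * t) for f in freqs)

# Ask for 20 forward-LP coefficients from the first 256 points.
m, n = 20, 256
A = np.array([fid[i:i + m] for i in range(n - m)])
b = fid[m:n]

# The singular values of the prediction matrix: only 10 are significant,
# matching the number of signals hidden in the FID.
s = np.linalg.svd(A, compute_uv=False)
print(int(np.sum(s > 1e-6 * s[0])))   # -> 10

# Solving through the SVD (np.linalg.lstsq does exactly this) stays
# stable even though 20 > 10; rcond discards the negligible values.
a, *_ = np.linalg.lstsq(A, b, rcond=1e-6)
```

Note that, as said above, the truncation acts on the singular values only: you still get all 20 LP coefficients back.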
There are also abundant methods to save the day when the coefficients don't work, and they don't depend on how you have extracted them. When I introduced my "theory" of LP, two days ago, I explained that the LP coefficients of the isolated signals (related to frequency and line-width) are different from the LP coefficients of the full spectrum, but can be recalculated from the latter. It's enough... (!) to solve a single equation of the nth degree (where n is the number of coefficients). Assuming that the task is simple, once you have the roots of the equation you can verify the only condition we know: the ratio α = I₂ / I₁ must be < 1 in the case of forward LP and > 1 in the case of backward LP. (Cf. my old post for the meaning of the symbols.)
Everybody agrees on this point. No theory, unfortunately, says what we should do when the condition is not verified (and it happens incredibly often!). In the case of forward LP it is possible to substitute α with its reciprocal. The consequence is that the FID decays even faster than it would naturally. The result is a hybrid between LP and zero-filling, and it can't be harmful. In the case of backward LP, instead, the same trick makes the FID grow (backward) more than it ought to, with invariably catastrophic results.
That's why I prefer forward LP.
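For an isolated signal the ratio α is just the magnitude of the corresponding root, so the reciprocal trick amounts to reflecting the offending roots inside the unit circle and rebuilding the coefficients. A minimal sketch for forward LP (the function name and example coefficients are hypothetical):

```python
import numpy as np

# Forward-LP coefficients a[0..m-1], in the convention
#   x[k] = a[0]*x[k-1] + a[1]*x[k-2] + ... + a[m-1]*x[k-m],
# so the characteristic polynomial is z^m - a[0]*z^(m-1) - ... - a[m-1].
def stabilize_forward(a):
    """Reflect every root with |z| > 1 inside the unit circle (the
    'reciprocal' trick), then rebuild the LP coefficients."""
    roots = np.roots(np.concatenate(([1.0], -np.asarray(a))))
    bad = np.abs(roots) > 1.0
    # z -> 1/conj(z) keeps the frequency but inverts the growth rate.
    roots[bad] = 1.0 / np.conj(roots[bad])
    p = np.poly(roots)          # back to polynomial coefficients
    return -p[1:]               # recover the LP coefficients

# Example: a double root at z = 1.1, i.e. a growing "signal".
a = [2.2, -1.21]
print(np.abs(np.roots([1.0, -2.2, 1.21])))   # both magnitudes 1.1
print(stabilize_forward(a))                  # roots now at 1/1.1
```

The reflected roots decay instead of growing, which is exactly the hybrid between LP and zero-filling described above.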

#### 4 Comments:

At 10:28 PM,  Kevembuangga said...

May I ask which implementation (software package) of SVD you are using?
I am looking for one, there are plenty, but they all seem to be nested/hosted within HUGE "Rube Goldberg" math libraries.

At 10:47 PM,  old swan said...

I am using CLAPACK. In detail, I call the routine zgesdd_. The choice of CLAPACK was natural, because it has been installed on my computer since it left the factory. I don't like CLAPACK: this is the only time I have used it, and it wasn't easy to understand how the routine was supposed to be called. Now, however, I am extremely satisfied with zgesdd and all is clear. My computer is an Apple iMac G5, still with the original OS (from 2005).

At 10:08 AM,  Kevembuangga said...

Thanks for your answer.
I had seen CLAPACK "en passant" I will have a closer look.
My search seems pretty hopeless anyway: I am looking for a "bare bones" implementation, so as not to drag along the weight of other irrelevant math subroutines.
From Google searches I have seen other people involved in, for instance, embedded systems, looking for the same stripped-down SVD to no avail, sigh...

At 12:47 PM,  old swan said...

I am convinced that some "light" SVD is available, maybe from the "Numerical Recipes" book. If you are interested in Linear Prediction, you can try the algorithm by Cybenko, which is quite short.