Wednesday, June 25, 2008


Over the last few days I may have given the impression that line-fitting is much more time-consuming, difficult and error-prone than integration. That is true, not because line-fitting is complicated in itself, but because integration is simpler. There are, however, notable exceptions. The last published tutorial illustrates such a case. Here is the spectrum to "integrate":
It's the kinetic study of a chemical reaction. Not only do the concentrations change, but the frequencies as well. That is not a problem at all. If you take a look at the article, you will see that it is possible to measure the concentrations of all the diagnostic peaks in all the spectra with a single operation. In other words, the whole job, including the formatted table of the estimated areas, can be done by the computer automatically.
In this case, they exploit the fact that line-fitting yields not only the intensities but the frequencies too. In this way it is easy for the computer to drift along the ppm scale without error. Actually, a few errors are unavoidable when the computer attempts to measure the concentration of the product at the beginning of the reaction. Examining the bottom spectrum visually, for example at 5 and 11 ppm, I can't even tell where the peak is. In such a case I think that deconvolution is too risky to rely upon. It is better to integrate a frequency region large enough to contain the peak, even without knowing its exact position, then to measure an adjacent, transparent region of the same width and calculate the difference. The cumulative error can still be quite high.
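The "integrate a wide window, subtract an equally wide blank window" trick can be sketched in a few lines. This is a minimal illustration, not the software described in the post; the function name, the choice of a blank window immediately to the right, and the assumption of a uniformly spaced, ascending ppm axis are all mine.

```python
import numpy as np

def peak_area_with_blank(ppm, intensity, center, width):
    """Integrate a window wide enough to contain a peak of uncertain
    position, then subtract the integral of an adjacent, signal-free
    window of the same width to cancel the baseline contribution.
    Assumes a uniformly spaced, ascending ppm axis (an assumption of
    this sketch, not of the original post)."""
    dx = ppm[1] - ppm[0]
    # window that should contain the peak, even if its exact position is unknown
    peak_mask = (ppm > center - width / 2) & (ppm < center + width / 2)
    # adjacent "transparent" window of identical width, assumed empty of signal
    blank_mask = (ppm > center + width / 2) & (ppm < center + 3 * width / 2)
    peak_area = intensity[peak_mask].sum() * dx
    blank_area = intensity[blank_mask].sum() * dx
    return peak_area - blank_area
```

Because the two windows have the same width, a flat baseline (or constant offset) contributes equally to both integrals and cancels in the difference; only the peak survives. As the post notes, noise in both windows still accumulates, so the residual error can be high.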
Yesterday I dispensed a lot of strategies for line-fitting; today I am showing that you can do it blindly. Both things are true at the same time. In today's case, the operator first performed the process manually on one of the spectra, then wrote a macro to extend the treatment to the whole experiment. Apart from the very first spectra, where the product is invisible, the rest of the task is easy. There are mostly doublets or singlets. (Connecting to my Monday post: why use the term "deconvolution" in such a case, when the purpose is merely to remove the noise, not the broadening?)
Furthermore, there is so much information available (a whole matrix of correlated values) that you need not be obsessed by the fear of errors. If there is an error, it is easily spotted. The case is different when you have a single spectrum to process and you can't even put a name to the multiplet structure.
A proton spectrum, or the altimetric profile of a Dolomite stage at the Giro?

What I am going to say now is not specific to deconvolution, but applies equally well to integration. We are so used to equating the integral with the concentration that we are tempted to make the big mistake of comparing the integrals of two different spectra. Even when the sample tube has never left the probe and the acquisition parameters are exactly the same, it is not always safe to assume that the concentrations are proportional to the integrals. That is what we do to measure relaxation times, and there it's OK, because less accuracy is required in that case.

When studying a kinetics we can, and therefore should, use an immutable peak as a concentration standard. If no immutable peak is available, or even when it is, we can start from the integrals to measure the percentages of the two compounds (starting and final). In this case, if we assume that there is no degradation to a by-product, the total concentration is constant and the percentage values are proportional to the individual concentrations. In other words, the percentages are internally normalized and therefore comparable between different spectra. This represents no extra workload, because it is easily done inside Excel, and you are going to use an Excel-like program in any case.
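The internal normalization above amounts to one line of arithmetic. Here is a minimal sketch (the function name is mine, and the two-compound, no-by-product assumption is exactly the one stated in the text):

```python
def percentages(integral_start, integral_final):
    """Convert the raw integrals of the starting and final compounds
    into internally normalized percentages.  Because each spectrum is
    normalized to its own sum, the values are comparable between
    spectra even if the absolute integrals drift, provided no
    by-product forms (so the total concentration stays constant)."""
    total = integral_start + integral_final
    return 100.0 * integral_start / total, 100.0 * integral_final / total
```

Note that scaling both integrals by the same factor (instrumental drift between spectra) leaves the percentages unchanged, which is exactly why they can be compared across the kinetic series while the raw integrals cannot.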

