Thursday, December 27, 2007

Wizardry

Simplicity and the management of my spectroscopic data. This is all I ask of NMR software. Increases in resolution and sensitivity are something I expect from advances in hardware. Progress in the software field requires little time: if something was possible, somebody has already done it; if nobody has done it, it means it was not possible.
Even an ignoramus can increase both resolution and sensitivity, simultaneously, thousands of times or more. It's enough to use a non-linear method. Suppose you have an ugly spectrum, like this:

I don't know what it is; it's the first spectrum I found today. Suppose, instead, that I know what it is. I may be certain that those are two triplets. I can perform peak-picking, retain the 6 highest points, then create a synthetic spectrum containing 6 perfect Lorentzian shapes, with line width = 0.1 Hz and noise = 0. Maximum resolution and maximum signal/noise.
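A minimal sketch of this wizardry in Python with NumPy (the spectrum and frequency arrays, the naive three-point peak picker, and the choice of exactly six lines are all assumptions for illustration):

```python
import numpy as np

def lorentzian(freq, center, width, amplitude):
    # Lorentzian lineshape; width is the full width at half height
    hwhm = width / 2.0
    return amplitude * hwhm**2 / ((freq - center)**2 + hwhm**2)

def fake_resolution(freq, spectrum, n_lines=6, width=0.1):
    # naive peak picking: a point higher than both neighbours is a maximum
    is_max = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
    candidates = np.where(is_max)[0] + 1
    # retain only the n_lines highest maxima
    top = candidates[np.argsort(spectrum[candidates])[-n_lines:]]
    # rebuild the spectrum as n_lines perfect Lorentzians, zero noise
    synthetic = np.zeros_like(spectrum, dtype=float)
    for i in top:
        synthetic += lorentzian(freq, freq[i], width, spectrum[i])
    return synthetic
```

The result looks spectacular precisely because everything that was not in the six retained points has been thrown away.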

Non-linear methods work like this: either they filter out all signals below a given threshold, or they select a given number of the highest signals, as in the sketch above. It's risky, unless you already know what to expect (if you know everything, however, there is no need to collect the spectrum). If you need elegant presentations, you can get them with software. Increased knowledge is a different thing.
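The threshold rule is equally terse; a sketch (the factor of five times the estimated noise level is an arbitrary assumption):

```python
import numpy as np

def filter_by_threshold(spectrum, noise_level, factor=5.0):
    # zero every point below factor * noise_level; whatever survives
    # is declared "signal", whether or not it really is
    return np.where(spectrum >= factor * noise_level, spectrum, 0.0)
```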
What does not work in general can often be the perfect solution in some special cases. At the beginning of the year I mentioned the Chenomx suite of programs. It requires reference deconvolution to be applied as a pre-treatment to every experimental spectrum. Reference Deconvolution is an ancient trick that never found widespread implementation; after many years it has found a reason to exist. Linear Prediction is a technique that emerged in the 80s. Its purpose was to extract the NMR parameters directly from the FID, avoiding both the FT and spectroscopic analysis. Nobody was using it. After a decade it found a different application: it allows shorter acquisition times for HSQC and similar experiments.
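For the curious, here is a bare-bones sketch of forward linear prediction used to extend a truncated FID (the model order and the plain least-squares fit are simplifications of my own; real implementations also stabilize the prediction polynomial):

```python
import numpy as np

def lp_extend(fid, n_extra, order=8):
    # model each point as a linear combination of the preceding
    # `order` points, fitted by least squares over the whole FID
    n = len(fid)
    A = np.array([fid[i:i + order] for i in range(n - order)])
    b = fid[order:n]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    # extrapolate n_extra new points with the fitted coefficients
    extended = list(fid)
    for _ in range(n_extra):
        extended.append(np.dot(coeffs, extended[-order:]))
    return np.array(extended)
```

Extending the truncated indirect dimension of an HSQC in this way is what buys the shorter acquisition times.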
Research in the field of NMR processing is precious, but I don't expect it to change the whole field. It can find specific solutions to specific problems.

Automatic and Manual

Automatic methods are good: they accomplish a task for you. Manual methods are twice as good: you have the job done _the_way_you_like_it_ and, at the same time, you are acquiring a skill.
If you have ever tried to shim a magnet, you know that manual adjustment can be uncomfortable. For example, shimming an acetone solution is not as simple as shimming a DMSO solution. Trimming Z1 alone is simple, but optimizing 10 gradients is time-consuming, to say the least. In summary, there are many reasons why manual methods can become impractical: long reaction times; mutual dependence of the parameters to be adjusted; extreme distance from the optimum; weak response.
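To see why mutually dependent parameters are the worst offenders, consider a toy sketch: a quality function of two coupled gradients, where the cross term means that trimming one shim moves the optimum of the other (the quadratic form is entirely invented for illustration; a simplex search handles the coupling in one pass):

```python
from scipy.optimize import minimize

def lineshape_quality(shims):
    # invented objective: the 1.5 * z1 * z2 cross term couples the two
    # gradients, which is what makes one-knob-at-a-time trimming so slow
    z1, z2 = shims
    return (z1 - 1.0)**2 + (z2 + 0.5)**2 + 1.5 * z1 * z2

result = minimize(lineshape_quality, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)  # the coupled optimum, found without manual iteration
```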
These reasons have largely been removed in the field of NMR processing. Twenty years ago, when there weren't so many automatic methods, manual processing was painful and time-consuming, but there was no alternative. Today's programs are so automatic that they don't even ask you if you want your spectrum processed. They do it and that's all. Vendors say they save your "precious time". How much time? Not more than a minute, I guess.
Manual methods remain more flexible and precise. Thanks to faster computers, (rare) faster software, improved graphics and peripherals, and increased knowledge, the reasons why manual methods used to be cumbersome and frustrating have been removed. Tomorrow's manual processing will have little in common with past experience... unless nobody cares. And it seems that nobody cares! Automatic methods have far more marketing potential, that's for sure.

Wednesday, December 26, 2007

Consequences

This year has been the year of NMR blogs. Today I am introducing the most recent ones.
University of Ottawa NMR Facility Blog
"A blog for the NMR users at the University of Ottawa" is written by Glenn Facey. It's the most frequently updated blog, at this writing moment. Despite the name, is useful to beginners involved with practical NMR spectroscopy, wherever they are based, not only at the University of Ottawa. Expert spectroscopist can as well find several useful posts. There are links to other NMR resources on the web, many clear pictures to make the concepts clear, processing tips and suggestions for useful pulse sequences.
The home page contains the customary exchange of courtesies with other NMR blogs. At the top of the list there is Carlos' NMR blog, which is very recent indeed (it didn't exist at the time of my last post).
Carlos has been a friend of mine since 1996 and was already active in the field of NMR software at that time. The title of his blog is "NMR Analysis (Processing - Prediction - Verification)". Doesn't it sound like a replica of my own blog? Wrong, it's a completely different thing! I'll explain why at the end.
Carlos' blog has already received its dose of celebration, justified by the fame and the competence of the author. The first 7 posts have been impeccably prepared; yet, IMHO, it's been a false start.
Carlos is always busy. Not only is he continuously inventing amazing new algorithms, he's also writing a commercial NMR program, is the president of a company, attends countless conferences and, like most of us, has a family and a couple of hobbies. Do you believe he started a blog to fight boredom? He felt the need to bring his latest inventions to the public's attention, but he can't afford to go into the details, not only because time is short, but also because he is reserving the details for the usual scientific articles. Over the years Carlos has always shared his inventions with me. I have found a few of them extremely useful and included them in my own programs. Not all of them, however, because he invents a lot of algorithms. Even when they are useless (IMHO), they are still a source of inspiration. Sometimes the inner workings of a method are the only thing that interests me, because the results are discouraging. Other times I am amazed by the results, while Carlos is frustrated because he has found a rare exception that makes the method inapplicable in one case out of a thousand.
Up to now the blog has proposed two novel methods to the general public, without disclosing their mathematical basis. I have already commented on that blog about why neither method can be included in any processing routine, so I will not repeat myself here. It would have been more useful and wise to present two old, established methods, possibly written by somebody else (the literature is already mostly written by people who praise themselves; why can't we have third-party criticism anywhere?), but Carlos' only incentive is to write about his own latest inventions. In the end, the new blog is similar to Ryan's blog (the difference being that Carlos is a programmer while Ryan is not). My impression is that Ryan started a blog because he was afraid of what Carlos was doing, and Carlos replied with another blog because he was worried by what Ryan was writing. I am certain they will say I am a fool to think such a thing. What's your opinion?
Ryan has clearly stated that his blog will deal exclusively with his own products. Carlos has promised more variety, but up to now has delighted us with pictures taken exclusively from MestreNova. Apart from my worthless impressions, I have a couple of opinions that set me apart from the other bloggers, and from the rest of the world too. First: we don't need automatic processing. Second: I am not expecting miracles from software. These will be the subjects of my next two posts.