I don't mean to be critical, but I think that getting CW decoders to decode poorly sent CW is probably a more important problem to tackle than getting them to decode well-sent CW under noisy conditions. I've used Fldigi's CW decoder on several occasions, and it fails to do an adequate job on all but near-perfect code.
How to handle noisy conditions is one of the issues that I would like to solve. Did you look at the CER/SNR study I did a few days ago? http://ag1le.blogspot.com/2013/01/morse-decoder-snr-vs-cer-testing.html
You can see clearly how important filter bandwidth is to CER (character error rate). I did some work in May/June 2012 to implement a prototype matched filter for FLDIGI, which Dave W1HKJ rewrote using faster FFT/FIR-based methods. He also added the capability to adjust the FFT filter bandwidth with automatic speed tracking. I should probably run an earlier version of FLDIGI as a comparison to see how much these features have improved decoding.
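To make the matched-filter idea concrete, here is a minimal sketch (illustrative only, not the FLDIGI implementation; the sample rate, tone frequency, and WPM are assumed values): correlate the audio with a tone template one dit long, whose length sets the effective bandwidth.

    # Sketch only: correlate audio with a complex tone template one dit
    # long; the magnitude gives a smoothed key-down envelope.
    import numpy as np

    fs, tone, wpm = 8000, 600.0, 20           # assumptions for the example
    dit = int(fs * 1.2 / wpm)                 # samples per dit (1.2/WPM s)
    t = np.arange(dit) / fs
    template = np.exp(2j * np.pi * tone * t)  # one-dit complex tone burst

    def matched_filter(audio):
        # Correlating with the template acts as an FIR filter whose
        # bandwidth is roughly 1/dit_length; this is why the optimum
        # bandwidth has to track the sending speed.
        return np.abs(np.convolve(audio, np.conj(template[::-1]), mode='same'))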
However, I would agree with you that decoding poorly sent CW is another very important topic. Different non-standard rhythms and timings get the FLDIGI decoder confused. Last summer I also worked on implementing a SOM decoder feature based on the self-organizing map neural network algorithm. Because it still relies heavily on the existing FLDIGI finite state machine that is the heart of the decoder, the improvement SOM brings is only 5-10% in CER, depending on the SNR.
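For illustration, here is a toy one-dimensional SOM over mark durations (a sketch of the idea, not the actual FLDIGI SOM code; the initial node values in milliseconds are assumptions):

    # Sketch only: a tiny 1-D SOM whose nodes adapt to the sender's
    # actual dit/dah durations instead of assuming a perfect 1:3 ratio.
    import numpy as np

    nodes = np.array([40.0, 120.0])               # dit and dah guesses, ms

    def classify_mark(duration_ms, lr=0.1, sigma=0.5):
        # Find the best-matching node, then pull it (and its neighbor,
        # weighted by map distance) toward the observed duration. A full
        # SOM would also decay lr and sigma as training progresses.
        bmu = int(np.argmin(np.abs(nodes - duration_ms)))
        for i in range(len(nodes)):
            h = np.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
            nodes[i] += lr * h * (duration_ms - nodes[i])
        return 'dit' if bmu == 0 else 'dah'

    # Feeding in sloppy timing (dits of 55-75 ms, dahs of 150-200 ms)
    # lets the map drift toward the operator's own element lengths.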
The Bayesian framework is a totally new approach to this problem: we use probabilities instead of hard thresholds and limits, and pass those probabilities up the detection chain. I am still learning as I go here, but this looks like a promising area of research and experimentation. CW Skimmer uses a Bayesian decoder, and it does a pretty good job in pile-ups and contest situations.
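A minimal sketch of what "probabilities instead of thresholds" means for a single mark (my illustration with assumed Gaussian duration models, not CW Skimmer's actual algorithm):

    # Sketch only: posterior probability that a mark is a dah, with
    # assumed Gaussian duration models; a real decoder would estimate
    # these parameters online from the received signal.
    import math

    def gauss(x, mean, sd):
        return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    def p_dah(dur_ms, dit=60.0, dah=180.0, sd=25.0, prior=0.5):
        l_dit = gauss(dur_ms, dit, sd) * (1.0 - prior)   # P(dur|dit) P(dit)
        l_dah = gauss(dur_ms, dah, sd) * prior           # P(dur|dah) P(dah)
        return l_dah / (l_dit + l_dah)                   # Bayes' rule

    # p_dah(60) ~ 0.0, p_dah(120) = 0.5, p_dah(110) ~ 0.13: an ambiguous
    # element stays a probability that later stages (character and word
    # priors) can resolve, instead of being forced to a hard decision.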
Any suggestions on how I could improve the FLDIGI decoder further? What kind of metric should be used as the standard to compare against?
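One candidate I have been using is CER itself, defined as edit distance against the known reference text and normalized by its length; it is simple and comparable across decoders. A sketch of that definition (a common formulation, not an existing FLDIGI utility):

    # Sketch only: CER as Levenshtein edit distance between decoded and
    # reference text, divided by the reference length.
    def cer(reference, decoded):
        m, n = len(reference), len(decoded)
        prev = list(range(n + 1))
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cost = 0 if reference[i - 1] == decoded[j - 1] else 1
                cur[j] = min(prev[j] + 1,         # deletion
                             cur[j - 1] + 1,      # insertion
                             prev[j - 1] + cost)  # substitution
            prev = cur
        return prev[n] / max(m, 1)

    # cer("CQ DE AG1LE", "CQ DE AG1LE") -> 0.0
    # cer("CQ DE AG1LE", "CO DE AG1LE") -> 1/11 (about 0.09)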