A matched filter has an impulse response that is the time-reversed copy of the expected signal, so the output, which is the convolution of the two, peaks when the signal is present, making it detectable even when buried in noise. The power distributed over time in the signal is collected into one moment at the output, which makes the signal-to-noise ratio optimal.
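To make this concrete, here is a minimal Octave sketch of the idea (the tone frequency, dit length, and noise level are arbitrary choices for the example): a single dit of tone is buried in white noise, and convolving with the time-reversed template produces a peak where the dit sits.

    % Minimal matched-filter sketch (illustrative parameters only).
    fs  = 8000;                        % sample rate, Hz
    f0  = 600;                         % CW tone frequency, Hz
    dit = 0.1;                         % dit length, s (~12 WPM)
    t   = (0:round(dit*fs)-1)/fs;
    template = sin(2*pi*f0*t);         % expected signal: one dit of tone

    x = randn(1, fs);                  % 1 s of white noise
    pos = 4000;                        % where the dit is hidden
    x(pos:pos+length(template)-1) += 0.3*template;

    y = conv(x, fliplr(template));     % matched filter = time-reversed template
    [~, k] = max(abs(y));
    printf("peak near sample %d (dit starts at %d)\n", k-length(template)+1, pos);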
The original version of the matched filter I created used convolution in the time domain: http://ag1le.blogspot.com/2012/05/fldigi-adding-matched-filter-feature-to.html
The problem with that approach was the decoding delay: for best results it took 2-3 seconds to buffer the audio and calculate the result. I also had problems stitching the audio segments together, despite some overlap in the buffering. But you are right - it was pretty amazing to see signals deeply buried in noise become visible!
Also, because the envelope of the filtered signals didn't look like a nice square wave of dits and dahs - more like noisy triangles - I started working on another project. I implemented an algorithm based on self-organizing maps to calculate the best match against known codebook patterns. This code is now used in FLDIGI (the "SOM" decoding feature).
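For readers curious what "best match against a codebook" means in practice, here is a much-simplified Octave sketch of the idea (this is not the FLDIGI SOM code; the patterns and distance measure are my own toy choices). Each character is stored as a vector of normalized mark durations (dit = 1, dah = 3), and the decoder picks the pattern nearest to a noisy measurement:

    % Toy codebook matching (illustrative, not the FLDIGI SOM implementation).
    keys = { [1 3], [3 1 1 1], [1 1 1], [3 3 3] };   % A, B, S, O as durations
    vals = { "A",   "B",       "S",     "O" };

    measured = [1.2 2.7];              % noisy durations, a smeared "A"
    best = Inf; decoded = "?";
    for i = 1:numel(keys)
      if numel(keys{i}) == numel(measured)
        d = norm(keys{i} - measured);  % distance to this codebook pattern
        if d < best
          best = d; decoded = vals{i};
        end
      end
    end
    printf("decoded: %s (distance %.2f)\n", decoded, best);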
In Morse code you have different speeds and different characters, so matched filtering would require a set of at least 40 filters for each expected speed.
That is what I thought, but I may well be wrong.
Dave W1HKJ took my matched filter code and re-implemented it in the frequency domain using FFT/FIR-based methods. The current version in FLDIGI does real-time filtering and, instead of using 40 different filters, automatically calculates the filter parameters when you change the station you are listening to. The software also tracks the Morse speed automatically and adjusts the filter bandwidth to match. If you have a chance to look at the source code, it is actually a pretty slick implementation.
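I cannot vouch for the exact formula FLDIGI uses, but the principle of tying filter bandwidth to speed is easy to sketch in Octave: with the standard PARIS timing a dit lasts 1.2/WPM seconds, so the keying rate, and hence a sensible filter bandwidth, scales with the speed.

    % Rough speed-to-bandwidth rule of thumb (my reading, not FLDIGI's code).
    for wpm = [15 25 40]
      dit_s = 1.2 / wpm;               % dit duration in seconds (PARIS timing)
      bw_hz = 1 / dit_s;               % bandwidth needed to pass the keying
      printf("%2d WPM: dit = %3.0f ms, bandwidth ~ %4.1f Hz\n", ...
             wpm, 1000*dit_s, bw_hz);
    end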
I suppose you generated your noise with a random number generator and added it to a sine-wave Morse signal in your WAV files.
It seems better to me to limit the bandwidth of the noise to 500 Hz in order to get a realistic comparison.
Thanks for this tip. I will look into how to add a lowpass filter to the noise generated by my Octave code.
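One way to do it, assuming the Octave signal package is installed, is a Butterworth lowpass applied to the generated noise (the sample rate, filter order, and cutoff below are just example values):

    % Band-limiting white noise in Octave (requires the signal package).
    pkg load signal
    fs = 8000;                         % sample rate, Hz
    n  = randn(1, 5*fs);               % 5 s of white noise
    [b, a] = butter(4, 500/(fs/2));    % 4th-order lowpass, 500 Hz cutoff
    n_lp = filter(b, a, n);            % noise now limited to ~500 Hz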
I have also used the PathSim program, which can adjust the noise bandwidth and also add different delay effects simulating ionospheric propagation.
White noise has power proportional to its bandwidth, so the noise power in 3 kHz is 20 dB more than in 30 Hz. But since it is interesting to compare with human copy by head, you have to keep in mind that the ear is less sensitive to high-frequency noise; in telecommunications this is handled for that reason with psophometric weighting. All of that becomes unnecessary when you limit the bandwidth of the noise, and hence its power, to a few hundred Hz.
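The 20 dB figure follows directly from the bandwidth ratio, since for white noise with flat density N0 the power in bandwidth B is P = N0*B:

    % Power ratio of 3 kHz noise vs 30 Hz noise, in dB.
    printf("%.1f dB\n", 10*log10(3000/30));   % prints 20.0 dB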
Thanks - the answer was so obvious that I am almost embarrassed. I am using computer-generated white noise, and I was aware that the human auditory system has different sensitivity to noise at different frequencies. A Google search for "psophometric weighting" turned up so many new sources that I will study this subject a bit more.
As you seem to have very good insight into these topics, I would like to ask the following:
Are there any other good metrics for comparing Morse decoder performance with human copy by head?
So far I have the following:
- CER - Character Error Rate (see the sketch after this list)
- SNR - Signal to Noise Ratio
- WPM - speed in words per minute
- Psophometric noise weighting - to adjust for the human auditory system
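For the CER, one common recipe is the Levenshtein edit distance between the sent and decoded text, divided by the length of the sent text; here is a sketch in Octave (the function name and test strings are mine):

    1;  % script marker so Octave accepts the function definition below

    function d = edit_distance(a, b)
      % Classic dynamic-programming edit distance between two strings.
      m = length(a); n = length(b);
      D = zeros(m+1, n+1);
      D(:,1) = 0:m;  D(1,:) = 0:n;
      for i = 2:m+1
        for j = 2:n+1
          cost = (a(i-1) != b(j-1));
          D(i,j) = min([D(i-1,j)+1, D(i,j-1)+1, D(i-1,j-1)+cost]);
        end
      end
      d = D(m+1, n+1);
    end

    sent    = "CQ CQ DE AG1LE";
    decoded = "CQ C0 DE AG1LE";        % one character decoded wrong
    cer = edit_distance(sent, decoded) / length(sent);
    printf("CER = %.3f\n", cer);       % 1 error / 14 chars ~ 0.071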
I am interested in how the brain detects and decodes Morse code, as well as in the learning process itself.
Having a set of consistent, quantifiable metrics would help - it looks like Fabian DJ1YFK has similar interests in this field.