RobertMacy
Thank you EVERYBODY for replying. Gave me food for thought and confirmed
earlier efforts.
Finally, I found absolutely NO difference between the results obtained from
doing one very long packet and those from averaging together the SAME
information in smaller packets!
What had prevented earlier confirmation of that fact was... I made a
mistake implementing the technique, which led me to observe a 'slight'
difference. Sorry about all the brouhaha.
Now, in an ACTUAL implementation with ACTUAL signals, the difference has
dropped to the level of numerical accuracy. We're talking less than 0.001 ppm
difference! Which is good, because that's about all the difference there
should be.
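
(Aside, in case it helps anyone checking the same thing: I assume the 'no
difference' result is just the identity that coherently averaging the complex
FFTs of M contiguous, non-overlapping length-N packets reproduces the
length-M*N FFT at every M-th bin, apart from a factor of M. A minimal numpy
sketch of that identity, with the lengths and test data made up for
illustration:

import numpy as np

rng = np.random.default_rng(1)
N, M = 1000, 4                                  # packet length, packet count (assumed)
x = rng.standard_normal(M * N)                  # any contiguous record will do

# average of the complex FFTs of the M contiguous packets
seg_avg = np.mean([np.fft.fft(x[m*N:(m+1)*N]) for m in range(M)], axis=0)

# one long FFT of the whole record, looked at on every M-th bin
long_fft = np.fft.fft(x)
print(np.max(np.abs(seg_avg - long_fft[::M] / M)))  # only double-precision round-off

Whether your own 'averaging' matches this depends on whether you average the
complex spectra or the magnitudes, so treat it as a sketch, not a statement
about any particular setup.)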
However, from the excellent responses I learned some new techniques AND got a
suggestion to look at the data and make certain it is contiguous! Small
missing bits and gaps would cause some horrific effects.
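
(If your samples carry timestamps or indices, here's a tiny sketch of that
contiguity check; the names and tolerance are just illustrative:

import numpy as np

def find_gaps(t, fs, tol=0.01):
    # flag spacings that deviate from the nominal sample period 1/fs
    # by more than a fractional tolerance tol
    dt = np.diff(np.asarray(t, dtype=float))
    return np.flatnonzero(np.abs(dt * fs - 1.0) > tol)  # empty => contiguous

e.g. find_gaps(timestamps, fs=48000.0) returns the sample index just before
each gap.)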
One idea came to mind during all this:
Is there any advantage to 'slipping' the packets?
An example: assume you have two 1000-sample packets containing known signals
you're looking for, buried in white noise.
You can do two FFTs and average them, or do one long FFT, and get identical
results. Or:
do an FFT on the first 1000 samples, slip one sample, do an FFT on the next
1000-sample window, slip another sample, and so on, ending up averaging,
in some special way, 1000 FFTs.
Would that yield any improvement? Thoughts? Anybody tried that?
The idea is that coherent signals keep adding their energy, but the white
noise could be destroyed by its own randomness.
Or is it just a case of the averaging process reaching its limits: with
two times 1000 sample points, you can't do any better than that?
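
In case anyone wants to play with the 'slipping' idea numerically, here is a
rough sketch of the experiment I have in mind: two packets' worth of a known
tone in white noise, with (a) the two non-overlapping packet spectra averaged,
versus (b) a length-1000 window slipped one sample at a time and all of those
spectra averaged. The tone frequency, amplitude, and the use of power-spectrum
averaging are just illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N = 1000                              # packet length
t = np.arange(2 * N)                  # two packets' worth of data
x = 0.1 * np.sin(2 * np.pi * 0.123 * t) + rng.standard_normal(2 * N)

# (a) average the power spectra of the two non-overlapping packets
P2 = np.mean([np.abs(np.fft.rfft(x[m*N:(m+1)*N]))**2 for m in range(2)], axis=0)

# (b) slip one sample at a time: 1001 windows of length N, all averaged
win = np.lib.stride_tricks.sliding_window_view(x, N)
Pslip = np.mean(np.abs(np.fft.rfft(win, axis=1))**2, axis=0)

# compare how ragged the noise floor is, away from the known tone
k0 = round(0.123 * N)
noise = np.ones(P2.size, dtype=bool)
noise[k0-5:k0+6] = False              # exclude bins around the tone
noise[:3] = False                     # and the DC region
print("two packets, no overlap :", P2[noise].std() / P2[noise].mean())
print("slipped one sample/step :", Pslip[noise].std() / Pslip[noise].mean())

The sliding_window_view call only creates strided views, so the 1001 windows
don't actually copy the data.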