Maker Pro

Help comparing two FFT approaches - UPDATE


RobertMacy

Thank you EVERYBODY for replying. Gave me food for thought and confirmed
earlier efforts.

Finally, I found absolutely NO difference between the results from one very
long packet and from averaging together the SAME information processed in
smaller packets!

What had prevented earlier confirmation of that fact was that I made a
mistake implementing the technique, which led me to observe a 'slight'
difference. Sorry about all the brouhaha.

Now, in an ACTUAL implementation with ACTUAL signals, the difference has
dropped to nearly numerical accuracy. We're talking less than 0.001 ppm
difference! Which is good, because that is about what the difference should be.

However, from the excellent responses I learned some new techniques AND got
a suggestion to look at the data and make certain it is contiguous! Small
dropouts and gaps would cause some horrific effects.
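
On the contiguity point, here is a minimal sketch of a gap check, assuming
the capture provides per-sample timestamps (the names and the tolerance are
illustrative, not from this thread):

import numpy as np

def find_gaps(timestamps, fs, tol=0.5):
    # Flag consecutive samples spaced more than (1 + tol) sample
    # periods apart, i.e. dropped samples.
    dt = np.diff(timestamps)
    gap_idx = np.where(dt > (1.0 + tol) / fs)[0]
    return gap_idx, dt[gap_idx]

# Example: a 1 kHz capture with three samples dropped at index 500.
fs = 1000.0
t = np.arange(2000) / fs
t[500:] += 3.0 / fs                 # simulate the gap
print(find_gaps(t, fs))             # -> (array([499]), array([0.004]))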


One idea came to mind during all this:

Is there any advantage to 'slipping' the packets?

An example: assume you have two 1000-sample packets containing known
signals you're looking for, buried in white noise.
You can do two FFTs and average, or do one long FFT, and get identical
results. Or:
do an FFT on a 1000-sample window, slip one sample, do an FFT on the next
1000-sample window, slip another sample, etc., and end up averaging 1000
FFTs in a special way.
Would that yield any improvement? Thoughts? Anybody tried that?

The idea is that coherent signals keep adding their energy, but the white
noise could be destroyed by its own randomness.

Or is it just a case of an averaging process reaching its limits: with two
times 1000 sample points, you can't get better than that?
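
For what it's worth, the question is easy to poke at numerically. A minimal
sketch, assuming a weak tone in white noise and that 'averaging' means
averaging power spectra (all parameters illustrative); in this experiment,
with a rectangular window, the slipped average comes out essentially no
better:

import numpy as np

rng = np.random.default_rng(0)
N = 1000
n = np.arange(2 * N)
x = 0.1 * np.sin(2 * np.pi * 100 * n / N) + rng.standard_normal(2 * N)

# (a) two non-overlapping 1000-point power spectra, averaged
P_two = (np.abs(np.fft.rfft(x[:N]))**2 +
         np.abs(np.fft.rfft(x[N:]))**2) / 2

# (b) 1000 power spectra, each slipped by one sample, averaged
P_slip = np.zeros(N // 2 + 1)
for s in range(N):
    P_slip += np.abs(np.fft.rfft(x[s:s + N]))**2
P_slip /= N

# Scatter of the noise floor away from the tone bin (bin 100)
quiet = slice(200, 400)
print(np.std(P_two[quiet]), np.std(P_slip[quiet]))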
 

RobertMacy

Thank you EVERYBODY for replying.

...snip....

Is there any advantage to 'slipping' the packets?

...snip....

Further note: shifting by one sample at a time and then averaging yields
EXACTLY the same result as doing one pass, sigh.
 

Maynard A. Philbrook Jr.

Google "hopping transform" and "sliding transform".

Each FFT bin is _exactly_ equivalent to a filter centred on some
multiple of 1/T Hz, whose transfer function is the Fourier transform of
the window function used. (If you don't intentionally window, you get
the sinc function corresponding to a rectangle function of length T.)

If you slide the transform over by one sample per time, you're sampling
each filter at the full rate. That's a waste, because its bandwidth is
1000 times narrower than the original signal's. So in practice, you hop
by, say, 1/4 of the transform length, leading to an oversampling by a
factor of 4. (This assumes that you're using a good window function, so
that the equivalent channel filter is adequately sampled at that rate.
Rectangles aren't too friendly for that.)
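
To make the hop concrete, here is a minimal sketch of hopped, windowed
power-spectrum averaging in the spirit described above, with a Hann window
and a hop of 1/4 the transform length (plain numpy; parameters illustrative):

import numpy as np

def hopped_psd(x, nfft=1000):
    hop = nfft // 4                  # hop by 1/4 transform length
    w = np.hanning(nfft)             # a well-behaved window
    starts = range(0, len(x) - nfft + 1, hop)
    P = np.zeros(nfft // 2 + 1)
    for s in starts:
        P += np.abs(np.fft.rfft(w * x[s:s + nfft]))**2
    return P / len(starts)

rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.1 * np.arange(8000)) + rng.standard_normal(8000)
print(hopped_psd(x).argmax())        # tone lands near bin 100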

Cheers
Here is an image (20 meters), decoded via 8-bit, 11k-sample-rate zero
crossing (no DFT/FFT at all):
http://webpages.charter.net/jamie_5/1.jpg

Here:
http://webpages.charter.net/jamie_5/2.jpg
This one was 22k, 16-bit, using an FFT.

A problem: with the FFT/IFFT math I was using, higher sampling rates
inserted artifacts that didn't seem to be related to Nyquist issues, at
least in all the tests I ran. Low sampling rates performed much better and
also made the code much quicker to process.

The only thing I could come up with was the problem of using complex
numbers, where things started to drop off into the vapor. Maybe I should
revisit that some day.

I used buffer chunks and overlapped them.
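
For concreteness, overlap-add is one standard way to combine overlapped
FFT chunks; a minimal sketch in plain numpy (assuming block convolution;
the details here are illustrative, not necessarily what was done above):

import numpy as np

def overlap_add(x, h, block=1024):
    # Filter long signal x with FIR h, one block at a time.
    nfft = block + len(h) - 1
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + len(h) - 1)
    for s in range(0, len(x), block):
        yseg = np.fft.irfft(np.fft.rfft(x[s:s + block], nfft) * H, nfft)
        m = min(nfft, len(y) - s)
        y[s:s + m] += yseg[:m]       # overlapping tails add up
    return y

# Quick check against direct convolution
rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
h = np.hanning(64)
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True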

Jamie
 