Hmmm. Not sure just what happened. Can you see the headers of this news
post?
Yes, and the email address is exactly what I used. Here it is with the
address slightly munged.
<joseph_barrett at sbcglobal.com>:
Sorry, I wasn't able to establish an SMTP connection. (#4.4.1)
I'm not going to try again; this message has been in the queue too long.
Plus, after getting into my books again, I think I can explain just how to
do it without fussing with a spreadsheet.
The spreadsheet isn't important, but I don't have an image of what you
are talking about below.
It is a matter of weighting the coefficients (and using more cardinal
points).
For a section of table spanning 5 (or just 3) cardinal points, we take the
sub-interval centered on the 'nearest' cardinal point (a semi-open segment,
-2^n to +2^n-1, where 2*2^n-1 is the number of interpolated points). For
5 cardinal points this point gets the maximum weight, call it 1/2. The two
points to either side get weighted 1/4 each, and the two outermost
cardinal points get weighted 1/8 each. For three cardinal points, just
eliminate the outer two. Then your interpolator tool should do the rest.
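In case it helps, here is a minimal sketch of one way to read that weighting scheme: a weighted least-squares polynomial fit over the 5 cardinal points, with the 1/2, 1/4, 1/8 weights emphasizing the center point. The function name and the choice of a quadratic fit are my assumptions, not anything stated above.

```python
import numpy as np

# Weights from the scheme above: 1/2 for the nearest cardinal point,
# 1/4 for its two neighbors, 1/8 for the outermost pair.
# (The overall scale of the weights doesn't matter in a least-squares fit.)
WEIGHTS_5 = np.array([1/8, 1/4, 1/2, 1/4, 1/8])

def fit_segment(x, y, weights=WEIGHTS_5, degree=2):
    """Weighted least-squares polynomial fit over one table segment.

    Returns coefficients (highest power first), with the fit pulled
    toward the center cardinal point by the weights.
    """
    W = np.diag(weights)
    A = np.vander(x, degree + 1)          # Vandermonde design matrix
    # Solve the weighted normal equations (A^T W A) c = A^T W y.
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Example: 5 cardinal points taken from a quarter-sine curve.
x = np.linspace(0.0, 0.4, 5)             # cardinal-point positions
y = np.sin(x)                            # cardinal-point values
coeffs = fit_segment(x, y)

# Evaluate on the sub-interval centered on the middle cardinal point.
xc = np.linspace(x[1], x[3], 9)
approx = np.polyval(coeffs, xc)
```

The idea is just that the fitted segment hugs the curve best near the center point, where it will actually be used, at the cost of a little accuracy at the outer points.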
One of the things I have realized is that with the table size I am
using, the curve segments that I am interpolating become very good
matches to a straight line to within 18 bits. I have a 512 entry table
and interestingly enough, there is this give and take between the range
of the interpolation and the non-linearity of the curve at that point.
The part of the quarter sine table that has the highest step size is at
zero where the linearity is near dead on. The part of the table that
has the worst approximation to linear is at 90 degrees where the step
size is near zero. I think in an 18 bit table the step is just one
count! So there is not much improvement to be had over simple linear
interpolation. In fact, I can't say there is much of an improvement by
offsetting any of the end points.
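Those claims are easy to check numerically. Here is a quick sketch: a 512-entry quarter-sine table scaled to 18 bits (513 points counting the 90-degree endpoint, for interpolation convenience), with plain linear interpolation between adjacent entries. The names and the exact scaling convention are my assumptions.

```python
import math

N = 512                      # table entries covering 0..90 degrees
SCALE = (1 << 18) - 1        # 18-bit full scale

table = [round(math.sin((math.pi / 2) * i / N) * SCALE)
         for i in range(N + 1)]

# Step size at the two ends of the table: largest near 0 degrees,
# smallest near 90 degrees (where it really is just one count).
step_at_zero = table[1] - table[0]
step_at_90 = table[N] - table[N - 1]

# Worst-case linear-interpolation error, in table counts, sampled
# at several fractional positions inside each interval.
worst = 0.0
for i in range(N):
    for k in range(1, 8):
        f = k / 8
        x = (math.pi / 2) * (i + f) / N
        interp = table[i] + f * (table[i + 1] - table[i])
        worst = max(worst, abs(math.sin(x) * SCALE - interp))
```

Running this, the step at 90 degrees comes out to exactly one count, and the worst-case linear-interpolation error stays under one LSB of the 18-bit table, which is the "give and take" described above: big steps where the curve is nearly straight, tiny steps where the curvature is worst.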
If the word width increases from 18 bits, then perhaps there might be
some advantage to other methods. But even an 18 bit table gives a 19
bit result when the sine bit is added. That is way down there in the
noise really. I am looking at a software version and to keep from going
to double precision I might just drop the extra bit and limit the signed
result to 18 bits. This is in a custom processor in an FPGA. 18 bits
comes from the memory size in the block RAMs. We'll see. The
difference is only near the end of the code section, and *something* has
to be sent out to the CODEC even if the extra bits are just noise in the
DAC (24 bits, 90 dB SINAD). Why so many bits, you have to wonder...