Maker Pro

Sine Lookup Table with Linear Interpolation


rickman

I've been studying an approach to implementing a lookup table (LUT) to
implement a sine function. The two msbs of the phase define the
quadrant. I have decided that an 8 bit address for a single quadrant is
sufficient with an 18 bit output. Another 11 bits of phase will give me
sufficient resolution to interpolate the sin() to 18 bits.

If you assume a straight line between the two endpoints the midpoint of
each interpolated segment will have an error of
((Sin(high)-sin(low))/2)-sin(mid)

Without considering rounding, this reaches a maximum at the last segment
before the 90 degree point. I calculate about 4-5 ppm which is about
the same as the quantization of an 18 bit number.
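
As a sanity check on that number, here is a rough sketch (in Python, taking
the error as the chord value at the segment midpoint minus the true sine,
with exact end points in the table):

import math

N = 256                        # 8 bit address, one quadrant
h = (math.pi / 2) / N          # phase step between table entries
lo, hi = (N - 1) * h, N * h    # last segment before 90 degrees
mid = lo + h / 2

err = (math.sin(lo) + math.sin(hi)) / 2 - math.sin(mid)
print(err)                     # about -4.7e-6, i.e. 4-5 ppm in magnitude
print(1 / 2 ** 18)             # about 3.8e-6, one lsb of an 18 bit value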

There are two issues I have not come across regarding these LUTs. One
is adding a bias to the index before converting to the sin() value for
the table. Some would say the index 0 represents the phase 0 and the
index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
LUT inefficient, especially in hardware. If a bias of half the lsb is
added to the index before converting to a sin() value the value 0 to
2^n-1 becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized
table properly. I assume this is commonly done and I just can't find a
mention of it.
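
A rough sketch of what I mean, ignoring the interpolation bits for the
moment (Python, with the 18 bit scaling just picked as a plausible full
scale):

import math

BITS = 8
N = 1 << BITS                  # 256 entries, a proper binary sized table
AMPL = (1 << 17) - 1           # plausible 18 bit signed full scale

# half-lsb bias: entry n holds the sine at (n + 0.5) steps, so the sample
# points sit symmetrically about 90 degrees and no entry for 90 is needed
quad = [round(AMPL * math.sin((math.pi / 2) * (n + 0.5) / N)) for n in range(N)]

def half_cycle(idx):           # idx = 0 .. 2*N-1 covers 0 to 180 degrees
    return quad[idx] if idx < N else quad[2 * N - 1 - idx]

Index N-1 and index N then land the same distance either side of 90
degrees, so the half cycle folds onto one binary sized table with no
extra entry.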

The other issue is how to calculate the values for the table to give the
best advantage to the linear interpolation. Rather than using the exact
match to the end points stored in the table, an adjustment could be done
to minimize the deviation over each interpolated segment. Without this,
the errors are always in the same direction. With an adjustment the
errors become bipolar and so will reduce the magnitude by half (approx).
Is this commonly done? It will require a bit of computation to get
these values, but even a rough approximation should improve the max
error by a factor of two to around 2-3 ppm.
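
A back-of-the-envelope check of that factor of two, treating each segment
as a parabola (which it nearly is close to 90 degrees):

import math

h = (math.pi / 2) / 256        # segment width in radians
one_sided = h * h / 8          # worst chord sag with exact end points (|sin''| ~ 1)
centered = one_sided / 2       # same sag split into +/- by shifting the line
print(one_sided, centered)     # about 4.7e-6 and 2.4e-6: 4-5 ppm down to 2-3 ppm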

Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
advantage of this resolution! lol

One thing I learned while doing this is that near 0 degrees the sin()
function is linear (we all knew that, right?) but near 90 degrees, the
sin() function is essentially quadratic. Who would have thunk it?
 

robert bristow-johnson

this *should* be a relatively simple issue, but i am confused

I've been studying an approach to implementing a lookup table (LUT) to
implement a sine function. The two msbs of the phase define the
quadrant. I have decided that an 8 bit address for a single quadrant is
sufficient with an 18 bit output.

10 bits or 1024 points. since you're doing linear interpolation, add
one more, copy the zeroth point x[0] to the last x[1024] so you don't
have to do any modulo (by ANDing with 1023, i.e. (1024-1), on the address
of the second point). (probably not necessary for hardware implementation.)


x[n] = sin( (pi/512)*n ) for 0 <= n <= 1024
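
in python, just to show the indexing (the guard point means the second
fetch never needs a modulo; the 10+11 bit split is from your numbers):

import math

N = 1024
x = [math.sin(2 * math.pi * n / N) for n in range(N)]
x.append(x[0])                 # guard point: x[N] is a copy of x[0]

def lut_sin(phase):            # phase is 21 bits: 10 table bits + 11 fraction bits
    n = phase >> 11
    frac = (phase & 0x7FF) / 2048.0
    return x[n] + frac * (x[n + 1] - x[n])      # x[n+1] never wraps
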
Another 11 bits of phase will give me
sufficient resolution to interpolate the sin() to 18 bits.

so a 21 bit total index. your frequency resolution would be 2^(-21) in
cycles per sampling period or 2^(-21) * Fs. those 2 million values
would be the only frequencies you can meaningfully generate.
If you assume a straight line between the two endpoints the midpoint of
each interpolated segment will have an error of
((Sin(high)-sin(low))/2)-sin(mid)

do you mean "+" instead of the first "-"? to be explicit:

( sin((pi/512)*(n+1)) + sin((pi/512)*n) )/2 - sin((pi/512)*(n+0.5))

that's the error in the middle. dunno if it's the max error, but it
might be.
Without considering rounding, this reaches a maximum at the last segment
before the 90 degree point.

at both the 90 and 270 degree points. (or just before and after those
points.)
I calculate about 4-5 ppm which is about the
same as the quantization of an 18 bit number.

There are two issues I have not come across regarding these LUTs. One is
adding a bias to the index before converting to the sin() value for the
table. Some would say the index 0 represents the phase 0 and the index
2^n represents 90 degrees. But this is 2^n+1 points which makes a LUT
inefficient, especially in hardware. If a bias of half the lsb is added
to the index before converting to a sin() value the value 0 to 2^n-1
becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized table
properly. I assume this is commonly done and I just can't find a mention
of it.

do you mean biasing by 1/2 of a point? then your max error will be *at*
the 90 and 270 degree points and it will be slightly more than what you
had before.
The other issue is how to calculate the values for the table to give the
best advantage to the linear interpolation. Rather than using the exact
match to the end points stored in the table, an adjustment could be done
to minimize the deviation over each interpolated segment. Without this,
the errors are always in the same direction. With an adjustment the
errors become bipolar and so will reduce the magnitude by half (approx).
Is this commonly done? It will require a bit of computation to get these
values, but even a rough approximation should improve the max error by a
factor of two to around 2-3 ppm.

if you assume an approximate quadratic behavior over that short segment,
you can compute the straight line where the error in the middle is equal
in magnitude (and opposite in sign) to the error at the end points.
that's a closed form solution, i think.
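
i.e. something like this per segment (a sketch; since each stored point is
shared by two segments, the two adjusted values for a shared point would
still have to be reconciled somehow):

import math

N = 256
h = (math.pi / 2) / N

def equal_ripple_endpoints(n):
    # chord through the exact end points, then shifted by half the midpoint
    # sag, so the error is +e at both ends and -e in the middle (parabola
    # assumption over the segment)
    y0, y1 = math.sin(n * h), math.sin((n + 1) * h)
    sag = (y0 + y1) / 2 - math.sin((n + 0.5) * h)   # negative: the chord sits low
    return y0 - sag / 2, y1 - sag / 2               # adjusted values to store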

dunno if that is what you actually want for a sinusoidal waveform
generator. i might think you want to minimize the mean square error.
Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
advantage of this resolution! lol

One thing I learned while doing this is that near 0 degrees the sin()
function is linear (we all knew that, right?) but near 90 degrees, the
sin() function is essentially quadratic. Who would have thunk it?

Newton? Leibnitz? Gauss?

sin(t + pi/2) = cos(t)
 

why not skip the lut and just do a full sine approximation

something like this, page 53-54

http://www.emt.uni-linz.ac.at/educa...ume 1/Using the ADSP-2100 Family Volume 1.pdf


-Lasse
 

glen herrmannsfeldt

In comp.dsp rickman said:
I've been studying an approach to implementing a lookup table (LUT) to
implement a sine function. The two msbs of the phase define the
quadrant. I have decided that an 8 bit address for a single quadrant is
sufficient with an 18 bit output. Another 11 bits of phase will give me
sufficient resolution to interpolate the sin() to 18 bits.

In the early days of MOS ROMs, there were commercial ROMs like this,
I believe from National. (And when ROMs were much smaller than today.)

The data sheet has (I can't find it right now, but it is around
somewhere) the combination of ROMs and TTL adders to do the
interpolation.
If you assume a straight line between the two endpoints the midpoint of
each interpolated segment will have an error of
((Sin(high)-sin(low))/2)-sin(mid)
Without considering rounding, this reaches a maximum at the last segment
before the 90 degree point. I calculate about 4-5 ppm which is about
the same as the quantization of an 18 bit number.

I presume the ROM designers had all this figured out.
There are two issues I have not come across regarding these LUTs. One
is adding a bias to the index before converting to the sin() value for
the table. Some would say the index 0 represents the phase 0 and the
index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
LUT inefficient, especially in hardware. If a bias of half the lsb is
added to the index before converting to a sin() value the value 0 to
2^n-1 becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized
table properly. I assume this is commonly done and I just can't find a
mention of it.

I don't remember anymore. But since you have the additional bit to
do the interpolation, it should be easy.
The other issue is how to calculate the values for the table to give the
best advantage to the linear interpolation. Rather than using the exact
match to the end points stored in the table, an adjustment could be done
to minimize the deviation over each interpolated segment. Without this,
the errors are always in the same direction. With an adjustment the
errors become bipolar and so will reduce the magnitude by half (approx).
Is this commonly done? It will require a bit of computation to get
these values, but even a rough approximation should improve the max
error by a factor of two to around 2-3 ppm.

I am not sure how you are thinking about doing it. I believe that
some of the bits that go into the MSB ROM also go into the lower ROM
to select the interpolation slope, and then additional bits to select
the actual value. Say you have a 1024x10 ROM for the first one,
then want to interpolate that. How many bits of linear interpolation
can be done? (That is, before linear isn't close enough any more.)
Then the appropriate number of low bits and high bits, but not the
in between bits, go into the interpolation ROM, which is then added
to the other ROM's output.
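
Something like this, I think, with the sizes just made up to show the
structure (the correction ROM sees a few of the same high bits, to pick
the local slope, plus the low bits, and its output is added on):

import math

COARSE_BITS, SLOPE_BITS, FINE_BITS = 10, 4, 6
N = 1 << COARSE_BITS

coarse = [math.sin((math.pi / 2) * n / N) for n in range(N)]    # the 1024-entry ROM

# correction ROM: indexed by the top SLOPE_BITS of the coarse address (they
# pick the local slope) together with the FINE_BITS low phase bits
corr = [[math.cos((math.pi / 2) * (s << (COARSE_BITS - SLOPE_BITS)) / N)
         * ((math.pi / 2) / N) * f / (1 << FINE_BITS)
         for f in range(1 << FINE_BITS)]
        for s in range(1 << SLOPE_BITS)]

def rom_sin(phase):             # phase is COARSE_BITS + FINE_BITS wide, one quadrant
    hi = phase >> FINE_BITS
    lo = phase & ((1 << FINE_BITS) - 1)
    return coarse[hi] + corr[hi >> (COARSE_BITS - SLOPE_BITS)][lo]
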
Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
advantage of this resolution! lol
One thing I learned while doing this is that near 0 degrees the sin()
function is linear (we all knew that, right?) but near 90 degrees, the
sin() function is essentially quadratic. Who would have thunk it?

Well, cos() is known to be quadratic near zero.

I once knew someone with a homework problem something like:

Take a string all the way around the earth at the equator, and add
(if I remember right) 3m. (Assuming the earth is a perfect sphere.)
If the string is at a uniform height around the earth, how far from
the surface is it?

Now, pull it up at one point. How far is that point above the surface?
More specifically, as I originally heard it, is it higher than the
height of a specific nine story library?

As I remember it, the usual small angle approximations to trig.
functions aren't enough to do this. The next term is needed.
The 3m added might not be right, as the answer is close enough
to that building's height to need the additional term.

-- glen
 

Robert Baer

rickman said:
I've been studying an approach to implementing a lookup table (LUT) to
implement a sine function. The two msbs of the phase define the
quadrant. I have decided that an 8 bit address for a single quadrant is
sufficient with an 18 bit output. Another 11 bits of phase will give me
sufficient resolution to interpolate the sin() to 18 bits.

If you assume a straight line between the two endpoints the midpoint of
each interpolated segment will have an error of
((Sin(high)-sin(low))/2)-sin(mid)

Without considering rounding, this reaches a maximum at the last segment
before the 90 degree point. I calculate about 4-5 ppm which is about the
same as the quantization of an 18 bit number.

There are two issues I have not come across regarding these LUTs. One is
adding a bias to the index before converting to the sin() value for the
table. Some would say the index 0 represents the phase 0 and the index
2^n represents 90 degrees. But this is 2^n+1 points which makes a LUT
inefficient, especially in hardware. If a bias of half the lsb is added
to the index before converting to a sin() value the value 0 to 2^n-1
becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized table
properly. I assume this is commonly done and I just can't find a mention
of it.

The other issue is how to calculate the values for the table to give the
best advantage to the linear interpolation. Rather than using the exact
match to the end points stored in the table, an adjustment could be done
to minimize the deviation over each interpolated segment. Without this,
the errors are always in the same direction. With an adjustment the
errors become bipolar and so will reduce the magnitude by half (approx).
Is this commonly done? It will require a bit of computation to get these
values, but even a rough approximation should improve the max error by a
factor of two to around 2-3 ppm.

Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
advantage of this resolution! lol

One thing I learned while doing this is that near 0 degrees the sin()
function is linear (we all knew that, right?) but near 90 degrees, the
sin() function is essentially quadratic. Who would have thunk it?
Sounds like you are making excellent improvements on the standard
ho-hum algorithms; the net result will be superior to anything done out
there (commercially).
With the proper offsets, one needs only 22.5 degrees of lookup
("bounce" off each multiple of 45 degrees).
 

glen herrmannsfeldt

In comp.dsp rickman said:
I've been studying an approach to implementing a lookup table (LUT) to
implement a sine function. The two msbs of the phase define the
quadrant. I have decided that an 8 bit address for a single quadrant is
sufficient with an 18 bit output. Another 11 bits of phase will give me
sufficient resolution to interpolate the sin() to 18 bits.
If you assume a straight line between the two endpoints the midpoint of
each interpolated segment will have an error of
((Sin(high)-sin(low))/2)-sin(mid)

See:

http://ia601506.us.archive.org/8/it...690/1972_National_MOS_Integrated_Circuits.pdf

The description starts on page 273.

-- glen
 

Nobody

There are two issues I have not come across regarding these LUTs. One
is adding a bias to the index before converting to the sin() value for
the table. Some would say the index 0 represents the phase 0 and the
index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
LUT inefficient, especially in hardware.

An n-bit table has 2^n+1 entries for 2^n ranges. Range i has endpoints of
table[i] and table[i+1]. The final range has i=(1<<n)-1, so the last
entry in the table is table[1<<n], not table[(1<<n)-1].
One thing I learned while doing this is that near 0 degrees the sin()
function is linear (we all knew that, right?) but near 90 degrees, the
sin() function is essentially quadratic. Who would have thunk it?

sin((pi/2)+x) = sin((pi/2)-x) = cos(x), and the Maclaurin series for
cos(x) is:

cos(x) = 1 - (x^2)/2! + (x^4)/4! - ...
 

rickman

this *should* be a relatively simple issue, but i am confused

I've been studying an approach to implementing a lookup table (LUT) to
implement a sine function. The two msbs of the phase define the
quadrant. I have decided that an 8 bit address for a single quadrant is
sufficient with an 18 bit output.

10 bits or 1024 points. since you're doing linear interpolation, add one
more, copy the zeroth point x[0] to the last x[1024] so you don't have
to do any modulo (by ANDing with 1023, i.e. (1024-1), on the address of the
second point). (probably not necessary for hardware implementation.)


x[n] = sin( (pi/512)*n ) for 0 <= n <= 1024

So you are suggesting a table with 2^n+1 entries? Not such a great idea
in some apps, like hardware. What is the advantage? Also, why a 10 bit
address and a 1024 element table? My calculations indicate a linear
interpolation can be done with 4 ppm accuracy with a 256 element LUT.
I'm not completely finished with my simulation, but I'm pretty confident
this much is correct.

so a 21 bit total index. your frequency resolution would be 2^(-21) in
cycles per sampling period or 2^(-21) * Fs. those 2 million values would
be the only frequencies you can meaningfully generate.

No, that is the phase sent to the LUT. The total phase accumulator can
be larger as the need requires.

do you mean "+" instead of the first "-"? to be explicit:

( sin((pi/512)*(n+1)) + sin((pi/512)*n) )/2 - sin((pi/512)*(n+0.5))

that's the error in the middle. dunno if it's the max error, but it
might be.

Yes, thanks for the correction. The max error? I'm not so worried
about that exactly. The error is a curve with the max magnitude near
the middle if nothing further is done to minimize it.

at both the 90 and 270 degree points. (or just before and after those
points.)

I'm talking about the LUT. The LUT only considers the first quadrant.

do you mean biasing by 1/2 of a point? then your max error will be *at*
the 90 and 270 degree points and it will be slightly more than what you
had before.

No, not quite right. There is a LUT with points spaced 90/256
degrees apart starting at just above 0 degrees. The values between
points in the table are interpolated with a maximum deviation near the
center of the interpolation. Next to 90 degrees the interpolation is
using the maximum interpolation factor which will result in a value as
close as you can get to the correct value if the end points are used to
construct the interpolation line. 90 degrees itself won't actually be
represented, but rather points on either side, 90±delta where delta is
360° / 2^(n+1) with n being the number of bits in the input to the sin
function.

if you assume an approximate quadratic behavior over that short segment,
you can compute the straight line where the error in the middle is equal
in magnitude (and opposite in sign) to the error at the end points.
that's a closed form solution, i think.

Yes, it is a little tricky because at this point we are working with
integer math (or technically fixed point I suppose). Rounding errors are
what this is all about. I've done some spreadsheet simulations and I
have some pretty good results. I updated it a bit to generalize it to
the LUT size and I keep getting the same max error counts (adjusted to
work with integers rather than fractions) ±3 no matter what the size of
the interpolation factor. I don't expect this and I think I have
something wrong in the calculations. I'll need to resolve this.

dunno if that is what you actually want for a sinusoidal waveform
generator. i might think you want to minimize the mean square error.

We are talking about the lsbs of a 20+ bit word. Do you think there
will be much of a difference in result? I need to actually be able to
do the calculations and get this done rather than continue to work on
the process. Also, each end point affects two lines, so there are
tradeoffs, make one better and the other worse? It seems to get
complicated very quickly.

Newton? Leibnitz? Gauss?

sin(t + pi/2) = cos(t)

How does that imply a quadratic curve at 90 degrees? At least I think
like the greats!
 

rickman

why not skip the lut and just do a full sine approximation

something like this, page 53-54

http://www.emt.uni-linz.ac.at/educa...ume 1/Using the ADSP-2100 Family Volume 1.pdf

I'm not sure this would be easier. The LUT and interpolation require a
reasonable amount of logic. This requires raising X to the 5th power
and five constant multiplies. My FPGA doesn't have multipliers and this
may be too much for the poor little chip. I suppose I could do some of
this with a LUT and linear interpolation... lol
 

rickman

Sounds like you are making excellent improvements on the standard ho-hum
algorithms; the net result will be superior to anything done out there
(commercially).
With the proper offsets, one needs only 22.5 degrees of lookup ("bounce"
off each multiple of 45 degrees).

I can't say I follow this. How do I get a 22.5 degree sine table to
expand to 90 degrees?

As to my improvements, I can see where most implementations don't bother
pushing the limits of the hardware, but I can't imagine no one has done
this before. I'm sure there are all sorts of apps, especially in older
hardware where resources were constrained, where they pushed for every
drop of precision they could get. What about space apps? My
understanding is they *always* push their designs to be the best they
can be.
 

glen herrmannsfeldt

In comp.dsp rickman said:
I've been studying an approach to implementing a lookup table (LUT) to
implement a sine function.
(snip)

If you assume a straight line between the two endpoints the midpoint of
each interpolated segment will have an error of
((Sin(high)-sin(low))/2)-sin(mid)

Seems what they instead do is implement

sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
significant bits of theta. Also, cos(L) tends to be almost 1,
so they just say it is 1.
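
In other words something along these lines, with the split of the phase
word picked arbitrarily here:

import math

M_BITS, L_BITS = 10, 8                      # arbitrary split of the phase word
M_N, L_N = 1 << M_BITS, 1 << L_BITS
STEP = 2 * math.pi / (M_N * L_N)            # one phase lsb in radians

sin_M = [math.sin(STEP * (m << L_BITS)) for m in range(M_N)]
cos_M = [math.cos(STEP * (m << L_BITS)) for m in range(M_N)]
sin_L = [math.sin(STEP * l) for l in range(L_N)]    # all small values

def sine(phase):                            # phase is M_BITS + L_BITS wide, full circle
    m, l = phase >> L_BITS, phase & (L_N - 1)
    # sin(M + L) = sin(M)cos(L) + cos(M)sin(L), with cos(L) taken as 1
    return sin_M[m] + cos_M[m] * sin_L[l]

In hardware the cos(M) values could presumably come out of the same table
as sin(M) with the address offset by a quarter cycle.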

(snip)
There are two issues I have not come across regarding these LUTs. One
is adding a bias to the index before converting to the sin() value for
the table. Some would say the index 0 represents the phase 0 and the
index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
LUT inefficient, especially in hardware. If a bias of half the lsb is
added to the index before converting to a sin() value the value 0 to
2^n-1 becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized
table properly. I assume this is commonly done and I just can't find a
mention of it.

Well, you can fix it in various ways, but you definitely want 2^n.
The other issue is how to calculate the values for the table to give the
best advantage to the linear interpolation. Rather than using the exact
match to the end points stored in the table, an adjustment could be done
to minimize the deviation over each interpolated segment. Without this,
the errors are always in the same direction. With an adjustment the
errors become bipolar and so will reduce the magnitude by half (approx).
Is this commonly done? It will require a bit of computation to get
these values, but even a rough approximation should improve the max
error by a factor of two to around 2-3 ppm.

Seems like that is one of the suggestions, but not done in the ROMs
they were selling. Then the interpolation has to add or subtract,
which is slightly (in TTL) harder.

The interpolated sine was done in 1970 with 128x8 ROMs. With larger
ROMs, like usual today, you shouldn't need it unless you want really
high resolution.
Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
advantage of this resolution! lol
One thing I learned while doing this is that near 0 degrees the sin()
function is linear (we all knew that, right?) but near 90 degrees, the
sin() function is essentially quadratic. Who would have thunk it?

-- glen
 

rickman

There are two issues I have not come across regarding these LUTs. One
is adding a bias to the index before converting to the sin() value for
the table. Some would say the index 0 represents the phase 0 and the
index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
LUT inefficient, especially in hardware.

An n-bit table has 2^n+1 entries for 2^n ranges. Range i has endpoints of
table[i] and table[i+1]. The final range has i=(1<<n)-1, so the last
entry in the table is table[1<<n], not table[(1<<n)-1].
One thing I learned while doing this is that near 0 degrees the sin()
function is linear (we all knew that, right?) but near 90 degrees, the
sin() function is essentially quadratic. Who would have thunk it?

sin((pi/2)+x) = sin((pi/2)-x) = cos(x), and the Maclaurin series for
cos(x) is:

cos(x) = 1 - (x^2)/2! + (x^4)/4! - ...


Yeah, a friend of mine knows these polynomials inside and out. I never
learned that stuff so well. At the time it didn't seem to have a lot of
use and later there were always better ways of getting the answer. From
looking at the above I see the x^2 term will dominate over the higher
order terms near 0, but of course the lowest order term, 1, will be the
truly dominant term... lol In other words, the function cos(x) near
0 is just the constant 1 to a first order approximation.
 

rickman

Seems what they instead do is implement

sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
significant bits of theta. Also, cos(L) tends to be almost 1,
so they just say it is 1.

Interesting. This tickles a few grey cells. Two values based solely on
the MS portion of x and a value based solely on the LS portion. Two
tables, three lookups, a multiply and an add. That could work. The
value for sin(M) would need to be full precision, but I assume the
values of sin(L) could have less range because sin(L) will always be a
small value...

Well, you can fix it in various ways, but you definitely want 2^n.

I think one of the other posters was saying to add an entry for 90
degrees. I don't like that. It could be done but it complicates the
process of using a table for 0-90 degrees.

Seems like that is one of the suggestions, but not done in the ROMs
they were selling. Then the interpolation has to add or subtract,
which is slightly (in TTL) harder.

The interpolated sine was done in 1970 with 128x8 ROMs. With larger
ROMs, like usual today, you shouldn't need it unless you want really
high resolution.

I'm not using an actual ROM chip. This is block ram in a rather small
FPGA with only six blocks. I need two channels and may want to use some
of the blocks for other functions.


Thanks for the advice to everyone.
 
I'm not sure this would be easier.  The LUT and interpolation require a
reasonable amount of logic.  This requires raising X to the 5th power
and five constant multiplies.  My FPGA doesn't have multipliers and this
may be too much for the poor little chip.  I suppose I could do some of
this with a LUT and linear interpolation... lol

I see, I assumed you had an MCU (or fpga) with multipliers

considered CORDIC?
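
roughly, the whole thing is just shifts, adds and a small arctan table, no
multipliers needed (floating point here only to show the recurrence):

import math

ITERS = 20
ATAN = [math.atan(2.0 ** -i) for i in range(ITERS)]
K = 1.0
for i in range(ITERS):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # pre-compensates the cordic gain

def cordic_sin_cos(theta):                  # good for -pi/2 <= theta <= pi/2
    x, y, z = K, 0.0, theta
    for i in range(ITERS):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN[i]
    return y, x                             # (sin, cos)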


-Lasse
 

rickman

I see, I assumed you had an MCU (or fpga) with multipliers

considered CORDIC?

I should learn that. I know it has been used to good effect in FPGAs.
Right now I am happy with the LUT, but I would like to learn more about
CORDIC. Any useful references?
 

rickman

It would be interesting to distort the table entries to minimize THD. The
problem there is how to measure very low THD to close the loop.

It could also be a difficult problem to solve without some sort of
exhaustive search. Each point you fudge affects two curve segments and
each curve segment is affected by two points. So there is some degree
of interaction.

Anyone know if there is a method to solve such a problem easily? I am
sure that my current approach gets each segment end point to within ±1
lsb and I suppose once you measure THD in some way it would not be an
excessive amount of work to tune the 256 end points in a few passes
through. This sounds like a tree search with pruning.

I'm assuming there would be no practical closed form solution. But
couldn't the THD be calculated in a simulation rather than having to be
measured on the bench?
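
Something along these lines is what I have in mind (pick the phase
increment so a whole number of cycles fits the record exactly, then the
harmonics land on bins and no windowing is needed; the samples come
straight out of the LUT-plus-interpolation model):

import cmath, math

def thd(samples, cycles, harmonics=10):
    # samples: one record out of the LUT model; cycles: whole periods in it
    # keep harmonics * cycles below len(samples)/2 to stay under Nyquist
    N = len(samples)
    def mag(k):                 # single-bin DFT at bin k
        return abs(sum(s * cmath.exp(-2j * math.pi * k * n / N)
                       for n, s in enumerate(samples)))
    fund = mag(cycles)
    return math.sqrt(sum(mag(h * cycles) ** 2
                         for h in range(2, harmonics + 1))) / fund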
 

Jon Kirwan

How does that imply a quadratic curve at 90 degrees? At least I think
like the greats!

You already know why. Just not wearing your hat right now.
Two terms of taylors at 0 of cos(x) is x + x^2/2. Two at
sin(x) is 1 + x. One is quadratic, one is linear.

Jon
 

rickman

If I could measure the THD accurately, I'd just blast in a corrective harmonic
adder to the table for each of the harmonics. That could be scientific or just
iterative. For the 2nd harmonic, for example, add a 2nd harmonic sine component
into all the table points, many of which wouldn't even change by an LSB, to null
out the observed distortion. Gain and phase, of course.

I don't think you are clear on this. The table and linear interp are as
good as they can get with the exception of one or possibly two lsbs to
minimize the error in the linear interp. These is no way to correct for
this by adding a second harmonic, etc. Remember, this is not a pure
table lookup, it is a two step approach.

Our experience is that the output amplifiers, after the DAC and lowpass filter,
are the nasty guys for distortion, at least in the 10s of MHz. Lots of
commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table
correction would unfortunately be amplitude dependent, so it gets messy fast.

Ok, if you are compensating for the rest of the hardware, that's a
different matter...
 

Tim Williams

Jon Kirwan said:
You already know why. Just not wearing your hat right now.
Two terms of taylors at 0 of cos(x) is x + x^2/2. Two at
sin(x) is 1 + x. One is quadratic, one is linear.

Oops, should be:
cos(x) ~= 1 - x^2/2
sin(x) ~= x (- x^3/6)
Mixed up your odd/even powers :)

Of course, the cubic is less important than the quadratic, so sin(x) ~= x is
sufficient for most purposes.

I suppose you could potentially synthesize the functions using low-order power
approximations (like these) and interpolating between them over modulo pi/4
segments. Probably wouldn't save many clock cycles relative to a CORDIC,
higher order polynomial, or other, for a given accuracy, so the standard
space-time tradeoff (LUT vs. computation) isn't changed.

Tim
 

Jon Kirwan

Oops, should be:
cos(x) ~= 1 - x^2/2
sin(x) ~= x (- x^3/6)
Mixed up your odd/even powers :)

hehe. yes. But it gets the point across, either way. Detail I
didn't care a lot about at the moment.
 