Hello.
I have a question about the precision of logarithmic amplifiers. My problem
is the following: as a result of an optical experiment, I have signals
(coming from a photodiode) that have a decaying exponential shape, just
like a discharging RC filter. The final data we are looking for is the time
constant of these exponentials. The usual procedure to measure it is to
use a digital storage oscilloscope, digitize the waveforms (say 100
points per waveform; each waveform lasts 2-3 us, btw), and transfer them
to a computer (we first average a bunch of them in the scope and transfer
the averaged exponential), where we just fit the experimental waveform to
an exponential and determine the time constant.
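For reference, the fitting step is essentially this (a quick numpy sketch; the amplitude, noise level, and time constant are made-up values, not numbers from our setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one averaged waveform: V(t) = V0 * exp(-t/tau) + noise
tau_true = 1.0e-6                  # assumed 1 us time constant
t = np.linspace(0, 3e-6, 100)      # 100 points over 3 us, as in the scope setup
v = 1.0 * np.exp(-t / tau_true) + rng.normal(0, 1e-3, t.size)

# Log-linear least-squares fit: ln V = ln V0 - t/tau
mask = v > 0                       # the log needs positive samples
slope, intercept = np.polyfit(t[mask], np.log(v[mask]), 1)
tau_fit = -1.0 / slope
print(f"fitted tau = {tau_fit * 1e6:.3f} us")
```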
So far so good... our current problem is that after a number of
improvements in the experiment we now can generate waveforms at 10 kHz,
much faster than the digital scope can store, average and transfer them to
the computer... it is 10000 exponentials per second, each one of which has
to be digitized and stored. The acquisition system is so far behind that we
lose plenty of data. It'd be nice to take full advantage of our
experimental capabilities, so we are considering going to a full-analog
acquisition system.
The idea (not originally ours, btw) would be to first run the signal
through a fast logarithmic amplifier, so that each exponential becomes a
straight line with slope -1/tau. Then we send it through a
differentiator, which would give us a square (top-hat) signal whose
height corresponds to that slope -1/tau. Finally, we just use a
commercial boxcar integrator with the gate set to this top-hat region. We
can even use the boxcar to average thousands of these consecutive signals
before sending them to the computer, so that the load on the acquisition
system is drastically reduced (we could transfer, for example, just one
point per second, that point being the average of 10000 signals).
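On paper the chain works out exactly; here is a sketch of the idea on an ideal, noiseless exponential (assuming an ideal log amp and differentiator, with made-up gate times):

```python
import numpy as np

tau = 1.0e-6                          # assumed time constant
t = np.linspace(0.1e-6, 3e-6, 300)    # start slightly after t=0 for tidiness
v = np.exp(-t / tau)

log_v = np.log(v)                     # log amp output: a line with slope -1/tau
d = np.gradient(log_v, t)             # differentiator: ideally flat at -1/tau

# Boxcar stage: average over the gate (the "top-hat" region) to recover tau
gate = (t > 0.5e-6) & (t < 2.5e-6)
tau_est = -1.0 / d[gate].mean()
print(f"recovered tau = {tau_est * 1e6:.3f} us")
```

The real question, of course, is how far a physical log amp deviates from the ideal np.log used here.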
The question: I have never used a logarithmic amplifier (I have been
looking at some, for example the AD8310), and I am wondering, first of
all, how much accuracy one loses by using one. How mathematically accurate
is the amplifier when taking the logarithm of a signal? The idea, of
course, is that the loss of accuracy in this operation (plus the
differentiation) and the error it introduces should be more than
compensated by the faster acquisition rate we would be able to use (moving
from around 200 Hz to 10 kHz). If it isn't, we'd better forget about the
whole thing.
Has anybody used these amplifiers and can shed some light on the subject?
I have the data sheet of the AD8310 and the numbers given there, but I
would like comments from somebody with first-hand experience using these
devices. Any thoughts or recommendations? Thanks in advance!!