
Logarithmic Amplifiers


Moi

Hello.

I have a question about the precision of logarithmic amplifiers. My problem
is the following: as a result of an optical experiment, I have signals
-coming from a photodiode- that have a decaying exponential shape, just
like a discharging RC filter. The final data we are looking for is the time
constant of these exponentials. The usual procedure to measure them is to
use a digital storage oscilloscope, digitize the waveforms (let's say 100
points per waveform; each waveform has a duration of 2-3 us, btw) and
transfer them (we previously average a bunch of them in the scope and
transfer the averaged exponential) to a computer, where we just fit the
experimental waveform to an exponential and determine the time constant.

So far so good... our current problem is that after a number of
improvements in the experiment we now can generate waveforms at 10 kHz,
much faster than the digital scope can store, average and transfer them to
the computer... it is 10000 exponentials per second, each one of which has
to be digitized and stored. The acquisition system is so far behind that we
lose plenty of data. It'd be nice to take full advantage of our
experimental capabilities, so we are considering going to a full-analog
acquisition system.

The idea (not originally ours, btw) would be to first use a fast
logarithmic amplifier to take the logarithm of the exponentials, so that
our signal would become a straight line with slope -1/tau, tau being the
time constant. Then we send it through a differentiator, which would give
us a square (top-hat) signal with a height corresponding to this slope.
Finally, we just use a commercial boxcar integrator with the gate set to
this top-hat region. We can even use the boxcar to average thousands of
these consecutive signals before sending them to the computer, so that the
load on the acquisition system is drastically reduced (we could transfer,
for example, just one point per second, that point being the average of
10000 signals).
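
(As a sanity check on that chain, here is a minimal numerical sketch in
Python, assuming an ideal log stage, an ideal differentiator and a boxcar
modelled as a plain average over a gate; the 3 V / 3 us values are only
placeholders, and a real log amp would add a fixed scale factor, e.g. so
many mV per dB, in front of everything.)

import numpy as np

tau, V0 = 3e-6, 3.0                  # placeholder time constant and amplitude
t = np.linspace(0, 3 * tau, 300)
v = V0 * np.exp(-t / tau)            # ideal photodiode decay

log_v = np.log(v)                    # ideal log stage: a straight line in t
slope = np.gradient(log_v, t)        # ideal differentiator: constant -1/tau

gate = (t > 0.5 * tau) & (t < 2.5 * tau)   # boxcar gate inside the "top hat"
print(-1.0 / slope[gate].mean())           # ~3e-6: recovers tau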

The question: I have never used a logarithmic amplifier (I have been
checking a few, the AD8310 for example), and I am wondering, first of
all, how much accuracy one loses by using one. How accurate, mathematically,
is the amplifier when taking the logarithm of a signal? The idea, of course,
is that the loss of accuracy in this operation (plus the differentiation)
and the error it introduces should be more than compensated for by the
faster acquisition rate we would be able to use (moving from around 200 Hz
to 10 kHz). If it isn't, then we had better forget about the whole thing.

Anybody who has used these amplifiers and can shed some light on the
subject? I have the data sheets of the 8310 and the numbers they give
there, but would like comments from somebody with first-hand experience in
the use of these devices. Any thoughts or recommendations? Thanks in
advance!!
 

Winfield Hill

Moi wrote...
I have a question about the precision of logarithmic amplifiers.

The commercial parts have accuracy specs you can use, which will
typically be in the form of X% error of the original signal. But
a caution, for fast conversions take care with the dynamic range,
because logging amps slow down considerably at low input currents.
Knowing this is an issue, you can learn more from the data sheet.

Thanks,
- Win

(email: use hill_at_rowland-dot-org for now)
 

John Popelish

Moi said:
Hello.

I have a question about the precision of logarithmic amplifiers. [snip]

I have worked on a somewhat similar application of log amplifiers and
came up against a single problem that I didn't resolve. Part of the
mathematical exponential curve fit process that you are now doing (if
you are doing it correctly) is to recover the actual zero offset of
the exponential waveform, as well as to fit its dominant time
constant. I would very much like to hear from others in this group
how to recover that offset using the log amplifier. After that it is
a piece of cake, as you described, to extract the time constant.
 

John Larkin

Moi wrote:
Hello.

I have a question about the precision of logarithmic amplifiers. [snip]

Sounds complex. Are all the pulses the same height? If they are, you
could just measure the time it takes a pulse to decay to 0.37 (or 0.5,
or whatever) of nominal peak. If heights vary, just let each pulse
decay to some lowish level V, then time from there to 0.37 of V. Or do
the peak detector thing.
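
(In numbers, as a sketch only: for v = V0*exp(-t/tau), two crossings at
levels V1 and V2 give tau = (t2 - t1)/ln(V1/V2), so with V2 = 0.37*V1 the
elapsed time is tau itself.)

import numpy as np

def tau_from_two_crossings(t1, t2, v1, v2):
    # Decaying exponential v = V0*exp(-t/tau) crossing v1 at t1 and v2 at t2
    # gives tau = (t2 - t1) / ln(v1 / v2), independent of V0.
    return (t2 - t1) / np.log(v1 / v2)

# Illustration: crossings at 1.0 us (2.0 V) and 4.0 us (0.74 V) -> tau ~ 3 us
print(tau_from_two_crossings(1.0e-6, 4.0e-6, 2.0, 0.74))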

If you're going to average lots of shots anyhow, the time resolution
of these measurements needn't be extreme. A couple of comparators and
a small FPGA could do the whole thing.

John
 

Rene Tschaggelar

Moi said:
Hello.

I have a question about the precision of logarithmic amplifiers. My problem

[snip]

Anybody who has used these amplifiers and can shed some light on the
subject? I have the data sheets of the 8310 and the numbers they give
there, but would like comments from somebody with first-hand experience in
the use of these devices. Any thoughts or recommendations? Thanks in
advance!!

There are log amps and pseudo log amps. The 8310 is a pseudo log amp,
as it takes the log of the envelope of an RF signal. It also has to be AC
coupled, or some DC offset has to be taken care of. It can be done. We
recently used an AD8307 near DC and therefore lifted the signal to 2.5 V
while the other input sat at 2.5 V. It is the last mV and sub-mV that
matter, and that is where the dynamic range gets thrown out of the window.

You'd need a true log amp with a current input, I guess.
If you need the dynamic range, well, these 10 kHz are not posing a
problem, even though the bandwidth at lower currents is reduced.
Differentiation is not recommended, as the fast rising edge becomes
a Dirac-like spike and may saturate the op-amp for who knows how long.
Why not sample the output of the log amp with an ADC?
Yes, scopes were never meant for sampling experiments.

Rene
 

Tony Williams

Moi said:
I have a question about the precision of logarithmic amplifiers.
My problem is the following: as a result of an optical
experiment, I have signals -coming from a photodiode- that have a
decaying exponential shape, just like a discharging RC filter.
The final data we are looking for is the time constant of these
exponentials. The usual procedure to measure them is to use a
digital storage oscilloscope, digitize the waveforms (let's say
100 points per waveform; each waveform has a duration of 2-3 us,
btw) and transfer them (we previously average a bunch of them in
the scope and transfer the averaged exponential) to a computer,
where we just fit the experimental waveform to an exponential and
determine the time constant.

Of the form v = Voff + Vpk.exp(-t/tau) ?

That's three unknowns, so three ADC readings at timed
intervals to get the exponential characterised.

Perhaps use 3x SA-ADCs (with sample-and-hold), with each
S+H being sequentially hit at the timed intervals, so
you avoid needing a single ADC with <1 us conversion time.

Record the ADC readings and initially do the calcs
offline. If the method proves out OK, then tackle the
problem of doing the calcs in real time (at 100 us
intervals).
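
(To spell out how three timed readings pin that down, a sketch only, not
necessarily the exact scheme: with equal spacing the offset cancels in
successive differences, and their ratio gives the time constant.)

import numpy as np

def fit_three_points(v1, v2, v3, dt):
    # Three equally spaced samples of v = Voff + Vpk*exp(-t/tau).
    # Successive differences cancel Voff:
    #   (v1 - v2) / (v2 - v3) = exp(dt/tau)  ->  tau = dt / ln(ratio)
    return dt / np.log((v1 - v2) / (v2 - v3))

# Illustration with tau = 3 us, dt = 1 us, Voff = 0.1 V, Vpk = 3 V:
dt, tau, Vpk, Voff = 1e-6, 3e-6, 3.0, 0.1
v = [Voff + Vpk * np.exp(-t / tau) for t in (0, dt, 2 * dt)]
print(fit_three_points(*v, dt))   # recovers ~3e-6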
 

Bill Sloman

Moi said:
Hello.

I have a question about the precision of logarithmic amplifiers. [snip]

There's another problem with this approach that hasn't been mentioned yet:
the shot noise on your photodiode output, which is basically the square
root of the number of photons per sampling interval.

When you digitise the linear output, this decreases as the square root of
the size of the signal.

When you digitise a logarithmic output the error on the logarithmic output
increases with time - the math is straightforward enough, but I'm not going
to try and work it out here.

As John Popelish has already pointed out, you are actually trying to extract
three variables - the initial amplitude of your signal, the decay time
constant and the final amplitude at infinite time (which he calls the
offset).

This sounds rather like a fluorescence-lifetime oxygen sensor I worked on,
where the lifetime extraction was dealt with by using an A/D converter
sampling at about 2.5 MHz, and the data was summed over three periods:
basically from excitation to one (nominal) fluorescent lifetime, from one
fluorescent lifetime to two fluorescent lifetimes, and from three
fluorescent lifetimes to about five.

Call the first block A, the second block B and the third block C. They
summed A, B and C separately over several thousand excitations, then
evaluated the expression

(A-C/2)/(B-C/2)

and fed the number into a look-up table to pull out the fluorescent
lifetime. The look-up table was set up by integrating the exponential decay
over the three periods for a whole range of time constants centred around
the nominal value.
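
(For anyone who wants to play with it, here is a rough reconstruction of
that lookup-table scheme in a few lines of Python; it is my sketch of the
description above, not the actual instrument code, and the nominal lifetime
T and the test values are made up. Note that because A and B each span T
and C spans 2T, subtracting C/2 cancels a constant background exactly.)

import numpy as np

T = 1.0                            # nominal lifetime, arbitrary units (assumed)

def blocks(tau, offset=0.0):
    # Integrals of offset + exp(-t/tau) over the three windows described
    # above: A over [0, T], B over [T, 2T], C over [3T, 5T].
    def integral(a, b):
        return offset * (b - a) + tau * (np.exp(-a / tau) - np.exp(-b / tau))
    return integral(0, T), integral(T, 2 * T), integral(3 * T, 5 * T)

# Lookup table over a range of time constants centred on the nominal value
taus = np.linspace(0.5 * T, 2.0 * T, 400)
ratios = np.array([(A - C / 2) / (B - C / 2) for A, B, C in map(blocks, taus)])

# Using it: simulated sums for tau = 1.3*T with some constant background
A, B, C = blocks(1.3 * T, offset=0.05)
r = (A - C / 2) / (B - C / 2)
tau_est = np.interp(r, ratios[::-1], taus[::-1])   # ratios decrease with tau
print(tau_est)                                     # comes back close to 1.3*T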

I think that this would have worked better in our application if they had
adapted the nominal time constant (the length of the periods of
integration) to track the actual fluorescent time constant; this mattered
because the original device was intended to measure the oxygen content of
ground water, which is usually close to saturated vis-a-vis air, and we
wanted to use it to monitor a liquid that usually contained much less
dissolved oxygen.

They ran at about 2,000 excitations per second, with a battery-powered
microprocessor. An Analog Devices Blackfin could go a lot faster, if you've
got the power supply to keep it running.

I don't think that this is exactly rocket science - I was fooling around
with much the same idea back in 1970, though with a box-car integrator doing
analog sampling and integration into three separate analog integrators.
 

Moi

First of all, thanks to all of you who replied. You have made some very
good points, and after reading through them I feel that some further
comments are in order.

The exponential decays we have in our photodiode are the result of a cavity
ring-down experiment; that is, they are produced by light trapped in a
Fabry-Perot interferometer that is leaking through the mirrors. The amount
of light leaked is proportional to the amount trapped inside the cavity at
any moment, and its evolution in time is what we measure: it follows a
simple Beer law, which accounts for the exponential decay. If the
interferometer is empty (no sample inside, just vacuum), then the time
constant of the exponential depends only on the transmission and absorption
of the mirrors (the only ways for the light to get out of the cavity are
transmission through the mirrors or absorption in those same mirrors) and
on the cavity length; these variables alone determine how long, on average,
a photon survives inside the cavity before escaping or being absorbed. If
one puts some molecular sample inside, then when the frequency of the light
matches an absorption line of this sample, intracavity losses increase, and
this shows up as a faster exponential decay of the leaked light (photons
survive for less time in the cavity).

The beauty of this experimental setup resides in the fact that the rate of
decay depends on sample absorption (stronger absorptions obviously kill
photons faster), but not on the absolute amount of power inside the cavity:
if one laser shot used to "fill" the cavity with light has 20% more power
than the previous one, for example, there will be more power inside the
cavity and more power leaking out of it, but the rate of decay (the time
constant of the exponential) will be exactly the same. In other words,
we'll detect an exponential function with a starting voltage 20% higher
than the previous one, but with exactly the same time constant. This is why
we are interested in measuring the time constants of the exponentials, but
not necessarily the starting point or the ending offset, which can be
different from one shot to the next: measuring the time constants
eliminates the influence of the shot-to-shot amplitude noise of the laser,
which in a spectroscopy experiment is a tremendous advantage.

This is a well-known setup in spectroscopy; it has been around for over a
decade, and hopefully now you see why we need to measure these time
constants but not necessarily the remaining parameters of the exponential:
just monitor the time constants and their change as you scan the laser
frequency and you will get a nice spectrum of the sample absorptions. As I
said in my previous message, the usual way to get these data is by
digitizing and fitting the exponentials, but that is not easily compatible
with generating 10000 exponentials per second or more, as we can now do.
Hence our desire to move to an analog system.

An additional problem in this case is that we, as good chemists, don't know
shit about digital electronics and DSPs: we'd really prefer to keep this in
the analog domain until we send the final data to the ADC (a data
acquisition card in the computer).

An analog detection scheme has been implemented before, in a way that
somehow relates to Bill Sloman's description of the fluorescence
experiment: a very good way to avoid digitization of the exponential while
still using most of the information contained in the curve is integration,
which takes us to an analog boxcar integrator. Some smart guy realized that
if you integrate two "slices" of the same exponential at different points
in time, the time constant of the exponential can be obtained directly from
the ratio of these two integrals. For example, if
you integrate the interval of the exponential that goes from time t0 to t1
and -with a second boxcar module- the interval from t1 to t2, where both
intervals have the same duration, the ratio of these two integrals
eliminates all the other variables and gives you the time constant of the
exponential (you take the log of that ratio and divide the interval
duration by it, if I remember correctly). So the time constant can certainly
be determined
without knowing the other parameters of the function.
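
(A minimal numerical check of that ratio trick, with made-up numbers: for
two equal-length gates on v = A*exp(-t/tau), the second starting where the
first ends, each integral is A*tau*exp(-t_start/tau)*(1 - exp(-d/tau)), so
the ratio is exp(d/tau) and tau = d / ln(I1/I2), independent of A.)

import numpy as np

tau, A, t0, d = 3e-6, 3.0, 0.5e-6, 2e-6   # illustrative values only

t = np.linspace(0, 10e-6, 20001)
v = A * np.exp(-t / tau)
step = t[1] - t[0]
gate1 = (t >= t0) & (t < t0 + d)          # first gate, [t0, t0+d]
gate2 = (t >= t0 + d) & (t < t0 + 2 * d)  # second gate, [t0+d, t0+2d]
I1 = v[gate1].sum() * step                # boxcar integrals (crude sums)
I2 = v[gate2].sum() * step
print(d / np.log(I1 / I2))                # ~3e-6: recovers tau without knowing A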

This integration of the exponential is a good procedure but it is not
without its problems: it is not so easy to make sure that the two time
gates are precisely equal, their positions are very sensitive to trigger
levels, and for exponentials with not very different time constants one has
to use large integration intervals (as close to the full exponential as
possible) to get a good sensitivity. For example, an exponential with half
the time constant of another one will cover half the area, but only if you
integrate them to infinity. If you integrate a short interval near the
beginning of the exponentials the areas won't be so different and the
sensitivity will suffer.

This is why the idea of electronically taking the logarithm and then
differentiating, proposed a few years ago, looks so attractive: instead of
trains of exponentials of varying height one gets trains of square pulses,
each of which has a height (proportional to 1/tau, scaled only by the gain
factors of the amplifiers) that is determined by the time constant of the
exponential and not by its magnitude. Each pulse can thus be integrated
with a single boxcar module instead of two, and since it is supposed to be
of constant height there is no special sensitivity to the precise
positioning of the gate as long as it is inside the pulse (ideally one
would cover most of it). Also, a bunch of them can be averaged by the
boxcar itself so that in the end we only acquire one point.

An alternative to this would be the use of a lock-in amplifier, but it'd
require more refined electronics to "clean" the shapes outside these square
pulses (basically to make them zero); as René said, there will be nasty
spikes caused by the differentiator when the cavity is filling with light
(positive slopes). As long as they don't saturate the opamp and it can take
care of the next pulse, spikes don't matter if one uses the boxcar approach
because it is integrating only the useful part of the total waveform, not
the spikes.

The main problem of this whole analog approach would be the noise
introduced by the analog electronics in the logarithm/differentiation
process (plus the "mathematical" noise of the logarithm, thanks Bill).

Btw, René, thanks for pointing out the differences between pseudo and true
log amps: if we finally adopt this scheme we certainly seem to need a true
one, and so far only the TL441AM -that we have found- seems to fit the
bill. Not easy to find these gizmos.

Anyway... thanks to all for your suggestions and help, they have been quite
useful: we have realized that we really need to think more about this and
probably do some trials before deciding what the final configuration will
be. It is especially important to evaluate whether the prospective gains of
such a system will be offset by the extra noise introduced by the
additional electronics.
 

John Popelish

Moi wrote:
(snip)

You still have not addressed the live correction for the infinite time
(zero signal) offset. No log amplifier has long term zero signal
stability, since the gain is ideally infinity for a zero amplitude
signal. If there is a similar DC offset added to all of the signals
that pass through the log process, the slope of the log output no longer
gives you the time constant.

Moi wrote:
This integration of the exponential is a good procedure but it is not
without its problems... [snip]

One neat solution to this problem of optimizing the sample intervals
to the measured time constant is to make it a closed loop process that
adjusts the pair of intervals (keeping them in a constant duration
ratio) from a given starting time to force a single value of ratio.
The time intervals are then proportional to the time constant,
while the accuracy of the ratio process is optimized but not used as
an output.
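
(A toy numerical version of that closed-loop idea, as I read it; the loop
gain and the target ratio are arbitrary. For gates [0, d] and [d, 2d] on
exp(-t/tau) the ratio of the two integrals is exp(d/tau), so servoing d
until the ratio hits a fixed target makes d itself the output, proportional
to tau.)

import numpy as np

target = np.e          # aim for ratio e, which makes d converge to tau itself
tau_true = 3e-6        # the "unknown" time constant of the incoming decays
d = 1e-6               # initial gate width

for _ in range(50):
    ratio = np.exp(d / tau_true)           # what the two-gate ratio would read
    err = np.log(ratio) - np.log(target)   # error signal, taken in log space
    d -= 0.5 * d * err                     # crude proportional correction
print(d)                                   # converges to ~3e-6 (= tau_true)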
 

Rene Tschaggelar

Moi said:

thanks for the explanations.
The main problem of this whole analog approach would be the noise
introduced by the analog electronics in the logarithm/differentiation
process (plus the "mathematical" noise of the logarithm, thanks Bill).

Btw, René, thanks for pointing out the differences between pseudo and true
log amps: if we finally adopt this scheme we certainly seem to need a true
one, and so far only the TL441AM -that we have found- seems to fit the
bill. Not easy to find these gizmos.

The AD8304 & AD8305 from Analog Devices would fit too.

Rene
 

Moi

John said:
You still have not addressed the live correction for the infinite time
(zero signal) offset.  No log amplifier has long term zero signal
stability, since the gain is ideally infinity for a zero amplitude
signal.  If there is a similar DC offset added to all of the signals
that pass through the log process, the slope of the log output no longer
gives you the time constant.

Yes, this is a nasty problem and I don't have a good solution for it. I
have considered adding a small offset, as you say (small enough not to
modify the slope significantly). It is not ideal, but in my case it might
suffice: when you work out the math for the slope, if one adds a constant
offset to the initial exponential, the factor that makes the slope of the
logarithm deviate from the "true" value of -1/tau grows with time, so that
at some point the deviation becomes too large to be acceptable. In the
picture of these final trains of square pulses that I try to get after
differentiating, this is equivalent to saying that the pulses no longer
have a flat top; instead, the top of each pulse has a very slight
curvature, so that (at least for an offset that is very small compared to
the initial value of the exponential) the pulse basically starts with the
correct height, a value determined by the time constant, but then this
height drifts slowly with time. In practical terms, this limits the
integration interval that one can use before the value obtained is too
different from the real time constant.

To give an idea of the magnitude of these factors, our exponentials for an
empty cavity have an initial amplitude of around 3 volts and a time
constant of 3 us. If we wanted to be able to integrate these quasi-square
pulses up to 3 time constants (9 us), which is more or less a typical time
interval in this type of experiment, with the height of the pulses being
within a maximum of 1% of the "real" time constant in the full integration
interval, the offset applied should be of no more than 2 mV. It is a little
tight. It is always possible to slightly reduce the integration interval or
to live with less accuracy than this 1% (in fact 1% is a harsh demand; in
these spectroscopy experiments there are other sources of noise that
frequently end up being more important). I think the compromise would be
more than acceptable for us.
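
(Here is the arithmetic behind that estimate as a short script, in case
anyone wants to vary the numbers; it is only a sketch. With
v(t) = V0*exp(-t/tau) + c, the slope of ln(v) is -(1/tau)*s/(s + c), where
s = V0*exp(-t/tau), so the fractional deviation from -1/tau is c/(s + c),
worst at the end of the integration interval.)

import numpy as np

V0, tau, c = 3.0, 3e-6, 2e-3      # 3 V, 3 us, and a 2 mV added offset
t_end = 3 * tau                   # integrate the quasi-square pulse out to 3 tau
s = V0 * np.exp(-t_end / tau)
print(c / (s + c))                # fractional slope error at 3 tau for this offset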

If somebody has a better solution I'd also be very glad to hear it.

Thanks.
 

John Larkin

Moi wrote:
Yes, this is a nasty problem and I don't have a good solution for it. [snip]


Why are you going to a lot of trouble to process and measure a
voltage, when the thing you really want is time?

John
 

Moi

Why are you going to a lot of trouble to process and measure a
voltage, when the thing you really want is time?

Sorry about the long time it took for me to reply.

Well... what we want to measure is a time constant, which is of course
given by the evolution of the voltage in time, but I suppose your comment
refers to the possibility you mentioned: if I understood it correctly, it
would be as simple as establishing two voltage reference points (two
"triggers" at well-known voltage values) and just measuring the time it
takes for the voltage to go from the first to the second.

Leaving aside the use of FPGAs, it seems to me the idea is perfectly sound
on paper, but it is a less accurate way of measuring the time constant: we
would be taking just two voltage/time data points and fitting the
exponential waveform to them. It would work, but it would be a lot more
susceptible to measurement noise than any of the other methods that measure
-or integrate- the evolution of the voltage, thus sampling plenty of data
points along the exponential, which translates into substantial
improvements in S/N ratio (or in the standard deviation of the fit, if you
want). Of course all these methods, like the ADC and direct fitting of the
exponential, are more "expensive" in terms of the demand they place on the
acquisition system, hence our desire to find a "smart" way of implementing
them in analog.
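
(To put a rough number on that, a quick Monte Carlo sketch; the noise
level, sample positions and fit window are arbitrary assumptions. It
compares a two-sample estimate of tau against a least-squares fit of ln(v)
over many samples, on the same noisy waveforms.)

import numpy as np

rng = np.random.default_rng(0)
tau, V0, sigma, n_trials = 3e-6, 3.0, 0.02, 2000   # assumed 20 mV rms noise
t = np.linspace(0, 9e-6, 100)                      # ~100 points per waveform

two_point, full_fit = [], []
for _ in range(n_trials):
    v = V0 * np.exp(-t / tau) + rng.normal(0, sigma, t.size)
    i1, i2 = 5, 38                                 # two samples, ~3 us apart
    two_point.append((t[i2] - t[i1]) / np.log(v[i1] / v[i2]))
    m = t < 2 * tau                                # fit the first two tau only
    slope = np.polyfit(t[m], np.log(np.clip(v[m], 1e-6, None)), 1)[0]
    full_fit.append(-1.0 / slope)

print(np.std(two_point), np.std(full_fit))   # the full fit scatters noticeably less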

Of course we would be taking plenty of exponentials per second, but then,
what is the point of going to these fast decay-generation rates, with all
the instrumental complexity involved, if it doesn't translate into
improvements in precision or (for a given precision) in acquisition
times...

Thanks!
 

Bret Cannon

You might consider using the decay time to determine a loop delay in an
oscillator and then monitor the frequency of the oscillator. I have a vague
memory of hearing a talk describing using this approach to measure small
changes in the response time of a system. I don't remember any details.

Bret Cannon
 