Maker Pro

DIY PC Oscilloscope


Nico Coesel

Francesco Poderico said:
Jeff said:
6. Presumably, it's dual channel. If so, make it stackable via a
common bus or preferably via ethernet, so that multiple units can be
conglomerated into a 4, 6, 8, etc. channel scope.

From my (rather narrow) point of view, where the main use of an
oscilloscope is in debugging embedded software, this would make it
stand out above the competition.

Thanks Roberto,
I'm not sure what you mean by a stackable oscilloscope. Do you mean, e.g., 4 oscilloscopes of 2 channels each that make an 8-channel oscilloscope?
The only problem I see with that is that between each scope you can have 20 ns max. of time offset.

If you distribute the trigger and use a global reference clock to
which all the ADCs are synchronised it will work. At least that is how
I planned it for my own USB scope design about 9 years ago. But then
Rigol emerged so I stopped the project.
 

Tim Williams

Nico Coesel said:
If you distribute the trigger and use a global reference clock to
which all the ADCs are synchronised it will work. At least that is how
I planned it for my own USB scope design about 9 years ago. But then
Rigol emerged so I stopped the project.

You still have the propagation delay of the clock (and trigger if shared)
cascading between modules. You could characterize each unit's delay and
adjust in the DLL, perhaps, or perform a round-trip self-calibration, or
ask the user to run a BNC cable from the "probe test" connector on the
master unit down to each successive input; assuming the cable remains
constant during the test, this would allow the inputs to be adjusted for
zero time difference.

Tim
 

Francesco Poderico

Tim Williams said:
You still have the propagation delay of the clock (and trigger if shared)
cascading between modules. You could characterize each unit's delay and
adjust in the DLL, perhaps, or perform a round-trip self-calibration, or
ask the user to run a BNC cable from the "probe test" connector on the
master unit down to each successive input; assuming the cable remains
constant during the test, this would allow the inputs to be adjusted for
zero time difference.

Tim

Thanks to everybody for the suggestions,
I will think about it... as it could be a good selling point...
I'm a bit concerned about the clock synchronization between different FPGAs; at 200 MHz it's not easy, and you can easily end up with 1 or 2 clocks of skew between modules...
I need to think about it.
 

JW

[snip]
Yeah, but it has Lecroy's name on it and that's like GOLD!!!!!!!!!!!

I dunno... Every time I pick one up on the surplus market, they sell
poorly and for low $ compared to a Tek or Agilent of similar specs. There
are guys who will buy nothing else *but* Lecroy, but they seem to be a
very small group compared to Tek and Agilent fans. Consequently, I haven't
bought one in over a year; I end up sitting on them too long.

One thing positive I will say about them: I've never seen a 93XX or LC
series scope alias, whereas Tek and Agilent scopes of similar vintage are
very easy to get to alias.
 

Tim Williams

JW said:
One thing positive I will say about them: I've never seen a 93XX or LC
series scope alias, whereas Tek and Agilent scopes of similar vintage are
very easy to get to alias.

Not necessarily an advantage -- in the old days, of course, the analog
trace simply looked fuzzy due to any modulation that was too fast to
resolve, or not harmonically related to the trigger source. On digital
scopes, it's best to have the option. Many times I've looked at a signal
and thought, "where did it go?" Zooming in, it shows up. Tek's "high
res" mode, I think, does a crude oversampling IIR effect, which looks very
nice around traces with much noise or ripple (clearing away noise I would
have trouble cutting through otherwise), but also misses that information.

Speaking of noise, it aliases too, correct? As long as the bands are
uncorrelated (to each other; correlation to the sample rate I don't think
matters, as long as it's still "noisy"), it should go as V = Vo*sqrt(B /
(2*Fs)), given a noise source of full-bandwidth amplitude Vo, bandwidth B,
and sample rate Fs. But then, random sampling has to have the same
statistics as the source signal. I seem to be forgetting something.

Tim
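Tim's noise-statistics question can be sanity-checked numerically. This is a sketch only: white Gaussian noise stands in for the broadband source, and plain decimation stands in for sampling without an anti-alias filter.

```python
import random
import statistics

random.seed(42)

# Broadband "analog" noise, unit RMS, at a high simulation rate.
noise = [random.gauss(0.0, 1.0) for _ in range(200_000)]

def rms(xs):
    return statistics.pstdev(xs)

# Decimating without an anti-alias filter models sampling noise whose
# bandwidth far exceeds Fs/2: the out-of-band power folds in rather
# than disappearing, so the per-sample RMS is unchanged.
undersampled = noise[::16]
print(rms(noise), rms(undersampled))   # both close to 1.0
```

The per-sample RMS survives decimation untouched, consistent with Tim's remark that the sampled values must share the source's statistics; only the spectral distribution of the power changes.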
 

rickman

Francesco Poderico said:
Thanks to everybody for the suggestions,
I will think about it... as it could be a good selling point...
I'm a bit concerned about the clock synchronization between different FPGAs; at 200 MHz it's not easy, and you can easily end up with 1 or 2 clocks of skew between modules...
I need to think about it.

There are a lot of things to consider when designing a scope like this.
I wouldn't bother too much up front with trying to include all the
fancy features. If you want a four or eight channel scope, it can be
done as a separate design using modules from the original better than
trying to make the entire design into a cascading unit of some sort.
Although connecting modules as cascaded units wouldn't really be all
that hard. If the modules are made to snap together with a mating
connector, like the TI-99 computer used to do, then the paths would be
short and you could actually distribute the clock so that all delays
were equal.

I think the cascading is practical, but I just don't think it adds a lot
to the value of the device.

If you design a good, two channel low cost scope, I am sure it will do
well. But I don't think 25 MHz is good enough to set yourself apart.
As others have said, there are a number of 20/25 MHz units available in
the low $100's. If it isn't 300 MHz or so analog bandwidth, why bother?
Those units tend to be rather pricey like >$1000. That gives you some
room to really beat their price.

I was looking at doing a low end scope on what would be close to a
single chip. There is a multiprocessor chip that should be able to do
everything but talk over USB for a 10 MHz scope. It even includes ADCs
which have a variable resolution so that they could be used for faster
low res signals as high as 50 MSPS or slower high res, down to 15/16
bits at CD rates. So there can be better ways to do a low end scope
than the traditional scope design.

A major feature I would like to see is a 16 channel logic input along
with the two analog channels. Also keep in mind that it needs to run
off the USB power which is limited to 2.5 Watts. If the scope requires
a separate wire for power, it becomes much less attractive.

Someone mentioned that they didn't think 3K was deep enough channel
memory and I'll second that. One SDRAM chip can provide MBs of memory
at very high speeds and not use a lot of power.

If you are really serious about this, I might be interested in helping
with the digital design and the FPGA code. That's my focus area.

My last comment on this is about the software. Others have mentioned
that the software is where all the features are added. That is true, so
don't shortchange the difficulty of getting the software done and done
right. I could see this being integrated with one of the many single
board computers for a portable device as an option in place of being
tethered to a PC. I read that there is a new BeagleBone coming out
which includes video, although I suppose there is nothing much wrong
with the raspPi really.

I would like to hear from others about why the front end is the hard
part. Exactly how do the attenuators work? Does the amp remain set to
a given gain and the large signals are attenuated down to a fixed low
range?
 

Francesco Poderico

Tim Williams said:
Not necessarily an advantage -- in the old days, of course, the analog
trace simply looked fuzzy due to any modulation that was too fast to
resolve, or not harmonically related to the trigger source. On digital
scopes, it's best to have the option. Many times I've looked at a signal
and thought, "where did it go?" Zooming in, it shows up. Tek's "high
res" mode, I think, does a crude oversampling IIR effect, which looks very
nice around traces with much noise or ripple (clearing away noise I would
have trouble cutting through otherwise), but also misses that information.

Speaking of noise, it aliases too, correct? As long as the bands are
uncorrelated (to each other; correlation to the sample rate I don't think
matters, as long as it's still "noisy"), it should go as V = Vo*sqrt(B /
(2*Fs)), given a noise source of full-bandwidth amplitude Vo, bandwidth B,
and sample rate Fs. But then, random sampling has to have the same
statistics as the source signal. I seem to be forgetting something.

Tim

Hi all, thanks a lot for all the suggestions...
I have a problem... I believe some of you may have an idea of what is going on here...
I'm using sin(x)/x interpolation, which works OK, but I have a problem...
if you go to my blog: http://thefpproject01.blogspot.co.uk/
you can see that the interpolation algorithm I'm using gives a wobbly waveform... but with linear interpolation it's OK!
What is going on here? What am I doing wrong?
 

Nico Coesel

rickman said:
There are a lot of things to consider when designing a scope like this.
I wouldn't bother too much up front with trying to include all the
fancy features. If you want a four or eight channel scope, it can be
done as a separate design using modules from the original better than
trying to make the entire design into a cascading unit of some sort.
Although connecting modules as cascaded units wouldn't really be all
that hard. If the modules are made to snap together with a mating
connector, like the TI-99 computer used to do, then the paths would be
short and you could actually distribute the clock so that all delays
were equal.

I think the cascading is practical, but I just don't think it adds a lot
to the value of the device.

If you design a good, two channel low cost scope, I am sure it will do
well. But I don't think 25 MHz is good enough to set yourself apart.
As others have said, there are a number of 20/25 MHz units available in
the low $100's. If it isn't 300 MHz or so analog bandwidth, why bother?
Those units tend to be rather pricey like >$1000. That gives you some
room to really beat their price.


I would like to hear from others about why the front end is the hard
part. Exactly how do the attenuators work? Does the amp remain set to
a given gain and the large signals are attenuated down to a fixed low
range?

You need to use capacitive dividers which need adjustment. In my
design I used one varicap (controlled by a DAC) to do all the
necessary adjustments for several ranges. Nowadays you could use a 12
bit ADC so you wouldn't need a variable gain amplifier. Another trick
to get a programmable range is to vary the reference voltage of the
ADC. I think I re-did the design of the front-end about 3 or 4 times.
 

Tim Williams

Francesco Poderico said:
Hi all, thanks a lot for all the suggestions...
I have a problem... I believe some of you may have an idea of what is going on here...
I'm using sin(x)/x interpolation, which works OK, but I have a problem...
if you go to my blog: http://thefpproject01.blogspot.co.uk/
you can see that the interpolation algorithm I'm using gives a wobbly waveform... but with linear interpolation it's OK!
What is going on here? What am I doing wrong?

Without looking at your code, or playing with it, I'd have to guess you
got the T or Fs incorrect.

The formula posted is just a convolution, of course (as well it should be,
since it's filtering a time-domain signal, and sinc(t) is the complement
of a rect(f) "brick wall" filter). Which is tedious for math (doing it
literally consumes lots of sines and divides), but not intractable (the
samples are windowed by the buffer length, so the infinite sum over n
reduces to buffer length only).

A better way of writing the argument is,
sinc(t/T - n)
since this showcases the dimensionless ratio, which ultimately is all we
really care about, since digital up-sampling just uses different ratios in
T. This is actually:
sinc(x*(Ts / Td) - n)
for screen coordinate x, sample n, and sample rate Ts and displayed sample
rate Td, which is actually Td = (selected time/div) / (graphical
pixels/div).

A more practical implementation might also window the sinc function, with
a rect function for starters, but perhaps using another function as well,
like a good old Hamming window, to keep it smooth. The actual number of
samples convolved need not be too much; a few times the Told/Tnew ratio
should be sufficient. Then you might as well precalculate it and shove it
in a table, since it doesn't need to change. (Okay, it might change if
you want to make the graphical display resizable.)

Tim
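The windowed-sinc convolution Tim describes can be sketched in pure Python. This is a minimal illustration, not Francesco's actual code; the Hamming window and the tap count are arbitrary choices, and the normalization by the coefficient sum simply keeps the edges of the buffer well behaved.

```python
import math

def sinc_interpolate(samples, ratio, taps=8):
    """Upsample by integer factor `ratio` using a Hamming-windowed sinc.

    Each output point convolves `taps` input samples on either side;
    the window truncates the (theoretically infinite) sinc kernel.
    """
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

    out = []
    for i in range(len(samples) * ratio):
        t = i / ratio                       # position in input-sample units
        n0 = int(t)
        acc, wsum = 0.0, 0.0
        for n in range(n0 - taps, n0 + taps + 1):
            if 0 <= n < len(samples):
                k = t - n                   # the dimensionless ratio above
                w = 0.54 + 0.46 * math.cos(math.pi * k / taps)  # Hamming
                c = sinc(k) * w
                acc += samples[n] * c
                wsum += c
        out.append(acc / wsum if wsum else 0.0)
    return out
```

Note that at integer positions (t landing exactly on an input sample) every sinc term but one is zero, so the interpolator passes the original samples through unchanged, which is one quick way to spot a wrong T or Fs.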
 

rickman

Nico Coesel said:
You need to use capacitive dividers which need adjustment. In my
design I used one varicap (controlled by a DAC) to do all the
necessary adjustments for several ranges. Nowadays you could use a 12
bit ADC so you wouldn't need a variable gain amplifier. Another trick
to get a programmable range is to vary the reference voltage of the
ADC. I think I re-did the design of the front-end about 3 or 4 times.

I don't get how a 12 bit ADC solves the attenuator problem. That is
only 2 bits more than what I would like to see in a front end. Once a
waveform is captured, I would like to be able to zoom in a bit to view
detail, I think that requires 10 bits. This only leaves 2 bits of
flexible range which is just a factor of 4x. Even if you are happy with
a vertical range using 8 bits, that only leaves a factor of 16x. Most
scope front ends I have seen work over a range of multiple decades, 16
mV to 80 V FS on my scope. That would require over 12 bits just to
match that range, not counting the bits you want for the display. I
suppose a 16 bit converter could be used leaving less than 4 bits on the
most sensitive ranges. I know they are faster nowadays; any idea how
fast a 16 bit converter is?

The reason adjusting the reference voltage on the ADC is a bad idea is
that the noise floor goes up as your reference voltage goes down. That
just reduces the ENOB.
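rickman's range arithmetic can be checked directly; the figures below are taken from his post, and the one-liner makes the point that no practical ADC spans the whole range without an attenuator.

```python
import math

# Numbers from rickman's post: 16 mV to 80 V full scale.
span = 80.0 / 16e-3                  # ratio of largest to smallest range
bits_for_span = math.log2(span)
print(f"span = {span:g}x -> {bits_for_span:.1f} bits")   # 5000x, ~12.3 bits

# Adding the ~10 bits he wants for on-screen zoom detail:
print(f"total = {bits_for_span + 10:.1f} bits")          # ~22.3 bits
```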
 

Tim Williams

rickman said:
I know they are faster nowadays; any idea how fast a 16 bit converter
is?

How fast can you afford? ;-)

IIRC, 14 or 16 bit goes for ca. $1/MSPS. Beware the INL usually stops at
12 bits, typical of pipelined types (which the fast high-bit ones all
are). DNL is all that matters for SDR, so they aren't useless; INL could
be calibrated, as long as you can generate a 16 bit-linear ramp (now
Larkin will chime in..).

Tim
 
Tim Williams said:
How fast can you afford?  ;-)

IIRC, 14 or 16 bit goes for ca. $1/MSPS.  Beware the INL usually stops at
12 bits, typical of pipelined types (which the fast high-bit ones all
are).  DNL is all that matters for SDR, so they aren't useless; INL could
be calibrated, as long as you can generate a 16 bit-linear ramp (now
Larkin will chime in..).

Tim

Analog has a 16-bit 250 MSPS part, it is "only" ~$150



-Lasse
 

Nico Coesel

rickman said:
I don't get how a 12 bit ADC solves the attenuator problem. That is
only 2 bits more than what I would like to see in a front end. Once a

It's a factor of 16 attenuation you don't need to do in hardware. So if
the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable
with 2 relays) you save quite some circuitry. 8 bits is probably more
than enough. It will be hard to get the response so flat that more
than 8 bits actually adds accuracy to the readout.
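Nico's scheme can be sketched with his own figures: three relay-selected taps plus the spare ADC bits used as digital gain. The check below is illustrative only; the point is that if adjacent taps sit closer together than the digital span, the ranges tile with no gaps.

```python
taps = [1.5, 10.0, 100.0]            # hardware attenuation ratios
digital_span = 2 ** (12 - 8)         # 16x from 12 ADC bits vs 8 displayed

for a, b in zip(taps, taps[1:]):
    step = b / a
    print(f"1:{a:g} -> 1:{b:g}: step {step:g}x, gapless: {step <= digital_span}")
```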
 

Spehro Pefhany

OK, here's a patentable idea that I donate to humanity:

The scope trigger fires a delay and then makes some analog pattern, like a sine
burst or chirp or some pseudo-random mess made from delay lines or something.
Something with a nice sharp autocorrelation function.

Mix that into the scope vertical signal. The PC software looks at that, figures
out its timing, and removes the +-1 clock jitter from the displayed data.

The burst can be delayed and thereby separated in time from the displayable
waveform, or it can be superimposed, no delay, and subtracted out.

Of course, the idea works even better if you have another ADC channel to spare.

We're doing something like that to synchronize phase on several
free-running ADCs in a distributed data acquisition system on a
non-deterministic network. Works, but I'd rather have it done by
hardware.
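The timing-recovery step in this idea can be shown with a toy cross-correlation. This is a pure-Python sketch: a 13-bit Barker code stands in for the chirp / pseudo-random burst (it has the "nice sharp autocorrelation" asked for), and the record is noiseless for clarity.

```python
def xcorr_peak(signal, template):
    """Integer-sample lag where `template` best matches `signal`.
    A real implementation would interpolate around the peak to pull
    out the sub-sample (+-1 clock) timing error being corrected."""
    best_lag, best = 0, float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        s = sum(signal[lag + i] * template[i] for i in range(len(template)))
        if s > best:
            best, best_lag = s, lag
    return best_lag

# 13-bit Barker code: aperiodic autocorrelation sidelobes of at most 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

# Bury the pattern at a known offset in an otherwise idle record.
record = [0.0] * 100
offset = 37
for i, b in enumerate(barker13):
    record[offset + i] += b

print(xcorr_peak(record, barker13))   # recovers 37
```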
 

rickman

Nico Coesel said:
It's a factor of 16 attenuation you don't need to do in hardware. So if
the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable
with 2 relays) you save quite some circuitry. 8 bits is probably more
than enough. It will be hard to get the response so flat that more
than 8 bits actually adds accuracy to the readout.

I've worked with 8 bit scopes and the vertical clearly shows steps which
I find interfere with making reasonable measurements. That's why I said
2 spare bits out of 12. There is also a need for zooming in on a
portion of a captured trace. At 8 bits all you see is the steps. With
a full 12 bits you have a little bit of extra resolution so you can
actually get a bit of detail.
 

John Devereux

rickman said:
I've worked with 8 bit scopes and the vertical clearly shows steps
which I find interfere with making reasonable measurements. That's
why I said 2 spare bits out of 12. There is also a need for zooming
in on a portion of a captured trace. At 8 bits all you see is the
steps. With a full 12 bits you have a little bit of extra resolution
so you can actually get a bit of detail.

I agree a 12 bit (or 16 bit!) scope would be nice. Lecroy make one I
think but it is very expensive.

The situation with the standard 8 bits is not quite as bad as you
portray in a higher end scope. They can sample at the full maximum
digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it
so that each point plotted at lower sweep rates represents the average
of hundreds of samples potentially. The noise at 20GSPS smears out the
steps then the averaging smooths out the noise. Or something like that.
Anyway the result is much better than you would think from the 8 bit
input.
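Roughly what John describes can be demonstrated numerically. This is an illustrative sketch, not any particular scope's DSP; the noise level and averaging depth are arbitrary choices.

```python
import random
random.seed(1)

LSB = 1 / 256                        # 8-bit style quantizer step

def quantize(x):
    return round(x / LSB) * LSB

true_value = 0.2990                  # sits between two 8-bit codes
plain = quantize(true_value)         # single-shot reading: ~0.5 LSB off

# "High res" style acquisition: front-end noise of a couple of LSBs
# dithers the quantizer, and averaging many fast samples recovers
# sub-LSB resolution from an 8-bit converter.
n = 10_000
avg = sum(quantize(true_value + random.gauss(0, 2 * LSB)) for _ in range(n)) / n

print(abs(plain - true_value), abs(avg - true_value))
```

The averaged error comes out well under the raw quantization error, which is the mechanism behind the smeared-then-smoothed steps described above.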
 

Nico Coesel

rickman said:
I've worked with 8 bit scopes and the vertical clearly shows steps which
I find interfere with making reasonable measurements. That's why I said

That has more to do with how the software shows the signal.
Interpolation can solve a lot.
 

Tim Williams

John Devereux said:
The situation with the standard 8 bits is not quite as bad as you
portray in a higher end scope. They can sample at the full maximum
digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it
so that each point plotted at lower sweep rates represents the average
of hundreds of samples potentially. The noise at 20GSPS smears out the
steps then the averaging smooths out the noise. Or something like that.
Anyway the result is much better than you would think from the 8 bit
input.

It also reduces aliasing. Mine has a "high res" mode which does this --
only works below a certain range, of course.

Tim
 

rickman

John Devereux said:
I agree a 12 bit (or 16 bit!) scope would be nice. Lecroy make one I
think but it is very expensive.

The situation with the standard 8 bits is not quite as bad as you
portray in a higher end scope. They can sample at the full maximum
digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it
so that each point plotted at lower sweep rates represents the average
of hundreds of samples potentially. The noise at 20GSPS smears out the
steps then the averaging smooths out the noise. Or something like that.
Anyway the result is much better than you would think from the 8 bit
input.

You say that the 16 bit converters are expensive, then talk about using
a 20 GSPS 8-bit ADC. Is that not expensive, not to mention the clocking,
the board for the high speed signals and the power supply to make all
this happen? I can't imagine this is actually a better approach to
designing a scope with a stated goal of 20-25 MHz bandwidth. I would
like to see at least 300 MHz, but the OP says 25 is good enough.
 