Maker Pro

Questions about equivalents of audio/video and digital/analog.

Bob Myers

Randy Yates said:
It isn't reasonable to you. Don't publish opinion as fact.

OK, it's not reasonable to ME, either, if you're impressed
by taking a vote on this sort of thing.

The problem with the definition that you and Floyd seem to
want to use is that it leads to several problems in both
theory and practice, in addition to the fact that there are
numerous counter-examples one can point to.

"Reasonable" would seem (at least to me) to mean that you
can justify your definition *through reason*, which Don has
done. Simply pointing to a published work, including a
standard, as a reference to support your definition is what's
called an "argument from authority," and it has exactly zero
weight in light of an opposing argument based on evidence
and logic. However, if you like, I can also point to several
references which support the definition that Don and I (and
I believe others) are proposing. You might claim the list to
be invalid, however, since it would contain works that I
myself wrote for publication. Which is, of course, the whole
point - simply having your statements published does NOT
make them any more or less correct; the deciding factor is
whether or not they can be shown to be true through evidence
and logic.

Bob M.
You haven't looked very far. Here is an example (a power calculation):

The question was flawed to begin with, though - "DSP" stands
for "DIGITAL signal processing," which by definition could
not have been done on information that was simply "sampled
in time." Such information would also have to be digitally encoded
in order to be subject to "DSP."
I won't argue that the current usage isn't good nomenclature, but that's
the way things have developed historically.

A common misuse or misunderstanding does not become
less so merely because it IS common.

Bob M.
 
Bob Myers

It's the quantisation which makes something "digital".

So is the output of an ideal D/A converter "digital,"
then? It is most certainly quantized; it cannot take
on any values between adjacent output levels,
which are themselves separated by one "LSB" step
size.

What makes something "digital" is the representation
of information by numeric values ("digits"), or their equivalent,
as opposed to its representation by analogous variations
in some other quantity (which is "analog"). This is the
only definition which consistently makes sense.

Bob M.
 
Don Pearce

Don Pearce wrote:
(snip)


Floating point is still quantized, though not the same as fixed
point (integer) data.
Effectively it isn't. Of course if you apply this strictly, it is, but
a floating point number can be so detailed that it would be
essentially impossible to find the quantization steps in the real
world.
Instead of floating point, Mu-law and A-law coding are commonly
used for digitized audio:

http://en.wikipedia.org/wiki/Mu_law

http://en.wikipedia.org/wiki/A-law_algorithm

The result is similar to the use of floating point, but the
coding is different.
Nonsense. Mu and A law are simply a way of rescuing some decent
distortion performance from a limited number of bits by making the
quantization steps smaller as the signal gets smaller. The result is
lower noise with the penalty of slightly higher distortion. There is
no similarity whatever to floating point.

d
 
Don Bowey

No, Don had it right. A quantized analog signal
remains analog as long as the relative values of the
quantization levels, one to the other have significance;
they thus can carry information, which is the fundamental
goal of any such system.

No, it becomes a digitally encoded representation of a sample of an analog
voltage. First the continuously variable analog signal is sampled,
becoming, for example, PAM, which is still analog; it is then quantized
and may be fit to whatever digital or analog coding is desired. If
it's to a digital code, the signal is digital. If to an analog code, the
signal is analog.
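The chain just described (continuous signal, time samples, quantization, digital code) can be sketched end to end; the tone frequency, sample rate, and 4-bit quantizer below are arbitrary illustrative choices:

```python
import math

def signal(t):
    # continuously variable analog signal: a 1 kHz tone
    return math.sin(2 * math.pi * 1000 * t)

fs = 8000.0                                 # sample rate, Hz
pam = [signal(n / fs) for n in range(8)]    # sampled; amplitudes still continuous (PAM)

LEVELS = 16                                 # 4-bit quantizer

def quantize(x):
    # map [-1, 1] onto integer codes 0..LEVELS-1
    code = round((x + 1.0) / 2.0 * (LEVELS - 1))
    return max(0, min(LEVELS - 1, code))

codes = [quantize(x) for x in pam]          # quantized sample values
binary = [format(c, "04b") for c in codes]  # fit to a digital (binary) code
```

Only the last step makes the signal digital in Don's sense; the PAM list is still a set of analog amplitudes.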
 
Jerry Avins

Don Pearce wrote:

...
Quantization isn't important. If you don't quantize all it means is
that you are dealing with floating point rather than integer numbers.
Still digital of course. I can't think of any floating point ADC's off
hand, of course.

A sample is quantized as soon as it is represented by a finite number of
digits. You can't imagine that floating point has infinite resolution.

...

Jerry
 
Bob Myers

Those discrete levels *are* digits.

No, because it is the relative change between levels
which contains the information, not *necessarily* the
levels themselves.

Suppose you have an "analog" representation of an
audio signal, which is coming out of a D/A converter
and therefore DOES have discrete levels which it
may assume. As output by the D/A, this signal
ranges between 0 and +1.0V. Now suppose I
shift the signal so that it is now between -0.5 and
+0.5V, or amplify so that it is now between 0 and
+100V. What has changed in terms of the information
content in either case? Note that the discrete levels
most definitely have changed!
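That invariance is easy to check numerically; a minimal sketch with hypothetical D/A output levels:

```python
# Hypothetical D/A output samples in the 0..1 V range.
samples = [0.0, 0.125, 0.5, 0.875, 1.0]

shifted = [s - 0.5 for s in samples]       # now -0.5..+0.5 V
amplified = [s * 100.0 for s in samples]   # now 0..100 V

def normalize(xs):
    # strip away offset and gain, leaving only the relative shape
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

# The information-carrying relative shape is identical in all three,
# even though the discrete levels themselves have all changed.
assert normalize(samples) == normalize(shifted) == normalize(amplified)
```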

Now suppose I have developed an encoding system
which uses the following definitions (never mind WHY
I chose this encoding, we can just assume that I had
my reasons):

0V = "3"
1V = "1"
2V = "7"
3V = "4"
4V = "2"
5V = "0"
6V = "5"
7V = "6"

This is quite clearly a system which conveys three
bits of information per sample (symbol), but the
encoding is such that the levels themselves are NOT
related to each other in an "analog" fashion. Further,
a decoder which would be built to understand the
above would produce an erroneous output if, say,
the signal carrying this information were attenuated
or amplified by a factor of two. This is a "digital"
system (it's just not a *binary* encoding, which is
of course the most common form of "digital" in use
today), as the symbols themselves carry a defined
meaning which is not directly dependent on the
signal which carries them varying in a manner analogous
to the original.

Finally, consider the following, which you see on
an oscilloscope screen; for readability here, I will
use "L" for a low state in the wave form, and "H"
for high:

LLHHHLHLLLHHHHLLHHHH

Now, exactly what is this? If we assume it's carrying
information, it appears to be "sampled" in time - although
we can't be certain that the shortest low or high state
in this wave represents a single symbol - it might be that
we are happening to see a period where the shortest
continuous high or low is actually two, three, or twenty
symbol times. Is it a "digital" signal? Perhaps; you might
be tempted to say that you could interpret this string of
highs and lows as binary digits (again, if only you knew
how long a "digit" was), and it certainly appears to be
"quantized" in that it only exhibits two states.
However, it might also be a snapshot of a single line
of video, which happens to be varying between black
and white vertical bars - there is again, no way of knowing.

The bottom line - both "digital" and "analog" refer to
different means of encoding information, and you CANNOT
reliably distinguish one from the other without knowing
something about the intent of the encoding going on at
the transmitting end. You can take an educated guess
based on the most commonly-seen implementations of
these two, but you cannot be absolutely certain that your
guess is correct.

Bob M.
 
Jerry Avins

Don said:
Effectively it isn't. Of course if you apply this strictly, it is, but
a floating point number can be so detailed that it would be
essentially impossible to find the quantization steps in the real
world.

Nonsense. Mu and A law are simply a way of rescuing some decent
distortion performance from a limited number of bits by making the
quantization steps smaller as the signal gets smaller. The result is
lower noise with the penalty of slightly higher distortion. There is
no similarity whatever to floating point.

There's a gap in your understanding. The "segment" is the equivalent of
floating point's exponent, and the bits that divide the segment into
equal parts are like floating point's mantissa.
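That segment/interval structure can be sketched with the standard mu-law compression formula (mu = 255, as in North American telephony); the step sizes come out finer near zero and coarser near full scale, just as a mantissa/exponent split would give:

```python
import math

MU = 255.0  # mu-law parameter

def mu_compress(x):
    # classic mu-law compressor for x in [-1, 1]
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    # inverse of mu_compress
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def encode(x, bits=8):
    # quantize the compressed value to an integer code, as a codec would
    q = 2 ** (bits - 1) - 1
    return round(mu_compress(x) * q)

# Effective step size near zero is far smaller than near full scale --
# the same trade a floating-point exponent buys for its mantissa.
small_step = mu_expand(1 / 127) - mu_expand(0)
big_step = mu_expand(127 / 127) - mu_expand(126 / 127)
assert small_step < big_step
```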

Jerry
 
Don Pearce

Don Pearce wrote:

...


A sample is quantized as soon as it is represented by a finite number of
digits. You can't imagine that floating point has infinite resolution.

Sure, but you may be quantizing to many trillions of finer steps, so
that effectively the thing is almost not quantized.
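For scale, the quantization step of a 64-bit float near 1.0 can be checked directly (Python 3.9+ for `math.ulp`):

```python
import math

# A double is quantized: the step ("unit in the last place") near 1.0
step = math.ulp(1.0)            # 2**-52, about 2.22e-16
assert step == 2.0 ** -52

# Roughly 4.5e15 steps between 1.0 and 2.0 -- thousands of trillions,
# far below any real-world noise floor.
steps_per_unit = 1.0 / step
assert steps_per_unit == 2.0 ** 52
```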

d
 
Scott Seidman

Bob Myers wrote:

(snip)


How about, Analog implies "infinite" precision in the absence of
noise, including fundamental quantum noise.

Note, for example, that an analog current is quantized in units
of the charge on the electron.

-- glen

Doesn't "analog" also imply that x(t) exists for all t in range, and not
just at nT for all n in range? Or would people just call that "sampled"?
 
Richard Crowley

Radium's ability to suck so many people into attempting to
answer insane questions is reaching legendary heights.
I hereby nominate him for the Troll Hall of Fame with special
endorsement for use of technical gobbledygook.
 
glen herrmannsfeldt

Don Pearce wrote:
(snip)
Quantization isn't important. If you don't quantize all it means is
that you are dealing with floating point rather than integer numbers.
Still digital of course. I can't think of any floating point ADC's off
hand, of course.

Floating point is still quantized, though not the same as fixed
point (integer) data.

Instead of floating point, Mu-law and A-law coding are commonly
used for digitized audio:

http://en.wikipedia.org/wiki/Mu_law

http://en.wikipedia.org/wiki/A-law_algorithm

The result is similar to the use of floating point, but the
coding is different.

-- glen
 
Jerry Avins

glen said:
Jerry Avins wrote:

(snip)


I believe that some of the early machines used 10 wires.

With ten neon lamps stacked vertically for each digit at first, then
Nixie tubes.
Biquinary, with seven wires, one of two and one of five, has
also been used.

That was so entrenched that TI's first IC decimal counter could be
configured as a biquinary device. It had divide-by-two and
divide-by-five sections.
In both cases each wire has one of two values, but it isn't very
"binary like".

It really depends on context. From a circuit viewpoint, I think of
"binary" as implying a single receiver threshold.

Jerry
 
Martin Heffels

Radium's ability to suck so many people into attempting to
answer insane questions is reaching legendary heights.
I hereby nominate him for the Troll Hall of Fame with special
endorsement for use of technical gobbledygook.

I vote: aye
 
Bob Myers

How about, Analog implies "infinite" precision in the absence of
noise, including fundamental quantum noise.

Except that "absence of noise" is a condition which
doesn't exist, even in theory.

ALL systems, digital, analog, or whatever, are limited in
information capacity by (a) the bandwidth of the channel
in question and (b) the level of noise within that channel,
per the aforementioned Gospel According to Shannon.
This is exactly the same thing as saying that there is a limit
to "precision" or "accuracy," as infinite precision implies
an infinite information capacity (i.e., given infinite precision,
I could encode the entire Library of Congress as a single
value, since I have as many effective "bits of resolution"
as I would ever need).
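The Shannon-Hartley limit behind that argument is a one-liner to check; the channel numbers below are illustrative, not from the thread:

```python
import math

def capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley: C = B * log2(1 + S/N), in bits per second.
    # Finite noise means finite capacity, hence finite effective precision.
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# e.g. a 3.1 kHz voice channel at 30 dB SNR:
snr = 10 ** (30 / 10)            # 30 dB -> 1000x power ratio
c = capacity(3100.0, snr)        # roughly 31 kbit/s
```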
Note, for example, that an analog current is quantized in units
of the charge on the electron.

Sure is. So isn't it a good thing that we don't confuse either
"analog" or "digital" with either "quantized" or "continuous"?

Bob M.
 
glen herrmannsfeldt

Randy Yates wrote:

(snip)
Let me back-pedal a little and say that, yeah, colloquially, digital
is related to "digits." But the term "digital signal" as used in texts
and industry does not hold to this colloquial usage. That is, a signal
that is completely unquantized in amplitude and represented in base 10
as an element of the real numbers could well be called a digital
signal. The key property of such a signal is that it is *discrete-time*
(i.e., sampled in time).

I would say that "digitized signal" also implies quantization.

There are analog sampled storage systems, such as:

http://www.datasheets.org.uk/search.php?q=ISD+MICROTAD-16M&sType=part&ExactDS=Starts

-- glen
 
Bob Myers

Doesn't "analog" also imply that x(t) exists for all t in range, and not
just at nT for all n in range? Or would people just call that "sampled"?

Assuming "t" is time here, no - that would require
that there be no such thing as a sampled analog
representation, and we already have noted examples
of that very thing.

"Analog" != "continuous," even though most commonly
"analog" signals are also continuous in nature.

Bob M.
 
glen herrmannsfeldt

Bob Myers wrote:

(snip)
"Analog" also does not imply "infinite" precision or
adjustability, since, as is the case in ALL systems, the achievable
precision (and thus the information capacity) is ultimately limited
by noise. See the Gospel According to St. Shannon for
further details...;-)

How about, Analog implies "infinite" precision in the absence of
noise, including fundamental quantum noise.

Note, for example, that an analog current is quantized in units
of the charge on the electron.

-- glen
 
Richard Crowley

"Martin Heffels" wrote ...
I vote: aye

I don't mean to imply that there may not be idiot-savants
on the interweb. Al Einstein himself may easily have been
perceived as a troll if he were online :)
 
glen herrmannsfeldt

Jerry Avins wrote:

(snip)
I believe that's also a borderline area where definitions become
smudged. I know that the Russians built a computer with trinary logic,
but all the decimal systems I know, whether BCD, excess-three, or
something more exotic, encode the numbers on sets of four wires that
carry two-state signals. Making a case that that isn't binary opens the
door to claiming that hexadecimal is distinct from binary.

I believe that some of the early machines used 10 wires.

Biquinary, with seven wires, one of two and one of five, has
also been used.

In both cases each wire has one of two values, but it isn't very
"binary like".

-- glen
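The biquinary scheme glen describes (seven wires per decimal digit: a one-of-two group plus a one-of-five group) can be sketched like this:

```python
def biquinary(d):
    # encode a decimal digit on seven two-valued wires:
    # a 2-wire "bi" group (0-4 vs 5-9) and a 5-wire "quinary" group.
    assert 0 <= d <= 9
    bi = [0, 0]
    qui = [0] * 5
    bi[d // 5] = 1      # which half of 0..9
    qui[d % 5] = 1      # position within that half
    return bi + qui

assert biquinary(3) == [1, 0, 0, 0, 0, 1, 0]
assert biquinary(7) == [0, 1, 0, 0, 1, 0, 0]
```

Every wire carries one of two values, yet exactly one wire in each group is high at a time, which is why the code isn't very "binary-like".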
 