Maker Pro

Questions about equivalents of audio/video and digital/analog.


Jerry Avins

Floyd L. Davidson wrote:

...
And a "digit" is nothing other than a discrete value. Hence
you have a valid definition, but don't recognize what it means.

So the output of a fresh C cell is a digit? Talk about digging a hole
for oneself!

Jerry
 

Don Pearce

It doesn't lead to any such problems.

What you need to get straight is that it is not *my*
definition. It is the *standard* technical definition
recognized by virtually *every* standards organization.
I quoted the NTIA's Federal Standard 1037C.


It makes no difference what you think is or is not
reasonable, unless we want to discuss *you*. If you
disagree with the standard definition then you don't
understand the term, and we can determine how far off
you are by how much your definition differs from that
one! ;-)


Which proves that he doesn't understand it. It says
nothing about what the National Telecommunications
and Information Administration knows, or what the
MilStd specification says.


That is a logical fallacy on your part. An "argument
from authority" has great weight if it is valid. To
be valid it must pass three tests:

1) The authority cited must actually be an authority.
2) All authorities must agree on the topic.
3) The authority cannot be misquoted, taken out of
context, or be joking.

Clearly citing the NTIA and MilStd definition is indeed
a *very* strong appeal to authority, and no mere opinion
can even come close to invalidating it.


What evidence? And the logic is clearly invalid and
based on false assumptions.

You know one way to be absolutely positive that your
logic is not good is to do a reality check and find that
the answer you have is wrong. In this case that is very
easy to do, which is why *standard* definitions are
quoted from authoritative sources. If you disagree,
then clearly you *don't* have the logic right!


So cite even one such valid reference! (You *cannot*,
because there are none.)

(And recognize that if you think you have one, then
there is one of two things clearly true: Either 1) you
do not understand that the other definition is not
actually different, or 2) your reference is not a valid
one.)


You are not a valid reference. You don't even come
close to being equal to the NTIA.

And it is *hilarious* that you would (again, because
this isn't the first time) try to convince anyone that
you are.


Except technical definitions are sometimes merely
arbitrary agreements on one of many possible logical
ways to define a term. We could have decided that
"digital" means binary, or a decimal system. We didn't,
but both would be logical.


Hmmm...

The big problem here is that you have misunderstood what NTIA is
saying.

d
 

Don Pearce

Oh? The concept of floating point prescribes a certain number of bits?
That you fail to see the parallel doesn't mean there isn't one.
No idea what that meant. Mu and A law are used in telephony. The
system uses 8 bits. Mu and A law are necessary because there ARE only
8 integer bits. If telephony could afford, say, 16 bits, there would
be no need for Mu and A law. The bits are integer. There is no
floating point. Why is that so hard to grasp?

d
 

Floyd L. Davidson

Bob Myers said:
No, Don had it right. A quantized analog signal

You can repeat that all you like, but you are wrong
every time you do.

By *definition* it is a digital signal.

quantization:
A process in which the continuous range of values
of an analog signal is sampled and divided into
nonoverlapping (but not necessarily equal)
subranges, and a discrete, unique value is assigned
to each subrange.

A _sampled_ signal is still analog. A _quantized_ signal is
digital by definition.
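The sampled-vs-quantized distinction above can be sketched in a few lines of Python (my illustration, not from the thread; the function names and signal are mine). Sampling picks values at discrete times but each value is still unconstrained; quantization then forces each sample into one of a finite set of subranges:

```python
import math

def sample(signal, rate, duration):
    """Sample a continuous-time signal at `rate` Hz for `duration` s."""
    n = int(rate * duration)
    return [signal(i / rate) for i in range(n)]

def quantize(samples, levels, lo=-1.0, hi=1.0):
    """Map each sample to the assigned value of its nonoverlapping subrange."""
    step = (hi - lo) / levels
    out = []
    for x in samples:
        idx = min(int((x - lo) / step), levels - 1)  # subrange index
        out.append(lo + (idx + 0.5) * step)          # its discrete value
    return out

sig = lambda t: math.sin(2 * math.pi * 5 * t)  # a 5 Hz tone
s = sample(sig, rate=100, duration=0.1)        # discrete times, real values
q = quantize(s, levels=8)                      # at most 8 distinct values
```

After `quantize`, the signal can only take 8 values, however finely you measure it; that finite value set is what the standard definition turns on.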

If you do not stay with standard definitions it is
impossible to discuss anything rationally.
remains analog as long as the relative values of the
quantization levels, one to the other have significance;
they thus can carry information, which is the fundamental
goal of any such system.

The quantization levels are digital. By definition.

If that isn't what you mean, then you need to use other
words because you are confusing the issue by misuse of
standard terms.
Now, we could certainly assign values to those levels
which (for instance) are NOT in order from "top to
bottom" (or whichever direction you choose to use),
which might be done to distribute the susceptibility of
any given "bit" in said value to noise evenly. In this
case, the levels MUST be interpreted as the intended
numeric values in order to recover the original
information, and hence this would be a "digital"
encoding system.


Exactly. But mere quantization by itself does not
suffice to render a signal "digitally encoded," no
matter what a given government "expert" may claim.

The quantization of a signal makes it digital.

(It *is* encoded, too, BTW. But until you understand
what makes it digital, there is little point in trying
to define what "encoded" means.)

No matter how dense you want to be about it, that
government "expert" happens to be right. And you cannot
find *any* expert that will disagree. That is the
*standard* definition, and virtually *everyone* agrees
that it is correct.
 

Jerry Avins

Floyd L. Davidson wrote:

...
Floating point is analog, integer is digital.

An analog of what? I use digits to represent floats, with E sometimes
thrown in and '.', the "point" part. How many different floats can your
computer represent? I'm sure it's a countable number.

...

Jerry
 

Jerry Avins

Don said:
No idea what that meant. Mu and A law are used in telephony. The
system uses 8 bits. Mu and A law are necessary because there ARE only
8 integer bits. If telephony could afford, say, 16 bits, there would
be no need for Mu and A law. The bits are integer. There is no
floating point. Why is that so hard to grasp?

It is a characteristic of floating-point representation that some of the
bits represent a signed magnitude and others represent a scale factor
for that quantity. It is a characteristic of mu- and A-law that some of
the bits represent a signed magnitude and others represent a scale
factor for that quantity. I would have thought that you could grasp the
similarity. No matter if you reject the insight.

Jerry
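Jerry's parallel can be illustrated with a rough mu-law codec sketch (simplified and not guaranteed bit-exact against G.711; the names and constants are my assumptions). The 8-bit code splits into a sign bit, a 3-bit exponent that acts as the scale factor, and a 4-bit mantissa, much like a tiny floating-point format:

```python
BIAS = 33  # mu-law bias, added so every magnitude has a leading segment bit

def mulaw_encode(pcm):
    """Encode a linear sample (roughly -8159..8159) into 8 bits:
    1 sign bit, 3-bit exponent (segment), 4-bit mantissa."""
    sign = 0x80 if pcm < 0 else 0
    mag = min(abs(pcm) + BIAS, 0x1FFF)    # bias, clip to 13 bits
    exp, mask = 7, 0x1000
    while (mag & mask) == 0 and exp > 0:  # locate the leading-1 segment
        exp -= 1
        mask >>= 1
    mantissa = (mag >> (exp + 1)) & 0x0F  # the 4 bits below the leading 1
    return (sign | (exp << 4) | mantissa) ^ 0xFF  # mu-law inverts the bits

def mulaw_decode(code):
    """Invert the encoding, reconstructing the subrange midpoint."""
    code ^= 0xFF
    exp = (code >> 4) & 0x07
    mantissa = code & 0x0F
    mag = ((2 * mantissa + 33) << exp) - 33
    return -mag if code & 0x80 else mag

small = mulaw_decode(mulaw_encode(4))     # small samples survive exactly
large = mulaw_decode(mulaw_encode(1000))  # large ones come back in coarse steps
```

The exponent stretches the step size with magnitude, which is exactly the "scale factor" role Jerry describes; with 16 linear bits available there would be no need for it.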
 

Floyd L. Davidson

Don Bowey said:
No, it becomes a digitally encoded representative of a sample of an analog
voltage. First the continuously variable analog signal is sampled,
becoming, for example PAM, which is still analog, which is then quantized
and may be fit to whatever digital or analog coding that is desired. If

Once it is quantized, it is digital.

Actually I suspect it is open to debate as to whether a
sample is actually PAM until it is quantized. (Until it
is, it's just a sample of an analog signal.) But
whatever, if the sample itself actually is PAM, then yes
that is an analog signal.

However, after it is quantized it is then a digital (PAM)
signal. (An example is the high-speed link of a v.90
modem, which uses PAM.)
it's to a digital code, the signal is digital. If to an analog code, the
signal is analog.

For analog it necessarily has to be _modulated_, not
encoded. It must be modulated for the resulting signal
to be applied to the input of an analog channel. If it
is encoded it must be a digital channel. (Again, that is
the nature of arbitrary definitions, this time of what
"encode" and "modulate" mean.)
 

Floyd L. Davidson

Jerry Avins said:
The government declares it so it must be true?

No, virtually *every* standards organization recognizes
that definition.

You cannot find *any* reputable disagreement.

(For one thing, because anyone who disagrees is *clearly*
not credible... ;-)
I can
demonstrate a circuit using analog components that
transforms a continuous ramp input into a staircase
output. Moreover, the output levels can be individually
adjusted. Is the output digital? (We're discussing an
arbitrary definition here. There is no wrong answer.)

The output is apparently analog. At least you have said
*nothing* that indicates otherwise.

Do you think all digital signals are square waves and
anything that has square waves is digital? Your example
above suggests that you might, but it simply isn't true.
 

Floyd L. Davidson

Jerry Avins said:
I believe that the definition is flawed. Not that it

Your opinion of standard definitions is worthless.

If you want to communicate with the rest of the technical
world, use standard definitions and cease claiming they
are flawed.

Your opinion is where the flaw exists.
matters; it's good enough in context. A signal can be
quantized without any need to measure it or describe it
with a number.

That isn't true. In order to quantize it you *must*
decide on non-overlapping ranges of *values*, and a
specific discrete value to assign to each range.
An example is the signal being measured
in a quantum Hall-effect experiment.

Explain.
 

Floyd L. Davidson

Bob Myers said:
Not necessarily; a two-state representation is most properly
referred to as "binary." The best definition of "digital" I've
managed to come up with comes in the word itself - it
is the encoding system whereby information is stored as
"digits," i.e., numeric values, as opposed to a system in which
the information is stored "analogously" in the form of one
parameter (voltage, say) which varies in a like manner as the
original.

Your definition is flawed. Digital implies a finite set
of values, which might well be a voltage that varies in
a like manner (granted not continuously) as the
original.

Analog implies the variation is continuous.
"Quantized" and "sampled" are terms which are really not all
that closely associated (at least in theory) with either of the
above,

Again, not really true. Quantized is necessarily
digitized.

But sampled can be either.
although admittedly most systems seen today which
employ sampling and/or quantization are also "digital" in the
nature of the encoding of the information carried.

Anything that is quantized is digitized.
 

Floyd L. Davidson

Bob Myers said:
A CCD is an example of a device which stores information
in an analog manner, but non-continuously.

The output signal is analog, and is able to vary *continuously*
over the range in which it functions.
 

Floyd L. Davidson

Bob Myers said:
Sure it does.

Look up the definition of "quantization" again. It simply
makes no difference. If an analog signal is quantized, the
result is a digital signal. That is by definition, and you
cannot escape that with mumbo-jumbo and faulty logic.
If the levels of the original signal (or rather,
whatever parameter of the original information is being
recorded/stored/processed) are represented by analogous
levels of some other parameter (e.g., sound represented
by voltage), then the system is "analog."

And that necessarily means that the "analogous levels"
can vary continuously.

(Your example is poor, because sound can be represented
by a voltage that has been digitally encoded.)
It is certainly
possible to conceive of a quantized analog system, although

It is not possible by definition. If you quantize something,
you have a finite set of discrete values, and it *is* digital.
such things are rarely if ever seen in practice.

Understatement of the day.
"Analog" also does not imply "infinite" precision or
adjustability, since, as is the case in ALL systems, the achievable
precision (and thus the information capacity) is ultimately limited
by noise. See the Gospel According to St. Shannon for
further details...;-)

True. It means only continuously variable over an
infinite set of values. Your ability to determine
exactly which value (accuracy) is not guaranteed, nor is
your ability to reset to any specific value (precision)
guaranteed.
 

Floyd L. Davidson

glen herrmannsfeldt said:
Bob Myers wrote:

(snip)


How about, Analog implies "infinite" precision in the absence of
noise, including fundamental quantum noise.

Note, for example, that an analog current is quantized in units
of the charge on the electron.

No, in fact it is not. Electrons do not necessarily all move
at the same speed...
 

Floyd L. Davidson

Bob Myers said:
Assuming "t" is time here, no - that would require
that there be no such thing as a sampled analog
representation, and we already have noted examples
of that very thing.

"Analog" != "continuous," even though most commonly
"analog" signals are also continuous in nature.

Analog signals are by *definition* continuous.

You have misunderstood what that means though. The
analog value of a signal is continuous, but that does
not imply that the signal continuously exists or that
it even changes at all.
 

Floyd L. Davidson

Bob Myers said:
Except that "absence of noise" is a condition which
doesn't exist, even in theory.

Apparently Claude Shannon didn't agree with you on that.

Part I of "A Mathematical Theory of Communications" carries
the title "Discrete Noiseless Systems". Section 1 of that
part is titled "The Discrete Noiseless Channel". It turns
out that is a very important theoretical model.
ALL systems, digital, analog, or whatever, are limited in
information capacity by (a) the bandwidth of the channel
in question and (b) the level of noise within that channel,
per the aforementioned Gospel According to Shannon.
This is exactly the same thing as saying that there is a limit
to "precision" or "accuracy," as infinite precision implies
an infinite information capacity (i.e., given infinite precision,
I could encode the entire Library of Congress as a single
value, since I have as many effective "bits of resolution"
as I would ever need).
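The capacity limit invoked here is easy to put numbers on (my illustration; the bandwidth and SNR figures are nominal, not from the thread). Finite noise means finite capacity, which is why infinite precision is ruled out for analog and digital channels alike:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Channel capacity in bits/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A nominal 3.1 kHz telephone channel at about 30 dB SNR:
snr = 10 ** (30 / 10)            # 30 dB -> linear power ratio of 1000
c = shannon_capacity(3100, snr)  # roughly 31 kbit/s
```

The result is on the order of 31 kbit/s, which is why dial-up modems topped out near that figure on ordinary analog lines.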


Sure is. So isn't it a good thing that we don't confuse either
"analog" or "digital" with either "quantized" or "continuous"?

Backwards.
 

Don Bowey

Radium's ability to suck so many people into attempting to
answer insane questions is reaching legendary heights.
I hereby nominate him for the Troll Hall of Fame with special
endorsement for use of technical gobbledygook.

He does have a knack for getting into esoteric points before having an
understanding foundation. Not insane, but all over the map.

I'll vote for your candidate, in any case.
 

Radium

analogue - a continuous representation of the original signal
sampled - a representation of the signal at discrete time points
quantized - a sampled signal, but with the possible levels constrained
to a limited set of values
digital - a quantized signal, with the individual levels represented
by numbers

I agree with your list.

That means the device in the link below is neither analog nor
digital.

http://www.winbond-usa.com/mambo/content/view/36/140/

I'd like to see a purely-analog device which can record, store, and
playback electric audio signals [AC currents at least 20 Hz but no
more than 20,000 Hz] without having any moving parts [except of course
for the diaphragms present in the microphone and speaker and the
electrons that make up the electric signal] and without any amount of
sampling.

The CCD is out of the question as it uses sampling.
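The four-stage list quoted above can be walked through concretely (a minimal sketch of my own; the test signal and the five-level grid are arbitrary choices). The last step, replacing each quantized level by its number, is what separates "quantized" from "digital" in that list:

```python
import math

analogue = lambda t: math.sin(2 * math.pi * t)   # continuous in t and in value
sampled = [analogue(n / 16) for n in range(16)]  # discrete time points
levels = [i / 2 for i in range(-2, 3)]           # allowed levels -1.0 .. 1.0
quantized = [min(levels, key=lambda v: abs(v - x)) for x in sampled]
digital = [levels.index(v) for v in quantized]   # levels represented by numbers
```

`quantized` still looks like a voltage-shaped staircase; `digital` is a list of integers 0..4 that only means anything once you know the level table.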
 

Floyd L. Davidson

Scott Seidman said:
[email protected] (Floyd L. Davidson) wrote in


Funny, that's just what my D/A converters put out, and the spec sheets
claim they're putting out analog signals. Perhaps I should return them.

Your D/A converter puts out what is called
"quasi-analog". It's actually a digital PAM signal, not
an analog signal.

You can easily make it into a close approximation of the
original (with quantization distortion added), however.

But once you do that (by sending it through almost any
kind of an analog channel) it truly becomes "analog", in
the sense that you can no longer recover information or
use it as a digital signal.
 

Nobody

So is the output of an ideal D/A converter "digital,"
then?

Maybe. If you intend to feed it to an ADC such that the ADC will output
the original bits, then it is (i.e. multi-level encoding). If you intend
to feed it to a low-pass filter to remove the quantisation noise, it isn't.
It is most certainly quantized; it cannot take
on any values between adjacent output levels,
which are themselves separated by one "LSB" step
size.

That doesn't make it quantised.
What makes something "digital" is the representation
of information by numeric values ("digits"), or their equivalent,
as opposed to its representation by analogous variations
in some other quantity (which is "analog"). This is the
only definition which consistently makes sense.

Whether or not a signal is "digital" depends upon what you intend to do
with it. A PCM signal conveying the digits 01010101... is digital; a
square wave generated within an audio synthesiser isn't, even if the
signals are identical.
 