I can't quite fathom how Sony's SACD encoding works.
According to their specification, a 44.1 kHz, 16-bit audio stream is
transmitted or encoded as a single serial bit sequence at a rate of
2.8224 MHz. The encoding is called "Pulse Density Modulation", so the
transmitted bits carry the density, and hence the amplitude, of the
audio signal. A simple low-pass filter is all that is required to
recover the audio.
Using simple maths, this means that every original audio sample (at
44.1 kHz, i.e. every 22.7 µs) is encoded using 64 bits of the SACD
stream. In contrast, standard PCM uses 16 bits to convey the 65,536
analogue levels required for 16-bit accuracy.
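To make that arithmetic concrete, here is a quick back-of-envelope
Python sketch (just my own sanity check of the figures above, nothing
to do with any actual Sony implementation):

# Bit budget per original audio sample, using the numbers quoted above.
pcm_rate = 44_100            # Hz, CD sample rate
dsd_rate = 64 * pcm_rate     # Hz, = 2,822,400, i.e. the SACD bit rate

print(dsd_rate // pcm_rate)  # 64 one-bit values per original sample period
print(16)                    # bits per sample in plain 16-bit PCM
print(1e6 / pcm_rate)        # ~22.68 microseconds per sample period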
What I cannot understand is this: if the SACD "decoder" is just a
simple low-pass filter, how can you get 65,536 different levels (as
would be required for each audio sample) out of just 64 serial,
equally weighted bits?
A simple low-pass filter merely averages the bits, so at most one
would get 65 distinct levels (0 to 64 ones) out of a 64-bit serial
sequence.
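To show what I mean by "just averages the bits", here is a tiny
Python sketch of my naive model of the decoder: a boxcar average over
non-overlapping 64-bit windows of equally weighted bits (purely
illustrative, not any real SACD filter):

import random

window = 64
bits = [random.randint(0, 1) for _ in range(64_000)]  # a made-up 1-bit stream

# The average of each window depends only on how many ones it contains,
# so only the values 0/64, 1/64, ..., 64/64 can ever come out of it.
levels = {sum(bits[i:i + window]) / window
          for i in range(0, len(bits), window)}
print(f"{len(levels)} distinct output values (upper bound {window + 1})")

However the 64 bits are arranged, that plain average can never
distinguish more than 65 levels, which is where my confusion comes
from.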
This is a major discrepancy, so what is the catch? Or am I
misunderstanding the whole operation?
In other words, is SACD a "lossy" encoding method, i.e. one where bits
are lost (as opposed to standard PCM)?
PS: I have looked extensively at web sources, which go deeply into
noise and noise-shaping arguments. None seems to describe the issue in
the simple information-theory terms used above, which appears to
contradict Shannon's principles of communication.
Any ideas, anybody?
Thanks in advance
ted