Maker Pro

"Real WMV", 148.50 mhz sample-rate, 1920 X 1080 progressive scan image, "object data" bit-rate of 1-


Radium

Hi:
From the responses to my posts, I conclude that a video with 1-bit file
size cannot exist. What about a WMV file that has a 148.50 MHz sample
rate, a 1920 x 1080 progressive scan image resolution, and whose
"object data" has a constant bit-rate (CBR) of 1 bit per second? Could
this exist? If so, what would the video look like? In 2 hours of this
video, the file size would be 7,200 bits.
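The arithmetic is easy to check; a quick Python sketch, just to spell
it out:

    seconds = 2 * 60 * 60         # two hours of video
    bitrate_bps = 1               # the proposed "object data" CBR
    print(seconds * bitrate_bps)  # 7200 bits, i.e. 900 bytes of payload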

I hate pixelation and aliasing with a passion. Pixelation and aliasing
make me sick. I don't mind the artifacts that, I think, are associated
with a WMV whose color-depth has been compressed, even to extremes,
while the sample rate and pixel resolution are left alone. I imagine it
looks similar to how a WMA file at 44.1 kHz and 20 kbps sounds.

What would be to the human eye what 44.1 kHz, 20 kbps is to the ear?

The human ear needs a frequency of at least 20 Hz to hear a sound. The
human eye needs at least 60 Hz for a flickering light to appear solid.
E.g., a hummingbird's wing flap is too high a visual frequency for the
human eye to see, much like the sound of a dog whistle is too high an
audio frequency for the human ear to hear.

WMA is my preferred type of perceptual encoding. Both WMAs and MP3s
will produce artifacts at too low a bit rate. However, WMA's artifacts
are rather pleasant, while MP3's are disgusting.

I have Adobe Audition 1.5. I generate a silent file. I save it as WMA
at 20 kbps, 44.1 kHz, mono. I convert this file to WAV and then back to
WMA several times. I make my last conversion to WMA and save it. I then
open this WMA file. Finally, I increase the volume of the audio in the
WMA file and play it. Intriguing tones result. These tones are typical
of low bit-rate, high sample-rate WMA files. I believe something
analogous could be done to WMV video.
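Audition itself isn't scriptable here, but a rough equivalent of the
round-trip experiment, assuming ffmpeg and its wmav2 encoder are
available (an assumption; it is not what was used above), might look
like:

    import subprocess

    # Generate 30 seconds of silence as a WAV file.
    subprocess.run(["ffmpeg", "-y", "-f", "lavfi", "-i",
                    "anullsrc=r=44100:cl=mono", "-t", "30",
                    "silence.wav"], check=True)

    # Round-trip WAV -> 20 kbps WMA -> WAV several times so the
    # codec's artifacts accumulate with each pass.
    for _ in range(10):
        subprocess.run(["ffmpeg", "-y", "-i", "silence.wav", "-c:a",
                        "wmav2", "-b:a", "20k", "pass.wma"], check=True)
        subprocess.run(["ffmpeg", "-y", "-i", "pass.wma",
                        "silence.wav"], check=True)
    # Amplify pass.wma in any editor to hear what the encoder invented.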

I've tried doing the above Adobe Audition experiment with MP3s instead
of WMAs. MP3's audio artifacts are sickening, much like the pixelation
of non-WMV video compression. Those blocks are just nasty.


Thanks,

Radium
 

Bob Myers

I hate pixelation and aliasing with a passion. Pixelation and aliasing
make me sick. I don't mind the artifacts that, I think, are associated
with a WMV whose color-depth has been compressed, even to extremes,
while the sample rate and pixel resolution are left alone. I imagine it
looks similar to how a WMA file at 44.1 kHz and 20 kbps sounds.

What would be to the human eye what 44.1 kHz, 20 kbps is to the ear?

Think about what you're asking - it basically depends on the ability
of the eye to resolve detail, right? The term for this is "spatial acuity,"
which is typically given in terms of the number of cycles (or line pairs)
of white and black the eye can detect PER VISUAL DEGREE. That
last part means that it depends on how "wide" each line pair appears
within the visual field. So it's never as simple as just "how many pixels
do I need before I can't see the pixels any more?" The answer depends
on the image size, and how far away the viewer will be - in other
words, how much of the visual field will be filled by the image.

Google for "spatial acuity" and you will no doubt find all the
information you need to solve this one.
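As a rough illustration of Bob's point, the pixel count needed to hide
pixel structure falls out of the geometry. (The 60 cycles/degree below
is a commonly quoted upper bound on the eye's acuity, assumed here for
the sake of the sketch.)

    import math

    def pixels_needed(screen_width_m, viewing_distance_m,
                      cycles_per_degree=60.0):
        # Horizontal visual angle subtended by the screen, in degrees.
        angle = 2 * math.degrees(math.atan(screen_width_m /
                                           (2 * viewing_distance_m)))
        # Nyquist: two pixels per cycle of black/white detail.
        return angle * cycles_per_degree * 2

    # A 1 m wide screen viewed from 2 m:
    print(round(pixels_needed(1.0, 2.0)))  # ~3369 pixels across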

The human ear needs a frequency of at least 20 Hz to hear a sound. The
human eye needs at least 60 Hz for a flickering light to appear solid.
E.g., a hummingbird's wing flap is too high a visual frequency for the
human eye to see, much like the sound of a dog whistle is too high an
audio frequency for the human ear to hear.

Actually, 60 Hz isn't always enough for the light to appear "solid"
("flicker-free" is the more common term). Again, try Googling
for "flicker fusion frequency" and you will find all you need to
know.

WMA is my preferred type of perceptual encoding. Both WMAs and MP3s
will produce artifacts at too low a bit rate. However, WMA's artifacts
are rather pleasant, while MP3's are disgusting.

Better than either, of course, is uncompressed data; second
best is data run through a completely lossless compression
algorithm. Lossy compression is used when you have no other
choice, due to constraints of storage space, data transmission
capacity, etc. So before you pick one system as your "preferred,"
you need to better define the problem you're really trying to
solve at any given time.

The blocky-looking artifacts that result from some compression
schemes are not the result of "pixelization" per se; they come about
because these schemes are based on transforming a block of
pixels (such as an 8 x 8 or 16 x 16 pixel square) into a more
compact form, all across the image. But you can push the
compression algorithm too far, to the point where there is no longer
sufficient information to accurately recover that block of pixels
- and so the block itself becomes visible.
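A minimal sketch of that block-transform idea, zeroing high-frequency
DCT coefficients as a crude stand-in for real quantization (numpy and
scipy assumed):

    import numpy as np
    from scipy.fft import dctn, idctn

    def crush_block(block, keep=3):
        # Transform an 8x8 block, keep only a few low-frequency
        # coefficients, and invert the transform.
        c = dctn(block, norm="ortho")
        mask = np.zeros_like(c)
        mask[:keep, :keep] = 1
        return idctn(c * mask, norm="ortho")

    # A smooth diagonal ramp survives mild crushing almost intact...
    block = np.fromfunction(lambda y, x: (x + y) * 16.0, (8, 8))
    print(np.abs(crush_block(block) - block).max())          # small
    # ...but keep=1 collapses each block toward its mean, so adjacent
    # blocks stop matching at their edges: visible blocking.
    print(np.abs(crush_block(block, keep=1) - block).max())  # large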

Bob M.
 

Radium

Bob said:
The blocky-looking artifacts that result from some compression
schemes are not the result of "pixelization" per se; they come about
because these schemes are based on transforming a block of
pixels (such as an 8 x 8 or 16 x 16 pixel square) into a more
compact form, all across the image. But you can push the
compression algorithm too far, to the point where there is no longer
sufficient information to accurately recover that block of pixels
- and so the block itself becomes visible.

The blocks become visible during compression if the image-resolution is
the thing being compressed. If it is the color-depth [instead of the
image-resolution] that is compressed, then there will be no appearance
of blocks [provided the image-resolution is sufficient] no matter how
far the compression of color-depth is pushed.
 

Lionel

Hi:
From the responses to my posts, I conclude that a video with 1-bit file
size cannot exist. What about a WMV file that has a 148.50 MHz sample
rate, a 1920 x 1080 progressive scan image resolution, and whose
"object data" has a constant bit-rate (CBR) of 1 bit per second? Could
this exist?

No.
 

Radium

Why not? What's stopping it from existing?

Or, a more correct question: what makes it physically impossible for it
to exist?
 

Martin Heffels

If it is the color-depth [instead of the
image-resolution] that is compressed, then there will be no appearance
of blocks [provided the image-resolution is sufficient] no matter how
far the compression of color-depth is pushed.

That is incorrect. Look, for instance, at an area with very little
gradation in colour, like the sky. The compression algorithms look at
similar colours and declare a whole area to be one colour, even if it
contains slightly different shades. If you compress too much, the range
over which this is done gets wider and wider, and upon decompression
you see a large block of one colour, then another block of a slightly
different shade, etc. Really, you ask so many questions that you don't
have to ask, because you could try them out yourself with
Photoshop/GIMP.
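A minimal numpy sketch of that banding, quantizing a smooth sky-like
gradient into a handful of shades:

    import numpy as np

    # A smooth gradient, 0..255 across 256 pixels.
    sky = np.linspace(0, 255, 256)

    # Coarse quantization: lump every 64 nearby shades into one value.
    banded = (sky // 64) * 64
    print(np.unique(banded))  # only 4 flat bands of colour remain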

cheers

-martin-
--
 

Bob Myers

Radium said:
The blocks become visible during compression if the image-resolution is
the thing being compressed.

Wrong. There is no "compression" of the image "resolution"
(pixel format) involved at all.
If it is the color-depth [instead of the
image-resolution] that is compressed, then there will be no appearance
of blocks [provided the image-resolution is sufficient] no matter how
far the compression of color-depth is pushed.

Why are you under the impression that any of these are
"compressed" independently?

Bob M.
 

Bob Myers

Radium said:
Or, a more correct question: what makes it physically impossible for it
to exist?

How much information does one bit carry? How much
do you need to be able to re-create even a single frame
of the image? You didn't really understand why an entire
movie couldn't be compressed to a single bit, apparently.

Bob M.
 

Jim Leonard

Radium said:
If so, what would the video look like? In 2 hours of this video, the
file size would be 7,200 bits.

CBR of 1 bit per second would look like your entire screen turning
either white or black once per second. Seriously.
 

Ken Maltby

Jim Leonard said:
CBR of 1 bit per second would look like your entire screen turning
either white or black once per second. Seriously.

IF it were possible to make such a file, and you figured
out how to get 7,200 bits to represent 108,000 frames.
 

Ken Maltby

IF it were possible to make such a file, and you figured
out how to get 7,200 bits to represent 108,000 frames,
of 2,073,600 pixels each or 223,948,800,000 pixels
altogether.
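The multiplication checks out (a quick Python sanity check):

    frames = 108_000                  # 15 fps over 7,200 seconds
    pixels_per_frame = 1920 * 1080    # 2,073,600
    print(frames * pixels_per_frame)  # 223948800000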

Luck;
Ken
 

Jim Leonard

Ken said:
IF it were possible to make such a file, and you figured
out how to get 7,200 bits to represent 108,000 frames,
of 2,073,600 pixels each or 223,948,800,000 pixels
altogether.

Of course it's possible. The decompressor handles that, of course, just
like video codecs "generate" (i.e. do nothing for) in-between frames
when low-framerate video is played on a high-refresh device (i.e. a
television or monitor).

The point isn't how many pixels can be represented by bits, but how
much state-change information (i.e. how many DIFFERENT pixels) can be
represented by bits. With a video bitrate of 1 bit per second, the
most that video could possibly be is the entire screen being one of two
colors (I chose white or black for ease, but they could be any two
colors), once per second.
 

Richard Crowley

"Jim Leonard" wrote...
The point isn't how many pixels can be represented by bits, but how
much state-change information (i.e. how many DIFFERENT pixels) can be
represented by bits. With a video bitrate of 1 bit per second, the
most that video could possibly be is the entire screen being one of two
colors (I chose white or black for ease, but they could be any two
colors), once per second.

No, you're stuck with black and white. If you want any other
colors, it will take more bits to explain what they are.

But then the whole discussion was absurd to begin with.
 

Ken Maltby

Jim Leonard said:
Of course it's possible. The decompressor handles that, of course, just
like video codecs "generate" (i.e. do nothing for) in-between frames
when low-framerate video is played on a high-refresh device (i.e. a
television or monitor).

The point isn't how many pixels can be represented by bits, but how
much state-change information (i.e. how many DIFFERENT pixels) can be
represented by bits. With a video bitrate of 1 bit per second, the
most that video could possibly be is the entire screen being one of two
colors (I chose white or black for ease, but they could be any two
colors), once per second.

OK, where is the link to your file? It won't be a heavy
download, if it's possible to make such a file, as you say.
And be sure to let us know what player to use.

Luck;
Ken
 

Lionel

Bob said:
The blocky-looking artifacts that result from some compression
schemes are not the result of "pixelization" per se; they come about
because these schemes are based on transforming a block of
pixels (such as an 8 x 8 or 16 x 16 pixel square) into a more
compact form, all across the image. But you can push the
compression algorithm too far, to the point where there is no longer
sufficient information to accurately recover that block of pixels
- and so the block itself becomes visible.

The blocks become visible during compression if the image-resolution is
the thing being compressed. If it is the color-depth [instead of the
image-resolution] that is compressed, then there will be no appearance
of blocks [provided the image-resolution is sufficient] no matter how
far the compression of color-depth is pushed.

Wrong. The more you (lossy) compress the colour-depth, the more
posterised & jaggy the image becomes. This is the direct consequence
of the process Bob is describing.
 

Lionel

Or, a more correct question: what makes it physically impossible for it
to exist?

Here's a repeat of the post in which I explained this to you last
time, which you ignored:
--------
From: Lionel <[email protected]>
Subject: Re: "Real WMV" 2-hour movie, 148.50 Mhz, 1920 x 1080
Message-ID: <[email protected]>
Date: Wed, 25 Oct 2006 17:38:02 +1000
[...]
BTW, in the unlikely case that you're not actually a troll[0], & are
simply clueless about mathematics & physics, there is an intuitive,
non-mathematical way to understand why it's impossible to compress a
bunch of movies down to files that each contain a single bit, & then
recover the movies from those files:

(1) Imagine that you own DVDs of your 10 favourite movies, & you wish
to compress them down to 10 files, each containing a single bit.

(2) By definition; a single bit can only be '1' or '0'. If it can be
anything else, it's no longer a 'bit', & calling it a bit is simply
mistaken.

(3) Your hypothetical uber-compressor can only generate one of two
possible compressed files: one that contains a '1', or one that
contains a '0'.

Problem #1: Suppose you uber-compress your favourite DVD ('The
Matrix') down to a single bit: '1'. Next, you want to compress your
next favourite movie ('Capricorn One'). Now here's where things start
to get weird, because no matter what data - or even what movie - is on
that DVD, we know that the new compressed file *must* end up being a
'0', because the compressed version of 'The Matrix' is '1', & if our
uber-compressor outputs a '1' for *both movies*, it is *impossible*
for our uber-decompressor to 'know' which movie to play!
So, it logically follows that 'Capricorn One' *must* compress to a file
containing only a '0'. However, the fact that you could've picked *any
movie* (other than 'The Matrix') from your shelf, & it *must* compress to
a '0', should tell you that something is seriously wrong with the
logic underlying our uber-compressor.

Now, ignoring that for the sake of argument, let's pretend that by
some god-like miracle of software engineering, your uber-decompressor
manages to play 'The Matrix' for you when you give it the '1', & plays
'Capricorn One' when it is given the '0' file. This leads us to:

Problem #2: What happens when you try to compress a *third* movie (e.g.
'Final Fantasy')? The answer is that, by definition, our
uber-compressor *cannot* produce a single-bit file that will
decompress to 'Final Fantasy', because there are only two possible
single-bit files: '0' & '1', & our uber-compressor has already used
both of them! So, no matter what bit it outputs, it'll be identical to
either 'The Matrix' or 'Capricorn One', so the decompressor cannot
possibly play back the correct movie, because it has no way of telling
the difference between the uber-compressed version of 'Final Fantasy'
& the uber-compressed movie file which has the same contents.

And *that* is why there is no such thing as a video
compression/decompression system that can compress more than two
arbitrarily-chosen movies down to a single bit.

This same logic can be extended to determine all sorts of useful
things about the limits on storing data efficiently, or how much data
you can squeeze through a communications medium in a given time.

'Information Theory' is the name for this field,
<http://en.wikipedia.org/wiki/Information_theory>
which was pioneered by Claude Shannon
<http://en.wikipedia.org/wiki/Claude_Elwood_Shannon>

--------
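The counting argument is mechanical enough to put in a few lines of
Python:

    # A k-bit file can distinguish at most 2**k inputs (pigeonhole).
    movies = ["The Matrix", "Capricorn One", "Final Fantasy"]
    k = 1
    print(2 ** k)                 # 2 possible single-bit files
    print(len(movies) <= 2 ** k)  # False: can't encode all three losslessly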
 

Pete Fraser

Wrong. The more you (lossy) compress the colour-depth, the more
posterised & jaggy the image becomes. This is the direct consequence
of the process Bob is describing.

Depends how you do it.

For example, I could easily reduce the bit depth of an
image to 1 bit per channel, using error feedback, and the
image would be neither particularly posterised nor jaggy.
 

Lionel

Depends how you do it.

For example, I could easily reduce the bit depth of an
image to 1 bit per channel, using error feedback, and the
image would be neither particularly posterised nor jaggy.

Of course it would be. Reducing the bit-depth of any typical movie
frame from 32 or 24 bits to 1 bit will always posterise it.
 

Pete Fraser

Lionel said:
Wrong. The more you (lossy) compress the colour-depth, the more
posterised & jaggy the image becomes. This is the direct consequence
of the process Bob is describing.

Pete replied:
Depends how you do it.
For example, I could easily reduce the bit depth of an
image to 1 bit per channel, using error feedback, and the
image would be neither particularly posterised nor jaggy.

Lionel replied:
Of course it would be. Reducing the bit-depth of any typical movie
frame from 32 or 24 bits to 1 bit will always posterise it.

----------------------------------------------------------------


Pete:
That's not true. Reducing the bit depth of a 32 or 24 bit image to
one bit will convert it to monochrome, but it need not posterize
it. Just take the error caused by reducing the accuracy of
each pixel, and add it on to the next pixel.

It's called error feedback. Try Googling it.
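A minimal 1-D sketch of that error-feedback idea (Floyd-Steinberg
dithering extends it to 2-D); numpy assumed:

    import numpy as np

    def error_feedback_1bit(samples):
        # Quantize each sample to 0 or 255, carrying the rounding
        # error into the next sample so the local average survives.
        out = np.zeros(len(samples))
        err = 0.0
        for i, v in enumerate(samples.astype(float)):
            v += err
            out[i] = 255.0 if v >= 128.0 else 0.0
            err = v - out[i]
        return out

    ramp = np.linspace(0, 255, 256)       # a smooth gradient
    dithered = error_feedback_1bit(ramp)  # every sample is now 1-bit
    # No flat posterised bands: the 255s just get denser along the
    # ramp, preserving the gradient's average brightness.
    print(dithered[:8], dithered[-8:])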
 