Arny Krueger said:
That's a good part of it. The net purpose of inverted polarity was to
improve subjective dynamic range. White flecks on a grey background are
far less obvious than black ones.
Umm... no. You've both got it the wrong way round. With -ve polarity, sync
pulses are more affected by noise bursts than with +ve polarity. And white
flecks are far more obvious than black ones. Part of the reason is that
impulse interference could greatly exceed the 100% vision carrier level,
saturating the video amplifier and, with +ve modulation, the CRT.
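A toy numpy sketch of the fleck polarity (carrier levels only roughly the
405-line and NTSC figures; the names and numbers are illustrative, not any
real receiver's): the same additive noise spike drives a +ve-modulated
picture towards, and past, peak white, but drives a -ve-modulated picture
towards black.

import numpy as np

def to_carrier_pos(lum):
    # +ve modulation: black at ~30% carrier, peak white at 100%
    return 0.30 + 0.70 * lum

def to_carrier_neg(lum):
    # -ve modulation: white at ~12.5% carrier, black ~70%, sync tip 100%
    return 0.70 - 0.575 * lum

def crt_pos(c):
    # demodulated brightness; the clip models the amplifier/CRT saturating
    return np.clip((c - 0.30) / 0.70, 0.0, 1.0)

def crt_neg(c):
    return np.clip((0.70 - c) / 0.575, 0.0, 1.0)

lum = np.full(200, 0.5)      # one mid-grey scan line
burst = np.zeros(200)
burst[100] = 0.6             # impulse added to the received carrier

print(crt_pos(to_carrier_pos(lum) + burst)[100])  # 1.0: glaring white fleck
print(crt_neg(to_carrier_neg(lum) + burst)[100])  # 0.0: unobtrusive black fleck

Note also that under -ve modulation the spike lands in the sync region,
which is exactly why it can falsely trigger the sync separator.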
This was why US TVs, where -ve modulation was used from the beginning,
employed flywheel sync very early on, whilst UK TVs didn't. On the other
hand, UK TVs needed peak-white limiters to prevent the CRT defocusing on
the "whiter-than-white" interference specks.
The real benefit of -ve modulation was AGC. With -ve modulation sync tips
correspond to 100% modulation and make an easy source for the AGC bias. With
+ve modulation sync tips are at zero carrier which obviously is useless for
AGC. Instead the back-porch has to be used and many different weird and
wonderful circuits were devised to "gate out" the signal voltage during the
back porch. Due to the need to keep costs down manufacturers increasingly
turned to "mean-level AGC" in which the video signal itself was simply
low-pass filtered to form the AGC bias. This led to receiver gain being
varied by the video content, so the blacks in low-key scenes were boosted
whilst the whites in high-key scenes were reduced, leading to a general
greyness to everything. To me it looked awful but as the Great British
Public kept buying these sets (and they were cheaper to build) mean-level
AGC became the norm for B&W UK domestic TV receivers. One great advantage of
colour was that mean-level AGC could not be used: to give correct colour
values, colour sets *had* to display a picture with a stable black level.
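A toy numpy sketch of the two biases (levels only roughly the NTSC and
405-line figures, with all the gating and time-constants omitted): sync-tip
AGC peak-detects a -ve-modulated envelope and always sees the same 100% tip,
whereas mean-level AGC on a +ve-modulated envelope swings with scene
brightness.

import numpy as np

def line_neg(scene_lum):
    # -ve modulation: sync tip 100%, picture ~70% (black) down to ~12.5% (white)
    return np.concatenate([np.full(10, 1.0), 0.70 - 0.575 * scene_lum])

def line_pos(scene_lum):
    # +ve modulation: sync at 0%, picture ~30% (black) up to 100% (white)
    return np.concatenate([np.zeros(10), 0.30 + 0.70 * scene_lum])

def agc_sync_tip(env):
    # only workable on -ve modulation: the envelope peak IS the 100% sync
    # tip, so the bias tracks signal strength and ignores picture content
    return env.max()

def agc_mean_level(env):
    # the cheap approach: low-pass (here just average) the video itself
    return env.mean()

dark, bright = np.full(90, 0.1), np.full(90, 0.9)

print(agc_sync_tip(line_neg(dark)), agc_sync_tip(line_neg(bright)))
# -> 1.0 1.0: stable bias, independent of scene content

print(agc_mean_level(line_pos(dark)), agc_mean_level(line_pos(bright)))
# -> ~0.33 ~0.84: a low-key scene looks like a weak signal, so the gain
#    (and the blacks) are pushed up; a high-key scene is turned down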
David.