Maker Pro

Video Switcher

I am looking to start a project that will be used for several big events in 2016. I want to create a video switcher box, built to my specs, that will be better than switching the TVs over with a remote control.

I want to be able to control up to six TVs using RGB inputs and outputs. I have created a basic image of what I want to achieve. When the feed is live, it will show live streaming video. When the feed has finished, I want to flick a switch that will begin to play a series of videos/adverts that are either on a USB stick or a small internal hard drive/memory chip (if such a thing exists). I was looking at a Raspberry Pi, but if there are other alternatives, please let me know. Thanks

[Attached image: RGB.png, a basic sketch of the planned six-TV switching setup]

CDRIVE

Hauling 10' pipe on a Trek Shift3
To be precise, it means Red, Green, Blue: the three colors used to make other colors in composite video. In other words RGB and Composite Video are synonymous.

When labeling panels RGB takes up less real estate. :)

Chris
 

hevans1944

Hop - AC8NS
... In other words RGB and Composite Video are synonymous ....
No, RGB is three separate video signals, with sync usually on Green.

Composite video is sync + video (either monochrome or color) on a single wire-pair.

For color, the video has quadrature I and Q components that must be synchronously demodulated, using the six to ten cycles of (approximately) 3.58 MHz color burst inserted on the "back porch" of the horizontal sync signal as a phase reference. The color burst samples phase-lock a continuously running oscillator used to demodulate and recover the I and Q signals, which, in conjunction with the luminance signal, can be resolved into red, green, and blue intensities that combine to create a color image.
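As a concrete illustration of that last step, here is a minimal sketch of the YIQ-to-RGB combination in Python, using the commonly published FCC NTSC matrix coefficients (different references round these slightly differently, so treat them as illustrative):

```python
# Combine NTSC luminance (Y) with demodulated I and Q chroma
# components to recover red, green, and blue intensities.
# Coefficients are the commonly published FCC NTSC values.

def yiq_to_rgb(y, i, q):
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    # Clamp to the valid 0..1 range, since noise or over-saturated
    # chroma can push the result slightly outside it.
    return tuple(max(0.0, min(1.0, v)) for v in (r, g, b))

# Example: pure luminance with no chroma gives a neutral grey.
print(yiq_to_rgb(0.5, 0.0, 0.0))  # (0.5, 0.5, 0.5)
```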

More information on this (largely obsolete) NTSC standard can be found here.
 
"More information on this (largely obsolete) NTSC standard"

In the UK they use(d) the PAL system.
Obviously, a lot more info is needed to help the OP.
 

Harald Kapp

Moderator
This application note discusses lots of video circuits. Figure 32 on page 17 shows a 3-channel 2:1 multiplexer, suitable for switching between two RGB (or YPbPr) signals. Provided your signals are of this type, you can use one of these multiplexers per channel to switch between the live stream and the advertisements.
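If a Raspberry Pi ends up in the box anyway (see the player suggestion below), the multiplexer select lines could all be driven from one physical switch through GPIO, so a single flick switches all six channels at once. A minimal polling sketch, assuming the RPi.GPIO library and hypothetical pin assignments:

```python
import time
import RPi.GPIO as GPIO

SWITCH_PIN = 17                        # hypothetical: operator's toggle switch
SELECT_PINS = [5, 6, 13, 19, 26, 21]   # hypothetical: one mux select line per TV

GPIO.setmode(GPIO.BCM)
GPIO.setup(SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(SELECT_PINS, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        # Switch closed (input pulled low) -> select the ad input on
        # every multiplexer; switch open -> select the live stream.
        ads_selected = GPIO.input(SWITCH_PIN) == GPIO.LOW
        for pin in SELECT_PINS:
            GPIO.output(pin, GPIO.HIGH if ads_selected else GPIO.LOW)
        time.sleep(0.05)  # poll at ~20 Hz, plenty for a hand-operated switch
finally:
    GPIO.cleanup()
```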

You will need a player that outputs the ads in a loop from the USB stick. A Raspberry Pi or an old laptop may be suitable.
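For the looping playback itself, a minimal sketch for a Raspberry Pi, assuming omxplayer is installed and the ads sit in a hypothetical /media/usb/ads folder on the stick:

```python
import subprocess
import time
from pathlib import Path

ADS_DIR = Path("/media/usb/ads")  # hypothetical mount point for the USB stick

# Play every video file in the folder, one after another, forever.
while True:
    videos = sorted(ADS_DIR.glob("*.mp4"))
    if not videos:
        time.sleep(5)  # stick not mounted yet; try again shortly
        continue
    for video in videos:
        # omxplayer is the Pi's hardware-accelerated player; "-o hdmi"
        # sends audio out over HDMI along with the video.
        subprocess.run(["omxplayer", "-o", "hdmi", str(video)])
```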
Connect the live stream to one port and the ad stream to the second port of the multiplexer. The circuit as shown has 75 Ω termination resistors on the inputs. Since your ad stream will be routed to all 6 multiplexers (1 for each output channel), only one set of termination resistors is allowed. As the unterminated inputs may then cause ghosting due to signal reflections, you may instead create 6 independent ad streams by cascading video splitters (same app note, figure 8 on page 5):
[Attached image: upload_2015-12-16_13-58-49.png, cascaded video splitters producing six independent ad streams]
Note that this splitter as described in the app note is for composite video (one channel only). You'll need three per stage for RGB.
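For anyone wondering why the termination matters so much, a quick back-of-the-envelope sketch using the standard transmission-line reflection formula Γ = (Z_L - Z_0)/(Z_L + Z_0):

```python
def reflection_coefficient(z_load, z0=75.0):
    """Fraction of the incident voltage reflected at a termination
    of impedance z_load on a line of characteristic impedance z0."""
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(75.0))      #  0.00 -> one proper 75 ohm termination: no reflection
print(reflection_coefficient(75.0 / 6))  # -0.71 -> six terminations in parallel: heavy mismatch
print(reflection_coefficient(1e12))      # ~1.00 -> unterminated input: full reflection (ghosting)
```

Either extreme bounces a large fraction of the signal back up the line, which is why the splitter approach, with exactly one termination per branch, is the cleaner solution.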
 

CDRIVE

Hauling 10' pipe on a Trek Shift3
I don't want to derail this thread, but something has been bugging me since Hop corrected me regarding Composite Video vs RGB.

I have no idea why I posted what I did! I've installed well over a hundred CCTV security cameras, all of which employed composite video output via BNC connectors over 75Ω coax!

How I mixed up RGB and Composite Video is a mystery to me... Maybe I'm guilty of TUI??

Chris
 
If the OP is still interested and confirms that all video signals are composite NTSC, then Maxim makes many video mux chips. There are also very inexpensive video mux products on ebay, many of which are remote controllable through a serial port.

ak
 

hevans1944

Hop - AC8NS
Okay, if we are derailing the thread, I would like to mention a project I did sometime in the early 1980s. The company I worked for had just acquired an imaging infrared camera (FLIR, or Forward Looking Infra Red) with two liquid-nitrogen-cooled detectors, one "tuned" to 3 to 6 μm and the other to 8 to 12 μm wavelengths. These are the two atmospheric "windows" that more or less easily transmit infrared radiation with little attenuation from water vapor and aerosols naturally present in air. The camera used miniature galvanometers to scan the focal plane of a germanium lens in two directions across each sensor. Being mechanical (and resonant to boot!), the horizontal scan rate was about half the normal NTSC monochrome line rate. To compensate for this disparity, the camera digitized the sensor output and stored the results in a FIFO (First In, First Out) buffer. It did this for both directions of the horizontal scan (right to left and left to right).

Being resonant, the scan was sinusoidal, so without correction the video image would have been quite distorted if samples were taken at uniform intervals in time. IIRC, the A/D samples were controlled by an oscillator that ran at some multiple of the line rate to produce about 512 samples per scan line, but the samples were taken at uniformly spaced amplitude increments of the original sinusoidal drive signal. Voila! A nonlinear sweep results in linearly spaced samples!
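Just to illustrate the trick (not the camera's actual circuit): if the scan position follows a sinusoid, sampling at equal amplitude increments of the drive signal lands the samples at equal positions across the line, even though the sample instants are unevenly spaced in time:

```python
import math

N = 512  # samples per scan line, as in the camera

# Scan position follows x(t) = sin(t) for t in [-pi/2, pi/2]
# (one half-cycle of the resonant sweep, edge to edge).
# Sampling at uniform *amplitude* steps means taking samples at
# times t_k = arcsin(x_k) for equally spaced positions x_k.
positions = [-1.0 + 2.0 * k / (N - 1) for k in range(N)]
times = [math.asin(x) for x in positions]

# Equal position steps need short time steps near the center, where
# the mirror sweeps fastest, and longer ones out at the edges, where
# the resonant motion slows to a stop before reversing.
print(positions[1] - positions[0], positions[256] - positions[255])  # equal
print(times[1] - times[0], times[256] - times[255])                  # unequal
```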

Now all of this was marvelous enough, and we all had a fun time pointing the camera around and taking pictures (using a rack-mounted digital frame grabber about the size of a large computer monitor) and discovering how infrared imaging worked in the "real world". But to see both wavelength bands you had to flip a switch, because the camera would only display one or the other at its NTSC composite video output. The powers-that-be wanted to display both bands simultaneously on a TV monitor and capture the superimposed images on the frame grabber. After examining the circuitry for a while, I realized that the FIFO offered a solution. To accommodate NTSC scan rates, the FIFO was actually read out twice for each right-to-left or left-to-right horizontal scan. All I had to do was take the video, derived with D/As from the FIFO outputs (one from each of the two sensors), and rapidly switch them back and forth at the TV line rate. Fortunately for me, I found an integrated circuit video switch that would operate at the TV line rate. Along with a couple of TTL "glue" chips, the whole circuit, including BNC connectors, fit inside a small Bud die-cast aluminum box that mounted to the back of the camera electronics box.

When I fired it up, the dual-band imaging worked as advertised and the powers-that-be were suitably impressed. However, I wanted to go further and adapt a CCD visible-light camera to coaxially view the same scene as the infrared camera and display the combined (and suitably scaled) images (visible plus two infrared) on a single monitor. Well, it turned out that this was already being done for remote reconnaissance and heads-up targeting systems, but using very expensive staring infrared arrays instead of the crude mechanically scanned version we had. The technology was called sensor fusion and included visible, infrared, and radar. At that time it was all highly classified, and I didn't yet have the clearances or the need-to-know to access information about that technology. So when I proposed adding the CCD camera to the mix, the powers-that-be (who did have the appropriate "tickets" and knew about sensor fusion) got strangely quiet and then quickly told me, "Never mind about that. This FLIR modification is all we need."

Unfortunately I don't remember the part number for the high-speed video switch, and it has probably long since been replaced by newer devices. Witness the "magic" special video effects available today at your local TV station or Hollywood production studio to see where this eventually wound up in commercial products.

Later that same decade we bought a push-broom airborne hyperspectral scanner from a small company in New York City that hand-built them. It took 512 images simultaneously, each representing a narrow range of the spectrum, from ultraviolet to near-infrared, and recorded them in real time on 9-track digital magnetic tape. Of course today it would use a flash drive, but back then this was hot sh*t. Our software weenies had a field day extracting data from the humongous image set and displaying it in a form human beings could look at and understand. One example I remember used the spectral "signature" of a certain type of camouflage netting, nominally invisible to the naked eye, to color it bright red in the image displayed on a color monitor. Good luck trying to hide something under that particular camouflage. It's also useful for determining vegetation varieties and the health of same. Today there are many multi-spectral orbiting satellite imagers available for commercial use. Similar eye-in-the-sky hyperspectral imaging systems may have existed in the 1980s (or earlier), but they would have been (and may still be) highly classified. I haven't worked in that area since 1990, but Aviation Week and Space Technology magazine does a pretty good job of keeping me up to date.
 