Lewin A.R.W. Edwards
Hi all,
I'm trying to debug a problem - or even decide whether I have a problem -
in a system that records data to tape. (This is my camcorder-telemetry
device, in a slightly different incarnation.) This thing works very
nicely on my digital camcorder. However, I want to use the same circuit
to lay down a data track on an audio cassette, and I'm encountering odd
bit errors.
To describe the format very briefly: the system has a 6.250 kHz interrupt
handler which toggles a digital output, resulting in a nominal 3.125 kHz
square wave going out to the recorder. On every odd interrupt, a bit is
taken off the head of the Tx queue; if that bit is '0', the toggle
operation is skipped for that interrupt. IOW, the bit cell is bracketed
by edges: if there's an edge in the middle it's a '1', and if there is
no edge in the middle it's a '0'.
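For the curious, the encoder ISR boils down to something like the
sketch below (simplified; the function names are placeholders, not my
actual code):

extern int  tx_queue_get_bit(void);          /* pop next bit from Tx queue */
extern void write_output(unsigned char lvl); /* drive the output pin       */

/* 6.25 kHz timer ISR -- simplified sketch of the encoder logic. */
void timer_isr(void)
{
    static unsigned char mid_cell;   /* which half of the bit cell */
    static unsigned char out_level;  /* current output state       */

    if (!mid_cell) {
        /* Cell boundary: always put an edge here. */
        out_level ^= 1;
    } else {
        /* Middle of the cell: pull the next bit off the Tx queue
           and toggle only if it is a '1'; skip the toggle for '0'. */
        if (tx_queue_get_bit())
            out_level ^= 1;
    }
    write_output(out_level);
    mid_cell ^= 1;
}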
For those who want the gory details, the actual format has another layer
on top of this. The outbound data is divided into packets, each
comprising (packet assembly is sketched after the list):
48 bits '1' - timing header
1 bit '0' - sync bit
48 x - 9-bit bytes (data byte followed by a '0' spacer)
2 x - 9-bit CRC bytes (data byte followed by a '0' spacer)
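Queuing a packet is then just a matter of pushing bits onto the Tx
queue, roughly as below (the MSB-first bit order and the crc16()
routine are illustrative stand-ins, not necessarily what I actually
use):

extern void           queue_bit(int bit);
extern unsigned short crc16(const unsigned char *buf, int len);

/* Sketch of packet assembly onto the Tx bit queue. */
void send_packet(const unsigned char data[48])
{
    unsigned short crc;
    unsigned char  byte;
    int i, b;

    for (i = 0; i < 48; i++)            /* timing header: 48 '1' bits   */
        queue_bit(1);
    queue_bit(0);                       /* sync bit                     */

    for (i = 0; i < 48; i++) {          /* 48 data bytes, each followed */
        for (b = 7; b >= 0; b--)        /* by a '0' spacer              */
            queue_bit((data[i] >> b) & 1);
        queue_bit(0);
    }

    crc = crc16(data, 48);              /* 2 CRC bytes, same framing    */
    for (i = 0; i < 2; i++) {
        byte = (i == 0) ? (unsigned char)(crc >> 8) : (unsigned char)crc;
        for (b = 7; b >= 0; b--)
            queue_bit((byte >> b) & 1);
        queue_bit(0);
    }
}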
The decoder listens for a burst of at least 10 consecutive "1" bits
(which can't occur in the middle of a data stream). While acquiring
this, it measures the bit cell timing. Once it has acquired enough "1"s,
it keeps sampling until either an excessively out-of-time edge occurs
(more than 25% deviation from the nominal bit cell), or the sync bit is
found. In the latter case, it gathers the incoming data, checks the CRC,
and decides whether to pass the data on to the host.
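The heart of the decoder is just classifying the interval between
successive edges against the cell period learned from the header.
Roughly like this (a sketch only - the real edge capture and state
machine are omitted, and applying the 25% window per interval is a
simplification):

/* half_cell is the half-bit-cell period measured from the '1's in
   the timing header. */
enum edge_class { EDGE_HALF, EDGE_FULL, EDGE_BAD };

enum edge_class classify_edge(unsigned int dt, unsigned int half_cell)
{
    unsigned int full_cell = 2 * half_cell;

    if (dt >= half_cell - half_cell / 4 && dt <= half_cell + half_cell / 4)
        return EDGE_HALF;   /* mid-cell edge: we're inside a '1'        */
    if (dt >= full_cell - full_cell / 4 && dt <= full_cell + full_cell / 4)
        return EDGE_FULL;   /* edge a whole cell later: a '0' bit cell  */
    return EDGE_BAD;        /* excessively out of time: go back to hunting */
}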
Now for the part where I'd like suggestions. When I record the above
signal on my digital camcorder, the output waveform on playback looks
almost exactly like the input waveform, except that it's understandably
centered around the 0V line. When I record on a simple cheap cassette
player, however, I get a very spiky waveform, which I guess makes sense
when I think about the physics of it. Traces are at
<http://www.larwe.com/camctrl.jpg>; sorry for the fuzzy photo.
This spiky waveform actually works quite well with the decoder, because
the decoder only looks for edges and misses the trailing edge of each
spike due to interrupt latency. The higher BER I'm finding *could* be
attributable entirely to the much lower-quality tape and transport in
the cassette unit.
But I am wondering what I can do to make the signal look more like what
I expect - in particular, whether I should be doing something to the
signal before sending it on to the recorder, or whether the
processing/interface circuit I need belongs on the playback side. For
instance, should I be trying to match the speaker impedance the recorder
expects to see, probably 8 or 16 ohms? My input is very high-impedance.
Any comments/suggestions?