Hello Paul,
Yes, and I have been looking at it. It has many things missing, notably YUV
format data streams.
Indeed. On a side issue, unless I am having a simple problem navigating
their site, have you noticed that Omnivision have recently removed the
datasheets for their sensors from their website? Yet curiously they leave
the datasheet for the SCCB bus on their site. Or am I just not looking in
the right place?
So reading your reply I immediately see my problem. Not having experience
of video, I designed my hardware around the VGA Frame Timing Diagram,
concluding that I would need to store a maximum of 307200 bytes of
information per frame. On this basis I have a 512k SRAM. I do store
exactly 307200 bytes, and for interest's sake I will explain the control
logic in the CPLD.
First, I want to examine what data I am actually capturing. If two bytes of
valid data are available per cycle of PCLK (one on each edge), then I am
capturing the data on only one edge. (ie : half the available data.) I
suspect that I am capturing only the Y values, as I get a very good
monochrome image. Whilst this makes sense for one line, if I understand
correctly, the Y values won't appear in the same position on the second
line. ie : line 1 byte 2 (not pixel 2) is Y but line 2 byte 2 won't be a Y.
I won't waste too much time thinking about this, as what really matters is
that I capture all the bytes correctly.
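To make the byte ordering concrete, here is a little Python sketch. It assumes UYVY ordering for the 4:2:2 stream (one of the orderings the OV7640 is said to support; the labels and helper name are mine, not from the datasheet):

```python
# Sketch of the byte ordering of a YUV 4:2:2 stream in UYVY order.
# Two bytes per pixel; U and V are shared between pixel pairs.

def uyvy_line(width):
    """Return the byte labels for one line of UYVY 4:2:2, 'width' pixels."""
    labels = []
    for pair in range(width // 2):
        labels += [f"U{2*pair}", f"Y{2*pair}", f"V{2*pair}", f"Y{2*pair+1}"]
    return labels

line = uyvy_line(4)
print(line)        # ['U0', 'Y0', 'V0', 'Y1', 'U2', 'Y2', 'V2', 'Y3']

# Sampling every other byte, starting at offset 1, picks out only Y values,
# which would explain a clean monochrome image:
print(line[1::2])  # ['Y0', 'Y1', 'Y2', 'Y3']
```

Note that in this ordering every line carries the same pattern, so byte 2 of line 2 would be chroma again just like byte 2 of line 1 (if this assumption holds).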
Back to the control logic of the CPLD.
Remember here, that I am capturing 307200 bytes. In effect, I "gate" the
SRAM such that it stores 640 samples during the time that HREF is active.
How do you determine capture?
A counter counting the first 640 falling edges of PCLK? (as this is
determined by COMD bit 6 as the reference edge for Y[7:0]). When you run
out of data space, what happens to the other data? Is it lost?
The CPLD clock is running at twice PCLK, which in two cycles gives me time
to increment the address counter and strobe !WE.
Actually, I don't look at PCLK at all. I first wait for VSYNC. After VSYNC
has gone low, I know the next time HREF goes high, it will be the first line
of the frame.
Then I wait for HREF to go high. Whilst it is high, I alternately strobe
!WE and increment the address counter. After each byte write, I check if
HREF is still high, if it is, I perform another write cycle. If it has gone
low, I wait for one of two possible events. (And this, I think, answers
your question.)

If I see HREF go high again, it is the start of a new line and I
start writing bytes into SRAM. If on the other hand I don't see HREF go
high, but instead see VSYNC go high, then clearly it is the start of a new
frame, and I have captured all the information relevant to one frame. At
this point I conclude the capture sequence is complete. Since I am
interested in capturing a still frame, not video, this simple approach seems
to work fine.
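The capture sequence above can be sketched as a small software model. The signal trace here is made up purely for illustration; in hardware, VSYNC, HREF and the data bus of course come from the OV7640, and the "writes" are !WE strobes into the SRAM:

```python
# Software model of the CPLD capture sequence described above.
# Each sample is a (vsync, href, data) tuple, one per potential write slot.

def capture_frame(samples):
    """Return the bytes that would be stored in SRAM for one frame."""
    sram = []
    it = iter(samples)
    # 1. Wait for VSYNC to go high and then low: the next HREF is line 1.
    seen_vsync = False
    for vsync, href, data in it:
        if vsync:
            seen_vsync = True
        elif seen_vsync:
            break
    # 2. While VSYNC stays low, write a byte whenever HREF is high.
    for vsync, href, data in it:
        if vsync:        # VSYNC high again: new frame, capture complete
            break
        if href:         # active line: strobe !WE and bump the address
            sram.append(data)
    return sram

# Two 4-byte lines framed by VSYNC pulses:
trace = ([(1, 0, 0), (0, 0, 0)]               # VSYNC pulse, then idle
         + [(0, 1, b) for b in (1, 2, 3, 4)]  # line 1
         + [(0, 0, 0)]                        # horizontal blanking
         + [(0, 1, b) for b in (5, 6, 7, 8)]  # line 2
         + [(1, 0, 0)])                       # next VSYNC ends the capture
print(capture_frame(trace))  # [1, 2, 3, 4, 5, 6, 7, 8]
```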
To capture on both edges of PCLK, I will need to (a) reduce the PCLK rate
by half, and (b) increase the SRAM size from 512k to 600k.
The SRAM size is now an issue. Whilst 512k is reasonably priced, any larger
and I think my approach should change.
Remaining production lifetime of the OV7640 may be an issue, and while I'm
at it, why not make provision for an OV9650 (1.3 Mpixels)? I see at
2k1.co.uk that the OV9650 is less expensive than the OV7640! I don't
recognize the package outline (from the limited information I can see on
this device), and I wonder, can I even hand assemble the OV9650? I'm
waiting for a response to a request for its datasheet.
The next issue will be RAM, so maybe it is time to look at a (less
expensive) DRAM. I would need 1300 x 1028 x 2(!) = 2672800 bytes. This is
only £3.24 + VAT at Farnell, but of course the (CPLD) controller complexity
is much higher.
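The buffer sizes above are easy to sanity-check (byte counts only; 4:2:2 means two bytes per pixel, and the 1300 x 1028 figure is the one quoted above for the OV9650):

```python
# Quick check of the frame-buffer sizes discussed above.

def frame_bytes(width, height, bytes_per_pixel=2):
    """Bytes needed to store one frame; 4:2:2 is 2 bytes per pixel."""
    return width * height * bytes_per_pixel

vga_y_only = frame_bytes(640, 480, 1)  # Y bytes only (one edge of PCLK)
vga_full   = frame_bytes(640, 480)     # both edges: Y plus chroma
sxga_full  = frame_bytes(1300, 1028)   # OV9650 array size quoted above

print(vga_y_only)  # 307200  -> fits a 512k (524288-byte) SRAM
print(vga_full)    # 614400  -> exactly 600 KiB, too big for 512k
print(sxga_full)   # 2672800 -> needs roughly 2.6 MB of DRAM
```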
Firstly, PCLK is the Pixel clock, NOT the byte clock, in frequency.
OK. I will try and scope PCLK and Y[7:0] to see if I get valid data on both
edges. Since I don't have a DSO, this might be a little bit of a challenge,
but I'll soon see.
Secondly, this is a datasheet for a MONO and colour device, and preliminary
at that. Not uncommon from Omnivision.
Yes. I never found an OV7640 datasheet that wasn't "preliminary".
Now this is why I separate the terms BYTEs and PIXELS. COMD bit 6 is a
big clue to me that if you were to check with a scope to see if you
have STABLE data on the rising and falling edges of PCLK, that is
where your colour data is.
Thank you for this explanation.
Bad move displaying an image; unless it is of a WELL illuminated test
chart, it will tell you very little, as:-
a) If by chance you are only capturing the first 640 bytes of each line,
how do you know that data is the FULL width of what the device is
capturing?
I believe I am capturing every other byte of a line, ie : 640 bytes when in
fact there are 1280 "available" for capture.
b) If the target image is not very colourful (the vast majority of views).
Chrominance data is offset by 128 as it is SIGNED values; this can give
every second pixel a grey mid-range value. You would be amazed how little
range U and V have on a lot of video sources, even from a camera.
OK. In fact, I may already have seen this: whilst playing with the
registers and selecting different output formats, I captured data that was
grey and had VERY little variation in values. I think this was due to
shifting the format from UYVY to YUYV, which makes sense. I was capturing
the colour information, but not the luminance.
c) You are not capturing the colour data and have not made provisions for
it.
This is exactly the conclusion I draw thanks to your explanation.
Put some form of test chart in front of the camera (even coloured rainbow
ribbon cable - the poor man's colour bars test chart), know what you
should
be seeing. Look at the DATA from the first byte onwards to see if it makes
sense. Even for a black and white target the U and V data should be
around 128 for every second BYTE for no colour.
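The offset-by-128 point can be shown numerically. A small sketch (the helper name is mine; the +128 offset for signed chroma is the standard 4:2:2 convention):

```python
# Why a colourless scene gives U and V near 128: in 4:2:2 streams U and V
# are signed values carried with a +128 offset, so zero chroma (grey) is
# transmitted as the byte value 128.

def encode_chroma(u_signed, v_signed):
    """Map signed chroma (-128..127) to the offset-binary bytes on the bus."""
    return u_signed + 128, v_signed + 128

print(encode_chroma(0, 0))     # (128, 128): a grey pixel pair
print(encode_chroma(-20, 35))  # (108, 163): mildly coloured
```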
I will modify the logic as follows :
Slow PCLK down so that I can capture every BYTE of output. I will exceed
the storage capacity of the SRAM by doing this, so I will come up with a
plan to capture, say, half the number of vertical lines. This will give me
the opportunity to check that I capture the output correctly. (For at least
half the frame, anyway.)
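A quick check of the half-frame plan (my arithmetic only, not from the datasheet):

```python
# Capturing every byte but only every second line halves the store needed:
full_frame = 640 * 480 * 2     # 614400 bytes: exceeds a 512k SRAM
half_lines = 640 * 240 * 2     # 307200 bytes: fits comfortably
print(full_frame, half_lines)  # 614400 307200
assert half_lines <= 512 * 1024
```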
I can then work on a new memory design sufficient to capture all bytes in a
DRAM.
To support 4:2:2 as 8-bit Y, U and V each, and ITU 656, it MUST be going at
twice the pixel rate; that is a function of 4:2:2 in 656 format. This
format is not fully described in their data sheet, and their diagrams for
RGB 555 and RGB 565 leave a lot to be desired, as these suggest half the H
resolution is provided. I suspect this is what they refer to as
'preliminary'.
Pity the datasheet wasn't more accurate :-(
I really wish they provided a VBS signal from this device (Video, Blanking,
and Sync) to take the Y signal to some form of other monitor to compare
the data captured.
Not that I've looked closely, but don't some other OV devices have a
separate Y channel (ie : 8 pins for Y and 8 pins for UV)?
It is a pity that devices such as the OV519 are not available in small
quantities. I saw another device, a Sanyo LC82210, that would make image
capture a breeze. (I saw a simple block diagram here
http://www.alldatasheet.co.kr/datasheet-pdf/pdf_kor/SANYO/LC82210.html but
this part is not readily available.)
Still, maybe it is more fun doing it the hard way, and then the only supply
problem becomes the lifetime of the OV part.
Paul.