Hello,

I am trying to implement a hard real-time vision system on Linux (RH
7.3, kernel 2.4.18) using a BT848 capture card and V4L (this is on a 2.4 GHz P4).
I need to capture 3 NTSC signals, at 320x240x24 without dropping any
frames (or dropping very very few -- people's lives aren't at risk if
they are dropped, but I need that rate for motor control). Before now, I
was using a video assembler with 4 inputs, which fed a new analog signal
into one capture card (which was captured at 640x480). Now, I have a
card that has 4 grabbers on-card.  I'm using DMA, with 32 buffers
(although the number doesn't seem to matter beyond 2). Due to the lack of
synchronization between the signals, I am getting killed by frames dropping
all over the place. This makes intuitive sense... but I don't know how
to get around it. Does anyone have suggestions? I can't afford copious
memory copies (that's why I'm using DMA in the first place). 

Another question: how can I detect whether a frame has been dropped?
It seems that if I set the capture resolution to 640x480x24 with just
one grabber in use and do NO processing, the loop oftentimes takes
> 40 ms (which means a frame was dropped, no?) (timed using struct
timevals).

The loop being:

current_frame = 0;
VIDIOCMCAPTURE(current_frame);
do {
        VIDIOCMCAPTURE((current_frame + 1) % nbuffers);
        VIDIOCSYNC(current_frame);
        /* processing - NO PROCESSING NOW */
        current_frame = (current_frame + 1) % nbuffers;
} while (1);

Could going to V4L2 solve some of my problems?

Anyway, thank you very much for any help you can give me.

Daniel



--
video4linux-list mailing list
Unsubscribe mailto:[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/video4linux-list