Justin Schoeman wrote:
> Gerd Knorr wrote:
>
>>> But my primary question was how to address the capturing of video
>>> from two sources. (I'd prefer it if it were possible to use one
>>> single thread because (IMHO) it ensures better sync.)
>>
>> With the v4l2 API you can use select() to do exactly that.
>>
>> Gerd
>
> With the v4l2 API, you get timestamps anyway, so you can decide after
> the fact which two images are closest in time to each other... No
> real need for synchronous capture then...
...snip...
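To make Gerd's select() suggestion concrete, here is a minimal sketch.
It assumes the driver supports read() I/O (the mmap/VIDIOC_DQBUF path
works with select() the same way), and the device names and frame size
are just placeholders:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/select.h>

/* Wait on both capture devices at once; whichever has a frame
 * ready gets read first.  Placeholder device names and a fixed
 * frame size -- adjust for your format. */
int main(void)
{
    int fd0 = open("/dev/video0", O_RDONLY);
    int fd1 = open("/dev/video1", O_RDONLY);
    char frame[640 * 480 * 2];            /* e.g. YUYV 640x480 */

    if (fd0 < 0 || fd1 < 0) {
        perror("open");
        return 1;
    }

    for (;;) {
        fd_set rfds;
        int maxfd = (fd0 > fd1 ? fd0 : fd1);

        FD_ZERO(&rfds);
        FD_SET(fd0, &rfds);
        FD_SET(fd1, &rfds);

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }
        if (FD_ISSET(fd0, &rfds))
            read(fd0, frame, sizeof frame);  /* frame from camera 0 */
        if (FD_ISSET(fd1, &rfds))
            read(fd1, frame, sizeof frame);  /* frame from camera 1 */
    }
    return 0;
}

One thread services whichever camera has data ready first.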
Since you are "tracking pointers", I would guess that you are
translating the information into coordinates for calculation purposes.
By timestamping the coordinates from each viewpoint you should be able
to interpolate the points from each view. I would probably use at
least quadratic interpolation through three known points for each
view, then generate interpolated data from each view at a fixed or
variable clock rate. Using {x, y, milliseconds} three-dimensional
points for each pointer could increase accuracy: storing two sets of
three-dimensional data rather than guesstimating the time is more
accurate.
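A minimal sketch of that quadratic interpolation, in Lagrange form;
the struct and function names are just illustrative:

/* One tracked point: image coordinates plus capture time. */
struct sample {
    double t;   /* milliseconds */
    double x;
    double y;
};

/* Lagrange quadratic through three samples, evaluated at time t.
 * Call once for x and once for y.  The three timestamps must be
 * distinct or the denominators vanish. */
static double quad_interp(double t, const struct sample s[3], int use_y)
{
    double v0 = use_y ? s[0].y : s[0].x;
    double v1 = use_y ? s[1].y : s[1].x;
    double v2 = use_y ? s[2].y : s[2].x;
    double l0 = (t - s[1].t) * (t - s[2].t) /
                ((s[0].t - s[1].t) * (s[0].t - s[2].t));
    double l1 = (t - s[0].t) * (t - s[2].t) /
                ((s[1].t - s[0].t) * (s[1].t - s[2].t));
    double l2 = (t - s[0].t) * (t - s[1].t) /
                ((s[2].t - s[0].t) * (s[2].t - s[1].t));

    return v0 * l0 + v1 * l1 + v2 * l2;
}

Evaluate both views at the same clock ticks and the resulting
coordinate pairs are directly comparable.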
You may also want to build correction algorithms into your
translation: unless you are using a high-speed shutter, the top of
each field is "older" than the bottom of the field, and the alternate
field contains slightly newer information than the first, interlaced
between the first field's lines.
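A rough estimate of the per-line capture time, assuming a 50 Hz field
rate and ignoring blanking (both assumptions, so plug in your camera's
real numbers):

/* Approximate capture time of a scan line, relative to the frame
 * timestamp.  Assumes PAL-like 50 fields/s and ignores vertical
 * blanking, so treat the result as an estimate. */
static double line_time_ms(int frame_row, int lines_per_frame)
{
    double field_period_ms = 1000.0 / 50.0;  /* 20 ms per field */
    int field = frame_row % 2;               /* 0 = first field */
    int line  = frame_row / 2;               /* line within its field */
    int lines_per_field = lines_per_frame / 2;

    return field * field_period_ms +
           ((double)line / lines_per_field) * field_period_ms;
}

Add that offset to the frame timestamp for the row where the pointer
was found.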
One of the most important things to know, in order to provide higher
accuracy, is the technical detail of the camera. For example, is the
data for each frame captured stop-action, or is the scene changing
while the frame is scanned out (the latter is most likely)? What is
the frequency drift rate of the camera, and can the camera take an
external clock? There are other qualities of cameras that are less
important, like blooming, bleeding, blurring, pixel variation and the
linear accuracy of the optics. These can all be counteracted by
running calibration tests and building a mathematical representation
of the flaws. For instance, a spherical aberration in the lens can be
measured and corrected with a quadratic representation of the error.
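As a sketch of what such a quadratic radial correction could look
like; the coefficient k1 and the image centre here are hypothetical,
and you would measure both from a calibration target:

/* Undo a simple radial lens error: r' = r * (1 + k1 * r^2).
 * k1, cx and cy must come from your own calibration; the values
 * below are placeholders, not universal constants. */
static void undistort(double *x, double *y)
{
    const double k1 = -1.5e-7;            /* measured, hypothetical */
    const double cx = 320.0, cy = 240.0;  /* assumed image centre */
    double dx = *x - cx, dy = *y - cy;
    double r2 = dx * dx + dy * dy;
    double s  = 1.0 + k1 * r2;

    *x = cx + dx * s;
    *y = cy + dy * s;
}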
I don't want to give all the solutions, because after all it is your
project and your credit, not mine. PS: I was a math/physics major, but
now I just work as a Senior Network Administrator for an Internet
Service Provider. Maybe one day I will go back to school and get a
master's or doctorate, so that I might have a chance at getting a job
in my preferred field of theoretical physics.
The best advice I can give is to figure out what you want to achieve
from the most basic aspects of the physical components, and to
calculate representations of the important aspects (some errors are
diminished by other factors). Then build corrective functions and
process the data to achieve corrected results. I would suggest it is
important to store the raw data, just in case you discover errors in
your corrective routines: reprocessing raw data is more accurate than
processing already-processed data.
As far as threading the capture goes, multithreading has advantages if
you are maintaining timestamps for each sample. Since combining the
two views into three-dimensional data, and those into a
four-dimensional data stream, requires processing anyway, you might as
well build in corrective routines that provide the accuracy the
project demands. I doubt you want 3D jitter creeping in; it could
masquerade as inertial instability and make complex motion analysis
inaccurate. But, not knowing your project, you may require far less
stringent accuracy.
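To tie this back to Justin's timestamp point, pairing frames captured
by two independent threads might look something like this; the types
are illustrative:

#include <math.h>
#include <stddef.h>

/* A captured frame stamped by its capture thread. */
struct stamped_frame {
    double t_ms;   /* capture timestamp, milliseconds */
    /* ... pixel data ... */
};

/* For a frame from camera A, find the index of the frame from
 * camera B that is closest to it in time.  Linear scan for
 * clarity; nb must be at least 1. */
static size_t closest_in_time(const struct stamped_frame *a,
                              const struct stamped_frame *b,
                              size_t nb)
{
    size_t best = 0;
    double best_dt = fabs(b[0].t_ms - a->t_ms);

    for (size_t i = 1; i < nb; i++) {
        double dt = fabs(b[i].t_ms - a->t_ms);
        if (dt < best_dt) {
            best_dt = dt;
            best = i;
        }
    }
    return best;
}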
Guy Fraser
_______________________________________________
Video4linux-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/video4linux-list