Hi Sylwester,

On Wed, May 22, 2013 at 11:41:50PM +0200, Sylwester Nawrocki wrote:
> [...]
> >>>I'm in favour of using a separate video buffer queue for passing
> >>>low-level metadata to user space.
> >>
> >>Sure. I certainly see a need for such an interface. I wouldn't like to
> >>see it
> >>as the only option, however. One of the main reasons of introducing
> >>MPLANE
> >>API was to allow capture of meta-data. We are going to finally prepare
> >>some
> >>RFC regarding usage of a separate plane for meta-data capture. I'm not
> >>sure
> >>yet how it would look exactly in detail, we've just discussed this topic
> >>roughly with Andrzej.
> >
> >I'm fine with that not being the only option; however, it's unbeatable
> >when it comes to latency. So perhaps we should allow using multi-plane
> >buffers for the same purpose as well.
> >
> >But how to choose between the two?
> 
> I think we need example implementations of metadata capture both over
> the multi-plane interface and with a separate video node. Without such
> an implementation/API draft it is a bit difficult to discuss this
> further.

Yes, that'd be quite nice.
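
Just to make the multi-plane option concrete, here's roughly how I'd
imagine user space setting it up. This is only a sketch: nothing in the
current API marks a plane as carrying metadata, so the "plane 1 is
metadata" convention and the sizes below are pure assumptions.

        #include <string.h>
        #include <sys/ioctl.h>
        #include <linux/videodev2.h>

        /* Sketch: negotiate a two-plane format where plane 1 carries
         * low-level metadata. The plane layout is hypothetical. */
        int setup_capture_with_meta_plane(int fd)
        {
                struct v4l2_format fmt;

                memset(&fmt, 0, sizeof(fmt));
                fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
                fmt.fmt.pix_mp.width = 1920;
                fmt.fmt.pix_mp.height = 1080;
                fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_SGRBG10;
                fmt.fmt.pix_mp.num_planes = 2;
                /* Plane 0: image data. */
                fmt.fmt.pix_mp.plane_fmt[0].sizeimage = 1920 * 1080 * 2;
                /* Plane 1: low-level metadata. How its size and layout
                 * get described to user space is exactly what an RFC
                 * would need to define; 4096 is made up. */
                fmt.fmt.pix_mp.plane_fmt[1].sizeimage = 4096;

                return ioctl(fd, VIDIOC_S_FMT, &fmt);
        }

The latency argument above follows directly from this: the metadata comes
back in the same buffer as the image, so a single VIDIOC_DQBUF delivers
both.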

There are actually a number of things that I think would be needed to
support what's discussed above. Extended frame descriptors (I'm preparing
RFC v2 --- yes, really!) are one.

Also, creating video nodes based on how many different content streams
there are doesn't make much sense to me. A quick and dirty solution would
be to create a low-level metadata queue type to avoid having to create
more video nodes. I think I'd prefer a more generic solution, though.
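
For the quick and dirty variant the user space side could look like the
following. V4L2_BUF_TYPE_META_CAPTURE is hypothetical --- no such buffer
type exists today; it stands for whatever the new low-level metadata
queue type would end up being called.

        struct v4l2_requestbuffers reqbufs;

        memset(&reqbufs, 0, sizeof(reqbufs));
        /* Hypothetical buffer type for the low-level metadata queue. */
        reqbufs.type = V4L2_BUF_TYPE_META_CAPTURE;
        reqbufs.memory = V4L2_MEMORY_MMAP;
        reqbufs.count = 4;

        if (ioctl(fd, VIDIOC_REQBUFS, &reqbufs) < 0)
                return -1;

        /* QBUF/DQBUF then work as usual on this queue. Matching a
         * dequeued metadata buffer to its frame would have to go by
         * the sequence number or timestamp in struct v4l2_buffer,
         * which is where the extra latency comes from compared to
         * the in-buffer metadata plane. */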

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ai...@iki.fi     XMPP: sai...@retiisi.org.uk