Re: WinTV HVR-900 (usb 2040:6500) (model 65008) / no audio but clicking noise

2011-02-26 Thread AW
Yesterday I wrote:
 Now I bought a Hauppauge WinTV HVR-900 (USB, DVB-T/analog hybrid).


Today I found that I have quite good DVB-T connectivity, but sometimes there
are too many errors, so I am still interested in analog TV.

After some rebooting, and after I dropped a lot of firmware files from
http://konstantin.filtschew.de/v4l-firmware/firmware_v3.tgz into /lib/firmware
(in addition to xc3028-v27.fw), I can hear the analog audio, but from time to
time there is still this strong clicking noise.
Example: http://www.wgboome.de./20110226,hvr.mpg (I blurred the picture due to
copyright considerations.)

Could it be that I am using the wrong AMUX? I found that theory here (but I
don't know if I can just change the kernel module):
http://www.freak-search.com/de/thread/332374/linux-dvb_em28xx-audio_hvr-900_b3c0_id_20406502_hauppa


-arne




Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Hans Verkuil
On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
 2011/2/24 Edward Hervey bilb...@gmail.com:
 
   What *needs* to be solved is an API for data allocation/passing at the
  kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
  userspace (like GStreamer) can pass around, monitor and know about.
 
 I think the patches sent out from ST-Ericsson's Johan Mossberg to
 linux-mm for HWMEM (hardware memory) deal exactly with buffer
 passing, pinning of buffers and so on. The CMA (Contiguous Memory
 Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
 so CMA provides buffers and HWMEM passes them around.
 
 Johan, when you re-spin the HWMEM patchset, can you include
 linaro-dev and linux-media in the CC?

Yes, please. This sounds promising and we at linux-media would very much like
to take a look at this. I hope that the CMA + HWMEM combination is exactly
what we need.

Regards,

Hans

 I think there is *much* interest
 in this mechanism, people just don't know from the name what it
 really does. Maybe it should be called mediamem or something
 instead...
 
 Yours,
 Linus Walleij
 
 

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


Re: [RFC] snapshot mode, flash capabilities and control

2011-02-26 Thread Hans Verkuil
On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:

snip

   configure the sensor to react on an external trigger provided by the flash
   controller is needed, and that could be a control on the flash sub-device.
   What we would probably miss is a way to issue a STREAMON with a number of
   frames to capture. A new ioctl is probably needed there. Maybe that would be
   an opportunity to create a new stream-control ioctl that could replace
   STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
   operation, and easily map STREAMON and STREAMOFF to the new ioctl in
   video_ioctl2 internally).
  
  How would this be different from queueing n frames (in total; count
  dequeueing, too) and issuing streamon? --- Except that when the last frame
  is processed the pipeline could be stopped already before issuing STREAMOFF.
  That does indeed have some benefits. Something else?
 
 Well, you usually see in your host driver that the videobuffer queue is
 empty (no more free buffers are available), so you stop streaming
 immediately, too.

This probably assumes that the host driver knows that this is a special queue?
Because in general drivers will simply keep capturing in the last buffer and not
release it to userspace until a new buffer is queued.

That said, it wouldn't be hard to add some flag somewhere that puts a queue in
a 'stop streaming on last buffer capture' mode.
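
For reference, the queue-n-frames variant can already be expressed with the
existing ioctls. A minimal userspace sketch (assuming the buffers were set up
with VIDIOC_REQBUFS beforehand; error handling trimmed, N_FRAMES arbitrary):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#define N_FRAMES 4

static int capture_n_frames(int fd)
{
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	struct v4l2_buffer buf;
	int i;

	/* Queue exactly N_FRAMES buffers, then start streaming. */
	for (i = 0; i < N_FRAMES; i++) {
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;
		buf.index = i;
		if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
			return -1;
	}
	if (ioctl(fd, VIDIOC_STREAMON, &type) < 0)
		return -1;

	/* Dequeue the same number of filled buffers. */
	for (i = 0; i < N_FRAMES; i++) {
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;
		if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
			return -1;
		/* frame data is in the mapping for buf.index */
	}

	/* Until STREAMOFF the driver keeps capturing into the last
	 * buffer; the flag discussed above would let it stop here. */
	return ioctl(fd, VIDIOC_STREAMOFF, &type);
}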

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


Re: [RFC/PATCH 0/1] New subdev sensor operation g_interface_parms

2011-02-26 Thread Hans Verkuil
On Friday, February 25, 2011 19:23:43 Sakari Ailus wrote:
 Hi Guennadi and others,
 
 Apologies for the late reply...
 
 Guennadi Liakhovetski wrote:
  On Wed, 23 Feb 2011, Hans Verkuil wrote:
  
  On Tuesday, February 22, 2011 22:42:58 Sylwester Nawrocki wrote:
  Clock values are often being rounded at runtime and do not always reflect
  exactly the numbers fixed at compile time. And negotiation could help to
  obtain exact values at both sensor and host side.
 
  The only static data I am concerned about are those that affect signal
  integrity. After thinking carefully about this I realized that there is
  really only one setting that is relevant to that: the sampling edge. The
  polarities do not matter in this.
  
  Ok, this is much better! I'm still not perfectly happy having to punish 
  all just for the sake of a couple of broken boards, but I can certainly 
  much better live with this, than with having to hard-code each and every 
  bit. Thanks, Hans!
 
 How much punishing would actually take place without autonegotiation?
 How many boards do we have in total? I counted around 26 of
 soc_camera_link declarations under arch/. Are there more?
 
 An example of hardware which does care about clock polarity is the
 N8[01]0. The parallel clock polarity is inverted since this actually
 does improve reliability. In an ideal hardware this likely wouldn't
 happen but sometimes the hardware is not exactly ideal. Both the sensor
 and the camera block support non-inverted and inverted clock signal.
 
 So at the very least it should be possible to provide this information
 in the board code even if both ends share multiple common values for
 parameters.
 
 There have been many comments on the dangers of the autonegotiation and
 I share those concerns. One of my main concerns is that it creates an
 unnecessary dependency from all the boards to the negotiation code, the
 behaviour of which may not change.

OK, let me summarize this and if there are no objections then Stan can start
implementing this.

1) We need two subdev ops: one reports the bus config capabilities and one that
sets it up. Note that these ops should be core ops since this functionality is
relevant for both sensors and video receive/transmit devices.

2) The clock sampling edge and polarity should not be negotiated but must be set
from board code for both subdevs and host. In the future this might even require
a callback with the clock frequency as argument.

3) We probably need a utility function that given the host and subdev
capabilities will return the required subdev/host settings.

4) soc-camera should switch to these new ops.

Of course, we also need MIPI support in this API. The same considerations
apply to MIPI as to the parallel bus: settings that depend on the hardware
board design should come from board code, others can be negotiated. Since I
know next to nothing about MIPI I will leave that to the experts...

One thing I am not sure about is if we want separate ops for parallel bus and
MIPI, or if we merge them. I am leaning towards separate ops as I think that
might be easier to implement.
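
To make 1) and 3) concrete, a rough sketch of what the ops could look like
(illustrative only: none of these names exist in the tree yet, and the actual
layout is up to whoever implements it):

struct v4l2_subdev;

/* One set of bus parameters; flags would encode sampling edge,
 * polarities, bus width and the like. */
struct v4l2_bus_settings {
	unsigned int flags;
};

/* Candidates for struct v4l2_subdev_core_ops: report the supported
 * configurations, and program one concrete configuration. */
struct v4l2_subdev_bus_ops {
	int (*g_bus_caps)(struct v4l2_subdev *sd,
			  struct v4l2_bus_settings *caps);
	int (*s_bus_config)(struct v4l2_subdev *sd,
			    const struct v4l2_bus_settings *cfg);
};

/* The utility of point 3): given both sides' capabilities, compute
 * compatible subdev/host settings or fail. */
int v4l2_bus_settings_match(const struct v4l2_bus_settings *host,
			    const struct v4l2_bus_settings *subdev,
			    struct v4l2_bus_settings *best);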

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


Re: [RFC] snapshot mode, flash capabilities and control

2011-02-26 Thread Guennadi Liakhovetski
On Sat, 26 Feb 2011, Hans Verkuil wrote:

 On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
 
 snip
 
configure the sensor to react on an external trigger provided by the flash
controller is needed, and that could be a control on the flash sub-device.
What we would probably miss is a way to issue a STREAMON with a number of
frames to capture. A new ioctl is probably needed there. Maybe that would be
an opportunity to create a new stream-control ioctl that could replace
STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
operation, and easily map STREAMON and STREAMOFF to the new ioctl in
video_ioctl2 internally).
   
   How would this be different from queueing n frames (in total; count
   dequeueing, too) and issuing streamon? --- Except that when the last frame
   is processed the pipeline could be stopped already before issuing STREAMOFF.
   That does indeed have some benefits. Something else?
  
  Well, you usually see in your host driver that the videobuffer queue is
  empty (no more free buffers are available), so you stop streaming
  immediately, too.
 
 This probably assumes that the host driver knows that this is a special
 queue? Because in general drivers will simply keep capturing in the last
 buffer and not release it to userspace until a new buffer is queued.

Yes, I know about this spec requirement, but I also know that not all
drivers do that, and not everyone is happy about that requirement :)

 That said, it wouldn't be hard to add some flag somewhere that puts a queue
 in a 'stop streaming on last buffer capture' mode.

No, it wouldn't... But TBH this doesn't seem like the most elegant and
complete solution. Maybe we have to think a bit more about it - which
consequences switching into the snapshot mode has on the host driver,
apart from stopping after N frames. So, this is one of the possibilities,
not sure if the best one.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Felipe Contreras
Hi,

On Fri, Feb 18, 2011 at 6:39 PM, Robert Fekete robert.fek...@linaro.org wrote:
 To make a long story short:
 Different vendors provide custom OpenMAX solutions for, say, Camera/ISP. In
 the Linux eco-system there is V4L2 doing much of this work already, and it is
 evolving with mediacontroller as well. Then there is the integration in
 GStreamer... Which solution is the best way forward? Current discussions so
 far put V4L2 greatly in favor of OMX.
 Please have in mind that OpenMAX as a concept is more like GStreamer in many
 senses. The question is whether camera drivers should have OMX or V4L2 as
 the driver front end? This may perhaps apply to video codecs as well. Then
 there is how best to make use of this in GStreamer in order to achieve
 no-copy, highly efficient multimedia pipelines. Is gst-omx the way forward?

 Let the discussion continue...

We are talking about 3 different layers here which don't necessarily
overlap. You could have a v4l2 driver, which is wrapped in an OpenMAX
IL library, which is wrapped again by gst-openmax. Each layer is
different. The problem here is the OMX layer, which is often
ill-conceived.

First of all, you have to remember that whatever OMX is supposed to
provide, that doesn't apply to camera; you can argue that there's some
value in audio/video encoding/decoding, as the interfaces are very
simple and easy to standardize, but that's not the case with camera. I
haven't worked with OMX camera interfaces, but AFAIK it's very
incomplete and vendors have to implement their own interfaces, which
defeats the purpose of OMX. So OMX provides nothing in the camera
case.

Secondly, there's no OMX kernel interface. You still need something
between kernel and user-space, and the only established interface is v4l2.
So, even if you choose OMX in user-space, the sensible choice in
kernel-space is v4l2; otherwise you would end up with some custom
interface, which is never good.

And third, as Laurent already pointed out: OpenMAX is _not_ open. The
community has no say in what happens; everything is decided by a
consortium, and you need to pay money to be in it, to access their
bugzilla, to subscribe to their mailing lists, and to get access to
their conformance test.

If you forget all the marketing mumbo jumbo about OMX, at the end of the
day what is provided is a bunch of headers (and a document explaining
how to use them). We (the Linux community) can come up with a bunch of
headers too; in fact, we already do much more than that with v4l2. The
only part missing is encoders/decoders, which if needed could be added
very easily (Samsung already does, AFAIK). Right?

Cheers.

-- 
Felipe Contreras


Re: [RFC/PATCH 0/1] New subdev sensor operation g_interface_parms

2011-02-26 Thread Guennadi Liakhovetski
On Sat, 26 Feb 2011, Hans Verkuil wrote:

 On Friday, February 25, 2011 19:23:43 Sakari Ailus wrote:
  Hi Guennadi and others,
  
  Apologies for the late reply...
  
  Guennadi Liakhovetski wrote:
   On Wed, 23 Feb 2011, Hans Verkuil wrote:
   
   On Tuesday, February 22, 2011 22:42:58 Sylwester Nawrocki wrote:
   Clock values are often being rounded at runtime and do not always reflect
   exactly the numbers fixed at compile time. And negotiation could help to
   obtain exact values at both sensor and host side.
  
   The only static data I am concerned about are those that affect signal
   integrity. After thinking carefully about this I realized that there is
   really only one setting that is relevant to that: the sampling edge. The
   polarities do not matter in this.
   
   Ok, this is much better! I'm still not perfectly happy having to punish 
   all just for the sake of a couple of broken boards, but I can certainly 
   much better live with this, than with having to hard-code each and every 
   bit. Thanks, Hans!
  
  How much punishing would actually take place without autonegotiation?
  How many boards do we have in total? I counted around 26 of
  soc_camera_link declarations under arch/. Are there more?
  
  An example of hardware which does care about clock polarity is the
  N8[01]0. The parallel clock polarity is inverted since this actually
  does improve reliability. In an ideal hardware this likely wouldn't
  happen but sometimes the hardware is not exactly ideal. Both the sensor
  and the camera block support non-inverted and inverted clock signal.
  
  So at the very least it should be possible to provide this information
  in the board code even if both ends share multiple common values for
  parameters.
  
  There have been many comments on the dangers of the autonegotiation and
  I share those concerns. One of my main concerns is that it creates an
  unnecessary dependency from all the boards to the negotiation code, the
  behaviour of which may not change.

Sorry, I didn't want to comment on this... But to me this sounds like a void
argument... Yes, there are _many_ inter-dependencies in the kernel, and if
you break code, something will stop working... What's new about that??? But
no, I do not want to continue this discussion endlessly...

 OK, let me summarize this and if there are no objections then Stan can start
 implementing this.
 
 1) We need two subdev ops: one reports the bus config capabilities and one
 that sets it up. Note that these ops should be core ops since this
 functionality is relevant for both sensors and video receive/transmit
 devices.
 
 2) The clock sampling edge and polarity should not be negotiated but must be
 set from board code for both subdevs and host. In the future this might even
 require a callback with the clock frequency as argument.
 
 3) We probably need a utility function that given the host and subdev
 capabilities will return the required subdev/host settings.
 
 4) soc-camera should switch to these new ops.

...it only remains to find who will do this ;)

So, I'm in the minority here, if we don't count all those X systems
successfully using soc-camera with its evil auto-negotiation. If you just
decide to do this and push the changes - sure, there's nothing I can do
against that. But if you decide to postpone a final decision on this until
we meet personally and will not have to circulate the same arguments 100
times - just because the delay is shorter - maybe we can find a solution
that will keep everyone happy.

 Of course, we also need MIPI support in this API. The same considerations
 apply to MIPI as to the parallel bus: settings that depend on the hardware
 board design should come from board code, others can be negotiated. Since I
 know next to nothing about MIPI I will leave that to the experts...
 
 One thing I am not sure about is if we want separate ops for parallel bus and
 MIPI, or if we merge them. I am leaning towards separate ops as I think that
 might be easier to implement.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Felipe Contreras
Hi,

On Thu, Feb 24, 2011 at 3:04 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
 2011/2/23 Sachin Gupta sachin.gu...@linaro.org:

  The imaging coprocessor in today's platforms has a general-purpose DSP
  attached to it. I have seen some work being done to use this DSP for
  graphics/audio processing in case the camera use case is not being tried, or
  also if the camera use cases do not consume the full bandwidth of this
  DSP. I am not sure how v4l2 would fit in such an architecture,

 Earlier in this thread I discussed TI's DSPbridge.

 In drivers/staging/tidspbridge
 http://omappedia.org/wiki/DSPBridge_Project
 you find the TI hackers happy at work with providing a DSP accelerator
 subsystem.

 Isn't it possible for a V4L2 component to use this interface (or something
 more evolved, generic) as backend for assorted DSP offloading?

Yes it is, and it has been part of my to-do list for some time now.

 So using one kernel framework does not exclude using another one
 at the same time. Whereas something like DSPbridge will load firmware
 into DSP accelerators and provide control/datapath for that, this can
 in turn be used by some camera or codec which in turn presents a
 V4L2 or ALSA interface.

 Yes, something along those lines can be done.

 While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP
 instead.

 The hardest part will be to identify the missing V4L2 API pieces and design
 and add them. I don't think the actual driver code will be particularly hard.
 It should be nothing more than a thin front-end for the DSP. Of course, that's
 just theory at the moment :-)

The pieces are known already. I started a project called gst-dsp,
which I plan to split into the gst part and the part that
communicates with the DSP; the latter part can move to the kernel side
with a v4l2 interface.

It's easier to identify the code in the patches for FFmpeg:
http://article.gmane.org/gmane.comp.video.ffmpeg.devel/116798

 The problem is that someone has to do the actual work for the initial driver.
 And I expect that it will be a substantial amount of work. Future drivers
 should be *much* easier, though.

 A good argument for doing this work is that this API can hide which parts of
 the video subsystem are hardware and which are software. The application
 really doesn't care how it is organized. What is done in hardware on one SoC
 might be done on a DSP instead on another SoC. But the end result is pretty
 much the same.

Exactly.

-- 
Felipe Contreras


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Felipe Contreras
On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart
laurent.pinch...@ideasonboard.com wrote:
  Perhaps GStreamer experts would like to comment on the future plans ahead
  for zero copying/IPC and low power HW use cases? Could Gstreamer adapt
  some ideas from OMX IL making OMX IL obsolete?

 perhaps OMX should adapt some of the ideas from GStreamer ;-)

 I'd very much like to see GStreamer (or something else, maybe lower level, but
 community-maintained) replace OMX.

Yes, it would be great to have something that wraps all the hardware
acceleration and could have support for software codecs too, all in a
standard interface. It would also be great if this interface would be
used in the upper layers like GStreamer, VLC, etc. Kind of what OMX
was supposed to be, but open [1].

Oh wait, I'm describing FFmpeg :) (supports v4l2, VA-API, VDPAU,
DirectX, and soon OMAP3 DSP)

Cheers.

[1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&target=gst-openmax.png

-- 
Felipe Contreras


Re: [RFC/PATCH 0/1] New subdev sensor operation g_interface_parms

2011-02-26 Thread Hans Verkuil
On Saturday, February 26, 2011 14:14:29 Guennadi Liakhovetski wrote:
 On Sat, 26 Feb 2011, Hans Verkuil wrote:
 
  On Friday, February 25, 2011 19:23:43 Sakari Ailus wrote:
   Hi Guennadi and others,
   
   Apologies for the late reply...
   
   Guennadi Liakhovetski wrote:
On Wed, 23 Feb 2011, Hans Verkuil wrote:

On Tuesday, February 22, 2011 22:42:58 Sylwester Nawrocki wrote:
 Clock values are often being rounded at runtime and do not always reflect
 exactly the numbers fixed at compile time. And negotiation could help to
 obtain exact values at both sensor and host side.
    
 The only static data I am concerned about are those that affect signal
 integrity. After thinking carefully about this I realized that there is
 really only one setting that is relevant to that: the sampling edge. The
 polarities do not matter in this.
 
 Ok, this is much better! I'm still not perfectly happy having to punish 
 all just for the sake of a couple of broken boards, but I can certainly 
 much better live with this, than with having to hard-code each and every 
 bit. Thanks, Hans!
   
   How much punishing would actually take place without autonegotiation?
   How many boards do we have in total? I counted around 26 of
   soc_camera_link declarations under arch/. Are there more?
   
   An example of hardware which does care about clock polarity is the
   N8[01]0. The parallel clock polarity is inverted since this actually
   does improve reliability. In an ideal hardware this likely wouldn't
   happen but sometimes the hardware is not exactly ideal. Both the sensor
   and the camera block support non-inverted and inverted clock signal.
   
   So at the very least it should be possible to provide this information
   in the board code even if both ends share multiple common values for
   parameters.
   
   There have been many comments on the dangers of the autonegotiation and
   I share those concerns. One of my main concerns is that it creates an
   unnecessary dependency from all the boards to the negotiation code, the
   behaviour of which may not change.
 
 Sorry, I didn't want to comment on this... But to me this sounds like a void
 argument... Yes, there are _many_ inter-dependencies in the kernel, and if
 you break code, something will stop working... What's new about that??? But
 no, I do not want to continue this discussion endlessly...
 
  OK, let me summarize this and if there are no objections then Stan can start
  implementing this.
  
  1) We need two subdev ops: one reports the bus config capabilities and one
  that sets it up. Note that these ops should be core ops since this
  functionality is relevant for both sensors and video receive/transmit
  devices.
  
  2) The clock sampling edge and polarity should not be negotiated but must be
  set from board code for both subdevs and host. In the future this might even
  require a callback with the clock frequency as argument.
  
  3) We probably need a utility function that given the host and subdev
  capabilities will return the required subdev/host settings.
  
  4) soc-camera should switch to these new ops.
 
 ...it only remains to find who will do this ;)
 
 So, I'm in the minority here, if we don't count all those X systems
 successfully using soc-camera with its evil auto-negotiation. If you just
 decide to do this and push the changes - sure, there's nothing I can do
 against that. But if you decide to postpone a final decision on this until
 we meet personally and will not have to circulate the same arguments 100
 times - just because the delay is shorter - maybe we can find a solution
 that will keep everyone happy.

No, I am no longer willing to postpone this. Sorry. Discussing this in a
brainstorm meeting or whatever won't bring us any closer. We did that in
the last Helsinki meeting already. Heck, we've gone over this for a year and a
half now, if not more. The arguments haven't changed in all that time. Enough
is enough.

Let Stan implement the new subdev core ops (Stan, please confirm that you can
work on this!), then we can use it in soc-camera for everything except the
clock. The final step will be to remove the clock negotiation from soc-camera.

By the time we are ready for that final step we'll see who can do this. It's
several months in the future anyway.

Regards,

Hans

  Of course, we also need MIPI support in this API. The same considerations
  apply to MIPI as to the parallel bus: settings that depend on the hardware
  board design should come from board code, others can be negotiated. Since I
  know next to nothing about MIPI I will leave that to the experts...
  
  One thing I am not sure about is if we want separate ops for parallel bus
  and MIPI, or if we merge them. I am leaning towards separate ops as I think
  that might be easier to implement.
 
 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance 

Re: [RFC] snapshot mode, flash capabilities and control

2011-02-26 Thread Sylwester Nawrocki
On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
 On Sat, 26 Feb 2011, Hans Verkuil wrote:
 
 On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:

 snip

 configure the sensor to react on an external trigger provided by the flash
 controller is needed, and that could be a control on the flash sub-device.
 What we would probably miss is a way to issue a STREAMON with a number of
 frames to capture. A new ioctl is probably needed there. Maybe that would be
 an opportunity to create a new stream-control ioctl that could replace
 STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
 operation, and easily map STREAMON and STREAMOFF to the new ioctl in
 video_ioctl2 internally).

 How would this be different from queueing n frames (in total; count
 dequeueing, too) and issuing streamon? --- Except that when the last frame
 is processed the pipeline could be stopped already before issuing STREAMOFF.
 That does indeed have some benefits. Something else?

 Well, you usually see in your host driver that the videobuffer queue is
 empty (no more free buffers are available), so you stop streaming
 immediately, too.

 This probably assumes that the host driver knows that this is a special
 queue? Because in general drivers will simply keep capturing in the last
 buffer and not release it to userspace until a new buffer is queued.
 
 Yes, I know about this spec requirement, but I also know that not all
 drivers do that, and not everyone is happy about that requirement :)

Right, similarly a v4l2 output device does not release the last buffer
to userland and keeps sending its content until a new buffer is queued to the
driver. But in the case of a capture device the requirement is a pain, since
it only drains the power source when, from the user's point of view, video
capture is stopped. It also limits the minimum number of buffers that could
be used in a preview pipeline.

In still capture mode (single shot) we might want to use only one buffer, so
adhering to the requirement would not allow this, would it?

 
 That said, it wouldn't be hard to add some flag somewhere that puts a queue
 in a 'stop streaming on last buffer capture' mode.
 
 No, it wouldn't... But TBH this doesn't seem like the most elegant and
 complete solution. Maybe we have to think a bit more about it - which
 consequences switching into the snapshot mode has on the host driver,
 apart from stopping after N frames. So, this is one of the possibilities,
 not sure if the best one.
 
 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/



Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Hans Verkuil
On Saturday, February 26, 2011 14:38:50 Felipe Contreras wrote:
 On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart
 laurent.pinch...@ideasonboard.com wrote:
   Perhaps GStreamer experts would like to comment on the future plans ahead
   for zero copying/IPC and low power HW use cases? Could Gstreamer adapt
   some ideas from OMX IL making OMX IL obsolete?
 
  perhaps OMX should adapt some of the ideas from GStreamer ;-)
 
  I'd very much like to see GStreamer (or something else, maybe lower level,
  but community-maintained) replace OMX.
 
 Yes, it would be great to have something that wraps all the hardware
 acceleration and could have support for software codecs too, all in a
 standard interface. It would also be great if this interface would be
 used in the upper layers like GStreamer, VLC, etc. Kind of what OMX
 was supposed to be, but open [1].
 
 Oh wait, I'm describing FFmpeg :) (supports v4l2, VA-API, VDPAU,
 DirectX, and soon OMAP3 DSP)
 
 Cheers.
 
 [1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&target=gst-openmax.png
 
 

Are there any gstreamer/linaro/etc core developers attending the ELC in San
Francisco in April? I think it might be useful to get together before, during
or after the conference and see if we can turn this discussion into something
more concrete.

It seems to me that there is an overall agreement on what should be done, but
that we are far from anything concrete.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


Re: [RFC] snapshot mode, flash capabilities and control

2011-02-26 Thread Hans Verkuil
On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
 On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
  On Sat, 26 Feb 2011, Hans Verkuil wrote:
  
  On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
 
  snip
 
  configure the sensor to react on an external trigger provided by the flash
  controller is needed, and that could be a control on the flash sub-device.
  What we would probably miss is a way to issue a STREAMON with a number of
  frames to capture. A new ioctl is probably needed there. Maybe that would be
  an opportunity to create a new stream-control ioctl that could replace
  STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
  operation, and easily map STREAMON and STREAMOFF to the new ioctl in
  video_ioctl2 internally).
 
  How would this be different from queueing n frames (in total; count
  dequeueing, too) and issuing streamon? --- Except that when the last frame
  is processed the pipeline could be stopped already before issuing STREAMOFF.
  That does indeed have some benefits. Something else?
 
  Well, you usually see in your host driver that the videobuffer queue is
  empty (no more free buffers are available), so you stop streaming
  immediately, too.
 
  This probably assumes that the host driver knows that this is a special
  queue? Because in general drivers will simply keep capturing in the last
  buffer and not release it to userspace until a new buffer is queued.
  
  Yes, I know about this spec requirement, but I also know that not all
  drivers do that, and not everyone is happy about that requirement :)
 
 Right, similarly a v4l2 output device does not release the last buffer
 to userland and keeps sending its content until a new buffer is queued to
 the driver. But in the case of a capture device the requirement is a pain,
 since it only drains the power source when, from the user's point of view,
 video capture is stopped. It also limits the minimum number of buffers that
 could be used in a preview pipeline.

No, we can't change this. We can of course add some setting that will
explicitly request different behavior.

The reason this is done this way comes from the traditional TV/webcam viewing
apps. If for some reason the app can't keep up with the capture rate, then
frames should just be dropped silently. All apps assume this behavior. In a
normal user environment this scenario is perfectly normal (e.g. you use a
webcam app, then do a CPU intensive make run).

I agree that you might want different behavior in an embedded environment,
but that should be requested explicitly.
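
In driver terms the traditional behavior boils down to something like the
following (an illustrative sketch; my_dev and my_buffer are made-up
structures, not code from any real driver):

#include <linux/list.h>

struct my_buffer {
	struct list_head list;
	/* ... videobuf state, payload ... */
};

struct my_dev {
	struct list_head queued;	/* buffers owned by the hardware */
};

static void frame_done(struct my_dev *dev)
{
	struct my_buffer *buf =
		list_first_entry(&dev->queued, struct my_buffer, list);

	if (list_is_singular(&dev->queued)) {
		/* App too slow, no spare buffer queued: keep capturing
		 * into this one, silently dropping the frame we just
		 * wrote. */
		return;
	}
	list_del(&buf->list);
	/* ... mark buf done and wake up a reader blocked in DQBUF; the
	 * hardware continues into the next queued buffer ... */
}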

 In still capture mode (single shot) we might want to use only one buffer, so
 adhering to the requirement would not allow this, would it?

That's one of the problems with still capture mode, yes.

I have not yet seen a proposal for this that I really like. Most are too
specific to this use-case (snapshot) and I'd like to see something more
general.

Regards,

Hans

 
  
  That said, it wouldn't be hard to add some flag somewhere that puts a queue
  in a 'stop streaming on last buffer capture' mode.
  
  No, it wouldn't... But TBH this doesn't seem like the most elegant and
  complete solution. Maybe we have to think a bit more about it - which
  consequences switching into the snapshot mode has on the host driver,
  apart from stopping after N frames. So, this is one of the possibilities,
  not sure if the best one.
  
  Thanks
  Guennadi
  ---
  Guennadi Liakhovetski, Ph.D.
  Freelance Open-Source Software Developer
  http://www.open-technology.de/
 
 

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


Re: [RFC/PATCH 0/1] New subdev sensor operation g_interface_parms

2011-02-26 Thread Sylwester Nawrocki
On 02/26/2011 01:50 PM, Hans Verkuil wrote:
 On Friday, February 25, 2011 19:23:43 Sakari Ailus wrote:
 Hi Guennadi and others,

 Apologies for the late reply...

 Guennadi Liakhovetski wrote:
 On Wed, 23 Feb 2011, Hans Verkuil wrote:

 On Tuesday, February 22, 2011 22:42:58 Sylwester Nawrocki wrote:
 Clock values are often being rounded at runtime and do not always reflect
 exactly the numbers fixed at compile time. And negotiation could help to
 obtain exact values at both sensor and host side.

 The only static data I am concerned about are those that affect signal
 integrity. After thinking carefully about this I realized that there is
 really only one setting that is relevant to that: the sampling edge. The
 polarities do not matter in this.

 Ok, this is much better! I'm still not perfectly happy having to punish
 all just for the sake of a couple of broken boards, but I can certainly
 much better live with this, than with having to hard-code each and every
 bit. Thanks, Hans!

 How much punishing would actually take place without autonegotiation?
 How many boards do we have in total? I counted around 26 of
 soc_camera_link declarations under arch/. Are there more?

 An example of hardware which does care about clock polarity is the
 N8[01]0. The parallel clock polarity is inverted since this actually
 does improve reliability. In an ideal hardware this likely wouldn't
 happen but sometimes the hardware is not exactly ideal. Both the sensor
 and the camera block support non-inverted and inverted clock signal.

 So at the very least it should be possible to provide this information
 in the board code even if both ends share multiple common values for
 parameters.

 There have been many comments on the dangers of the autonegotiation and
 I share those concerns. One of my main concerns is that it creates an
 unnecessary dependency from all the boards to the negotiation code, the
 behaviour of which may not change.
 
 OK, let me summarize this and if there are no objections then Stan can start
 implementing this.
 
 1) We need two subdev ops: one reports the bus config capabilities and one
 that sets it up. Note that these ops should be core ops since this
 functionality is relevant for both sensors and video receive/transmit
 devices.

Sounds reasonable. In the case of a MIPI-CSI receiver as a separate subdev, I
assume it would allow retrieving settings from the sensor subdev and applying
them to the MIPI-CSI receiver.

 
 2) The clock sampling edge and polarity should not be negotiated but must be
 set from board code for both subdevs and host. In the future this might even
 require a callback with the clock frequency as argument.
 
 3) We probably need a utility function that given the host and subdev
 capabilities will return the required subdev/host settings.
 
 4) soc-camera should switch to these new ops.
 
 Of course, we also need MIPI support in this API. The same considerations
 apply to MIPI as to the parallel bus: settings that depend on the hardware
 board design should come from board code, others can be negotiated. Since I
 know next to nothing about MIPI I will leave that to the experts...
 
 One thing I am not sure about is if we want separate ops for parallel bus and
 MIPI, or if we merge them. I am leaning towards separate ops as I think that
 might be easier to implement.

I suppose it wouldn't hurt to have the same, parametrized ops for both the
parallel and serial bus, just like in Stan's original RFC.

 
 Regards,
 
   Hans
 



Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Kyungmin Park
On Sat, Feb 26, 2011 at 2:22 AM, Linus Walleij linus.wall...@linaro.org wrote:
 2011/2/24 Edward Hervey bilb...@gmail.com:

  What *needs* to be solved is an API for data allocation/passing at the
 kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
 userspace (like GStreamer) can pass around, monitor and know about.

 I think the patches sent out from ST-Ericsson's Johan Mossberg to
 linux-mm for HWMEM (hardware memory) deal exactly with buffer
 passing, pinning of buffers and so on. The CMA (Contiguous Memory
 Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
 so CMA provides buffers and HWMEM passes them around.

 Johan, when you re-spin the HWMEM patchset, can you include
 linaro-dev and linux-media in the CC? I think there is *much* interest
 in this mechanism, people just don't know from the name what it
 really does. Maybe it should be called mediamem or something
 instead...

To Marek,

Can you also update the CMA status and plan?

The important thing is that Russell still doesn't agree with CMA, since it
doesn't solve the issue of the different ARM memory attribute mappings. Of
course, there's no way to solve the ARM issue.

We really need the memory solution for multimedia.

Thank you,
Kyungmin Park



 Yours,
 Linus Walleij


Re: [RFC/PATCH 0/1] New subdev sensor operation g_interface_parms

2011-02-26 Thread Laurent Pinchart
Hi Hans,

On Saturday 26 February 2011 13:50:12 Hans Verkuil wrote:
 On Friday, February 25, 2011 19:23:43 Sakari Ailus wrote:
  Guennadi Liakhovetski wrote:
   On Wed, 23 Feb 2011, Hans Verkuil wrote:
   On Tuesday, February 22, 2011 22:42:58 Sylwester Nawrocki wrote:
   Clock values are often being rounded at runtime and do not always
   reflect exactly the numbers fixed at compile time. And negotiation
   could help to obtain exact values at both sensor and host side.
   
   The only static data I am concerned about are those that affect signal
   integrity. After thinking carefully about this I realized that there
   is really only one setting that is relevant to that: the sampling
   edge. The polarities do not matter in this.
   
   Ok, this is much better! I'm still not perfectly happy having to punish
   all just for the sake of a couple of broken boards, but I can certainly
   much better live with this, than with having to hard-code each and
   every bit. Thanks, Hans!
  
  How much punishing would actually take place without autonegotiation?
  How many boards do we have in total? I counted around 26 of
  soc_camera_link declarations under arch/. Are there more?
  
  An example of hardware which does care about clock polarity is the
  N8[01]0. The parallel clock polarity is inverted since this actually
  does improve reliability. In an ideal hardware this likely wouldn't
  happen but sometimes the hardware is not exactly ideal. Both the sensor
  and the camera block support non-inverted and inverted clock signal.
  
  So at the very least it should be possible to provide this information
  in the board code even if both ends share multiple common values for
  parameters.
  
  There have been many comments on the dangers of the autonegotiation and
  I share those concerns. One of my main concerns is that it creates an
  unnecessary dependency from all the boards to the negotiation code, the
  behaviour of which may not change.
 
 OK, let me summarize this and if there are no objections then Stan can
 start implementing this.
 
 1) We need two subdev ops: one reports the bus config capabilities and one
 that sets it up. Note that these ops should be core ops since this
 functionality is relevant for both sensors and video receive/transmit
 devices.

Could you elaborate on this? Stan's original proposal is to report the subdev
configuration so that the host can configure itself at streamon time. Why do
we need an operation to set up the subdev?

 2) The clock sampling edge and polarity should not be negotiated but must
 be set from board code for both subdevs and host. In the future this might
 even require a callback with the clock frequency as argument.
 
 3) We probably need a utility function that given the host and subdev
 capabilities will return the required subdev/host settings.
 
 4) soc-camera should switch to these new ops.
 
 Of course, we also need MIPI support in this API. The same considerations
 apply to MIPI as to the parallel bus: settings that depend on the hardware
 board design should come from board code, others can be negotiated. Since
 I know next to nothing about MIPI I will leave that to the experts...
 
 One thing I am not sure about is if we want separate ops for parallel bus
 and MIPI, or if we merge them. I am leaning towards separate ops as I
 think that might be easier to implement.

-- 
Regards,

Laurent Pinchart


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Edward Hervey
On Sat, 2011-02-26 at 14:47 +0100, Hans Verkuil wrote:
 On Saturday, February 26, 2011 14:38:50 Felipe Contreras wrote:
  On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart
  laurent.pinch...@ideasonboard.com wrote:
Perhaps GStreamer experts would like to comment on the future plans ahead
for zero copying/IPC and low power HW use cases? Could Gstreamer adapt
some ideas from OMX IL making OMX IL obsolete?
  
   perhaps OMX should adapt some of the ideas from GStreamer ;-)
  
   I'd very much like to see GStreamer (or something else, maybe lower level,
   but community-maintained) replace OMX.
  
  Yes, it would be great to have something that wraps all the hardware
  acceleration and could have support for software codecs too, all in a
  standard interface. It would also be great if this interface would be
  used in the upper layers like GStreamer, VLC, etc. Kind of what OMX
  was supposed to be, but open [1].
  
  Oh wait, I'm describing FFmpeg :) (supports v4l2, VA-API, VDPAU,
  DirectX, and soon OMAP3 DSP)
  
  Cheers.
  
  [1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&target=gst-openmax.png
  
  
 
 Are there any gstreamer/linaro/etc core developers attending the ELC in San
 Francisco in April? I think it might be useful to get together before, during
 or after the conference and see if we can turn this discussion into something
 more concrete.
 
 It seems to me that there is an overall agreement on what should be done, but
 that we are far from anything concrete.
 

  I will be there and this was definitely a topic I intended to talk about.
  See you there.

 Edward

 Regards,
 
   Hans
 




Re: [RFC] snapshot mode, flash capabilities and control

2011-02-26 Thread Sylwester Nawrocki
Hi Hans,

On 02/26/2011 02:56 PM, Hans Verkuil wrote:
 On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
 On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
 On Sat, 26 Feb 2011, Hans Verkuil wrote:

 On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:

 snip

 configure the sensor to react on an external trigger provided by the flash
 controller is needed, and that could be a control on the flash sub-device.
 What we would probably miss is a way to issue a STREAMON with a number of
 frames to capture. A new ioctl is probably needed there. Maybe that would be
 an opportunity to create a new stream-control ioctl that could replace
 STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
 operation, and easily map STREAMON and STREAMOFF to the new ioctl in
 video_ioctl2 internally).

 How would this be different from queueing n frames (in total; count
 dequeueing, too) and issuing streamon? --- Except that when the last frame
 is processed the pipeline could be stopped already before issuing STREAMOFF.
 That does indeed have some benefits. Something else?

 Well, you usually see in your host driver that the videobuffer queue is
 empty (no more free buffers are available), so you stop streaming
 immediately, too.

 This probably assumes that the host driver knows that this is a special
 queue? Because in general drivers will simply keep capturing in the last
 buffer and not release it to userspace until a new buffer is queued.

 Yes, I know about this spec requirement, but I also know that not all
 drivers do that, and not everyone is happy about that requirement :)

 Right, similarly a v4l2 output device does not release the last buffer
 to userland and keeps sending its content until a new buffer is queued to
 the driver. But in the case of a capture device the requirement is a pain,
 since it only drains the power source when, from the user's point of view,
 video capture is stopped. It also limits the minimum number of buffers that
 could be used in a preview pipeline.
 
 No, we can't change this. We can of course add some setting that will
 explicitly request different behavior.
 
 The reason this is done this way comes from the traditional TV/webcam
 viewing apps. If for some reason the app can't keep up with the capture
 rate, then frames should just be dropped silently. All apps assume this
 behavior. In a normal user environment this scenario is perfectly normal
 (e.g. you use a webcam app, then do a CPU intensive make run).

All right, I have nothing against extra flags, e.g. in REQBUFS, to define a
specific behavior.

Perhaps I didn't express myself clearly. I was thinking only about stopping
the capture/DMA engine when there are no more empty buffers, and releasing
the last buffer rather than keeping it in the driver. Then, when a subsequent
buffer is queued by the app, the driver would restart the capture engine.
Streaming as seen from user space is not stopped. This just corresponds to a
frame-dropping mode; discarding just happens earlier in the H/W pipeline. It
is no different from the app's POV than endlessly overwriting memory with new
frames.
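
A sketch of that stop-and-restart variant (hypothetical driver code again;
my_dev, my_buffer and the *_capture_hw() helpers are made up):

#include <linux/list.h>
#include <linux/types.h>

struct my_buffer { struct list_head list; };
struct my_dev {
	struct list_head queued;
	bool streaming;
};

static void stop_capture_hw(struct my_dev *dev) { /* program DMA off */ }
static void start_capture_hw(struct my_dev *dev) { /* program DMA on */ }

static void frame_done(struct my_dev *dev)
{
	struct my_buffer *buf =
		list_first_entry(&dev->queued, struct my_buffer, list);

	list_del(&buf->list);
	/* ... mark buf done, wake up readers ... */
	if (list_empty(&dev->queued))
		stop_capture_hw(dev);	/* idle until a buffer shows up */
}

static void buf_queue(struct my_dev *dev, struct my_buffer *buf)
{
	bool restart = list_empty(&dev->queued);

	list_add_tail(&buf->list, &dev->queued);
	if (restart && dev->streaming)
		start_capture_hw(dev);	/* userspace never notices */
}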

BTW, in the STREAMON ioctl documentation we have the following requirement:

... Accordingly the output hardware is disabled, no video signal is produced
until VIDIOC_STREAMON has been called. *The ioctl will succeed only when at
least one output buffer is in the incoming queue*.

It has been discussed that the memory-to-memory interface should be an
exception from the at-least-one-buffer requirement on an output queue for
STREAMON to succeed. However, I see no good way to implement that in
videobuf2. Right now there is a relevant check in vb2_streamon. There were
opinions that the above restriction causes more harm than good. I'm not sure
if we should keep it.
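
Conceptually the check looks like this (a paraphrase of the vb2_streamon()
logic, not the literal source):

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/videodev2.h>
#include <media/videobuf2-core.h>

/* STREAMON on an output queue fails unless at least one buffer
 * has been queued. */
static int streamon_allowed(struct vb2_queue *q)
{
	if (V4L2_TYPE_IS_OUTPUT(q->type) && list_empty(&q->queued_list))
		return -EINVAL;
	return 0;
}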

I'm working on mem-to-mem interface DocBook documentation and it would be nice
to have this clarified.


Regards,
Sylwester


[cron job] v4l-dvb daily build: ERRORS

2011-02-26 Thread Hans Verkuil
This message is generated daily by a cron job that builds v4l-dvb for
the kernels and architectures in the list below.

Results of the daily build of v4l-dvb:

date:Sat Feb 26 19:00:48 CET 2011
git master:   1b59be2a6cdcb5a12e18d8315c07c94a624de48f
git media-master: gcc version:  i686-linux-gcc (GCC) 4.5.1
host hardware:x86_64
host os:  2.6.32.5

linux-git-armv5: WARNINGS
linux-git-armv5-davinci: WARNINGS
linux-git-armv5-ixp: WARNINGS
linux-git-armv5-omap2: WARNINGS
linux-git-i686: WARNINGS
linux-git-m32r: WARNINGS
linux-git-mips: WARNINGS
linux-git-powerpc64: WARNINGS
linux-git-x86_64: WARNINGS
linux-2.6.31.12-i686: ERRORS
linux-2.6.32.6-i686: ERRORS
linux-2.6.33-i686: ERRORS
linux-2.6.34-i686: ERRORS
linux-2.6.35.3-i686: ERRORS
linux-2.6.36-i686: ERRORS
linux-2.6.37-i686: ERRORS
linux-2.6.31.12-x86_64: ERRORS
linux-2.6.32.6-x86_64: ERRORS
linux-2.6.33-x86_64: ERRORS
linux-2.6.34-x86_64: ERRORS
linux-2.6.35.3-x86_64: ERRORS
linux-2.6.36-x86_64: ERRORS
linux-2.6.37-x86_64: ERRORS
spec-git: OK
sparse: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Saturday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Saturday.tar.bz2

The V4L-DVB specification from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/media.html


Request clarification on videobuf irqlock and vb_lock usage

2011-02-26 Thread Ben Collins
I'm trying to clean up some deadlocks and random crashes in my v4l2 driver
(solo6x10), and I cannot find definitive documentation on the correct usage
of irqlock and vb_lock in a driver that uses videobuf.

When and where should I be using either of these to ensure I work
synchronously with the videobuf core?

--
Bluecherry: http://www.bluecherrydvr.com/
SwissDisk : http://www.swissdisk.com/
Ubuntu: http://www.ubuntu.com/
My Blog   : http://ben-collins.blogspot.com/



Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Nicolas Pitre
On Sat, 26 Feb 2011, Kyungmin Park wrote:

 On Sat, Feb 26, 2011 at 2:22 AM, Linus Walleij linus.wall...@linaro.org 
 wrote:
  2011/2/24 Edward Hervey bilb...@gmail.com:
 
   What *needs* to be solved is an API for data allocation/passing at the
  kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
  userspace (like GStreamer) can pass around, monitor and know about.
 
  I think the patches sent out from ST-Ericsson's Johan Mossberg to
  linux-mm for HWMEM (hardware memory) deal exactly with buffer
  passing, pinning of buffers and so on. The CMA (Contiguous Memory
  Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
  so CMA provides buffers and HWMEM passes them around.
 
  Johan, when you re-spin the HWMEM patchset, can you include
  linaro-dev and linux-media in the CC? I think there is *much* interest
  in this mechanism, people just don't know from the name what it
  really does. Maybe it should be called mediamem or something
  instead...
 
 To Marek,
 
 Can you also update the CMA status and plan?
 
 The important thing is that Russell still doesn't agree with CMA, since it
 doesn't solve the issue of the different ARM memory attribute mappings. Of
 course, there's no way to solve the ARM issue.

There are at least two ways to solve that issue, and I have suggested 
both on the lak mailing list already.

1) Make the direct mapped kernel memory usable by CMA mapped through a 
   page-sized two-level page table mapping which would allow for solving 
   the attributes conflict on a per page basis.

2) Use highmem more aggressively and allow only highmem pages for CMA.
   This makes it quite easy to ensure that the target page(s) for CMA have
   no kernel mappings and therefore no attribute conflict.  Furthermore,
   highmem pages are always relocatable for making physically contiguous
   segments available.


Nicolas

Re: Request clarification on videobuf irqlock and vb_lock usage

2011-02-26 Thread Andy Walls
On Sat, 2011-02-26 at 13:57 -0500, Ben Collins wrote:
 I'm trying to clean up some deadlocks and random crashes in my v4l2
 driver (solo6x10) and I cannot find definitive documentation on the
 correct usage of irqlock and vb_lock in a driver that uses videobuf.

Here is the best documentation on videobuf(1) that I ever saw:
http://git.linuxtv.org/media_tree.git?a=blob;f=Documentation/video4linux/videobuf;h=17a1f9abf260f39a44dee35bf7b72a0c66fd71fc;hb=df37e8479875c486d668fdf5bf65dba41422dd76

And here is the bad news about videobuf(1):
http://linuxtv.org/downloads/presentations/summit_jun_2010/20100614-v4l2_summit-videobuf.pdf


Since videobuf2 is now in the bleeding edge kernel, you should look at
using it:

http://lwn.net/Articles/420512/

http://linuxtv.org/downloads/presentations/summit_jun_2010/Videobuf_Helsinki_June2010.pdf


Tonight, I've actually been examining using videobuf2 for cx18 myself.


 When and where should I be using either of these to ensure I work
 synchronously with the videobuf core?

Maybe the above briefs, which detail some of the problems with
videobuf(1), can shed some light on their semantics.  IIRC,
Pawel's brief highlighted that iolock was really overloaded.
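
For what it's worth, the pattern most videobuf(1) drivers follow is roughly
this (a sketch loosely modeled on in-tree drivers; my_dev and its fields are
made up): vb_lock serializes the ioctl paths inside videobuf itself, while
the driver-supplied irqlock protects the buffer lists shared with the
interrupt handler.

#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/time.h>
#include <media/videobuf-core.h>

struct my_dev {
	spinlock_t irqlock;		/* passed to videobuf at queue init */
	struct list_head active;	/* buffers the hardware is filling */
};

static irqreturn_t my_irq_handler(int irq, void *data)
{
	struct my_dev *dev = data;
	struct videobuf_buffer *vb;
	unsigned long flags;

	spin_lock_irqsave(&dev->irqlock, flags);
	if (!list_empty(&dev->active)) {
		vb = list_first_entry(&dev->active,
				      struct videobuf_buffer, queue);
		list_del(&vb->queue);
		do_gettimeofday(&vb->ts);
		vb->state = VIDEOBUF_DONE;
		wake_up(&vb->done);
	}
	spin_unlock_irqrestore(&dev->irqlock, flags);
	return IRQ_HANDLED;
}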

Regards,
Andy

 --
 Bluecherry: http://www.bluecherrydvr.com/
 SwissDisk : http://www.swissdisk.com/
 Ubuntu: http://www.ubuntu.com/
 My Blog   : http://ben-collins.blogspot.com/




Hauppauge 950q issue

2011-02-26 Thread Kyle
I was wondering if someone might be able to figure out what’s wrong with
my HVR-950q. When I can get it to work, it works just fine, but it seems
that after a restart in Ubuntu 10.10 x64 it won’t work for quite a while
(unable to determine how long specifically, but in the realm of hours to
days). It will work fine under windows after multiple reboots, so I’m
wondering if the linux firmware isn’t unloading properly or something.
When I start up mythtv again I get the following info from dmesg:

[ 22.832293] xc5000: xc5000_init()
[ 22.837041] xc5000: xc5000_is_firmware_loaded() returns False id = 0x2000
[ 22.837044] xc5000: waiting for firmware upload (dvb-fe-xc5000-1.6.114.fw)…
[ 22.865012] xc5000: firmware read 12401 bytes.
[ 22.865015] xc5000: firmware uploading…
[ 22.865019] xc5000: xc5000_TunerReset()
[ 27.440072] 8:2:1: endpoint lacks sample rate attribute bit, cannot set.
[ 27.461883] 8:2:1: endpoint lacks sample rate attribute bit, cannot set.
[ 27.487738] 8:2:1: endpoint lacks sample rate attribute bit, cannot set.
[ 31.671477] xc5000: firmware upload complete…
[ 31.671493] xc5000: xc_initialize()
[ 36.074720] xc5000: *** ADC envelope (0-1023) = 65535
[ 36.079462] xc5000: *** Frequency error = 1023984 Hz
[ 36.084462] xc5000: *** Lock status (0-Wait, 1-Locked, 2-No-signal) = 65535
[ 36.094082] xc5000: *** HW: V0f.0f, FW: V0f.0f.
[ 36.098826] xc5000: *** Horizontal sync frequency = 31244 Hz
[ 36.103575] xc5000: *** Frame lines = 65535
[ 36.108319] xc5000: *** Quality (0:56dB) = 65535
[ 36.427140] xc5000: xc5000_is_firmware_loaded() returns True id = 0x
[ 36.427147] xc5000: xc5000_set_params() frequency=62900 (Hz)
[ 36.427151] xc5000: xc5000_set_params() ATSC
[ 36.427154] xc5000: xc5000_set_params() VSB modulation
[ 36.427158] xc5000: xc5000_set_params() frequency=62725 (compensated)
[ 36.427162] xc5000: xc_SetSignalSource(0) Source = ANTENNA
[ 38.510023] xc5000: xc_SetTVStandard(0x8002, 0x00c0)
[ 38.510029] xc5000: xc_SetTVStandard() Standard = DTV6
[ 42.900024] xc5000: xc_set_IF_frequency(freq_khz = 6000) freq_code = 0x1800
[ 45.130023] xc5000: xc_tune_channel(62725)
[ 45.130028] xc5000: xc_set_RF_frequency(62725)
[ 47.334695] xc5000: *** ADC envelope (0-1023) = 65535
[ 47.339438] xc5000: *** Frequency error = 1023984 Hz
[ 47.344186] xc5000: *** Lock status (0-Wait, 1-Locked, 2-No-signal) = 65535
[ 47.353683] xc5000: *** HW: V0f.0f, FW: V0f.0f.
[ 47.358427] xc5000: *** Horizontal sync frequency = 31244 Hz
[ 47.363175] xc5000: *** Frame lines = 65535
[ 47.367921] xc5000: *** Quality (0:56dB) = 65535

When the card does work right, the line "[ 47.344186] xc5000: *** Lock
status (0-Wait, 1-Locked, 2-No-signal) = 65535" doesn't end with 65535
(can't remember exactly what it ends with, because it's been a little while
since I've fiddled with this). Since it's 65535, that makes me wonder if
it's some sort of overflow or something. Any help would be greatly
appreciated!

-Kyle

