Re: RFCv2: Media controller proposal

2009-10-27 Thread Guennadi Liakhovetski
Hi

(repeating my preamble from a previous post)

This is a general comment on the media controller work as a whole: having
given a talk at ELC-E in Grenoble on soc-camera, I briefly mentioned a few
related RFCs, including this one. I got a couple of comments back, including
the following ones (which is to say, the opinions are not mine and may or may
not be relevant; I'm just fulfilling my promise to pass them on ;)):

1) what about DVB? Wouldn't they also benefit from such an API? I wasn't
able to answer the question of whether the DVB folks know about this, have
had a chance to take part in the discussion, and might eventually use this API.

2) what I am even less sure about is whether ALSA / ASoC have been mentioned
as possible users of the MC, or at least as possible sources of ideas. ASoC
has definitely been mentioned as an audio analog of soc-camera, so I'll be
looking at it - at least at its documentation - to see if I can borrow some
of its ideas :-)

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: RFCv2: Media controller proposal

2009-10-27 Thread Devin Heitmueller
On Tue, Oct 27, 2009 at 4:04 AM, Guennadi Liakhovetski
g.liakhovet...@gmx.de wrote:
 Hi

 (repeating my preamble from a previous post)

 This is a general comment on the media controller work as a whole: having
 given a talk at ELC-E in Grenoble on soc-camera, I briefly mentioned a few
 related RFCs, including this one. I got a couple of comments back, including
 the following ones (which is to say, the opinions are not mine and may or may
 not be relevant; I'm just fulfilling my promise to pass them on ;)):

 1) what about DVB? Wouldn't they also benefit from such an API? I wasn't
 able to answer the question of whether the DVB folks know about this, have
 had a chance to take part in the discussion, and might eventually use this API.

The extent to which DVB applies is that the DVB devices will appear in
the MC enumeration.  This will allow userland to see hybrid devices where
both DVB and analog are tied to the same tuner and cannot be used at the
same time.

 2) what I am even less sure about is whether ALSA / ASoC have been mentioned
 as possible users of the MC, or at least as possible sources of ideas. ASoC
 has definitely been mentioned as an audio analog of soc-camera, so I'll be
 looking at it - at least at its documentation - to see if I can borrow some
 of its ideas :-)

ALSA devices will definitely be available, although at this point I have
no reason to believe this will require changes to the ALSA code itself.
All of the changes involve enumeration within v4l to find the correct
ALSA device associated with the tuner and report the correct card number.
The ALSA case is actually my foremost concern with regards to the MC API,
since it will solve the problem of applications such as tvtime figuring
out which ALSA device to play back audio on.
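
To make that concrete, here is a purely illustrative userspace sketch. The
MC ioctl, structure, and constant names below are hypothetical placeholders
(the API is still being defined); only the ALSA call at the end is the real,
existing API.

#include <stdio.h>
#include <sys/ioctl.h>
#include <alsa/asoundlib.h>

/* Hypothetical placeholders for the not-yet-defined MC API: */
struct mc_entity {
    unsigned int id;    /* entity index, incremented to enumerate */
    unsigned int type;  /* kind of node this entity exposes */
    int alsa_card;      /* ALSA card number, valid for ALSA entities */
};
#define MC_IOC_ENUM_ENTITIES  _IOWR('M', 1, struct mc_entity)
#define MC_ENTITY_ALSA        3

static int open_tuner_pcm(int mc_fd, snd_pcm_t **pcm)
{
    struct mc_entity ent = { .id = 0 };
    char name[16];

    /* Walk the board's entities until we find the ALSA one. */
    while (ioctl(mc_fd, MC_IOC_ENUM_ENTITIES, &ent) == 0) {
        if (ent.type == MC_ENTITY_ALSA) {
            snprintf(name, sizeof(name), "hw:%d", ent.alsa_card);
            /* Real ALSA API: open the capture PCM on that card. */
            return snd_pcm_open(pcm, name, SND_PCM_STREAM_CAPTURE, 0);
        }
        ent.id++;
    }
    return -1;
}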

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


Re: RFCv2: Media controller proposal

2009-09-22 Thread Sakari Ailus

Mauro Carvalho Chehab wrote:

On Fri, 11 Sep 2009 22:15:15 +0200
Hans Verkuil hverk...@xs4all.nl wrote:


On Friday 11 September 2009 21:59:37 Mauro Carvalho Chehab wrote:

On Fri, 11 Sep 2009 21:23:44 +0200
Hans Verkuil hverk...@xs4all.nl wrote:

The second problem is that this will pollute the 'namespace' of a v4l device
node. Device drivers need to pass all those private ioctls to the right
sub-device. But they shouldn't have to care about that. If someone wants to
tweak the resizer (e.g. scaling coefficients), then pass it straight to the
resizer component.

Sorry, I missed your point here

Example: a sub-device can produce certain statistics. You want to have an
ioctl to obtain those statistics. If you call that through /dev/videoX, then
that main driver has to handle that ioctl in vidioc_default and pass it on
to the right subdev. So you have to write that vidioc_default handler,
know about the sub-devices that you have and which sub-device is linked to
the device node. You really don't want to have to do that. Especially not
when you are dealing with i2c devices that are loaded from platform code.
If a video encoder supports private ioctls, then an omap3 driver doesn't
want to know about that. Oh, and before you ask: just broadcasting that
ioctl is not a solution if you have multiple identical video encoders.


This can be as easy as reading from /sys/class/media/dsp:stat0/stats


In general, the H3A block producing the statistics is configured first,
after which it starts producing statistics. Statistics buffers are
usually smallish; the maximum size is half a MiB or so. For such a buffer
you'd have to request the data a number of times, since the sysfs show()
limit is one page (4 KiB usually).
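
To make the limitation concrete, here is a minimal kernel-side sketch (not
from any real driver; the attribute and variable are made up): a sysfs show()
callback is handed exactly one page of buffer space, so a half-MiB statistics
blob cannot be returned in a single read.

#include <linux/device.h>
#include <linux/kernel.h>

static unsigned int some_stat_value;  /* placeholder for real statistics */

static ssize_t stats_show(struct device *dev,
                          struct device_attribute *attr, char *buf)
{
    /*
     * 'buf' is a single page (PAGE_SIZE, usually 4 KiB); sysfs never
     * hands show() more than that, so a large statistics buffer would
     * need repeated reads with offset bookkeeping - awkward compared
     * to one ioctl or read() on a device node.
     */
    return scnprintf(buf, PAGE_SIZE, "%u\n", some_stat_value);
}
static DEVICE_ATTR(stats, 0444, stats_show, NULL);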


Statistics are also often available before the actual frame, since the
whole frame is not needed to compute them. The statistics are used by e.g.
the AEWB algorithm, which then comes up with the new exposure and gain
values. Applying them to the sensor in time is important, since the sensor
may start exposing a new frame before the last one has ended.

This requires event delivery to userspace (Laurent has written about it
under subject [RFC] Video events).

--
Sakari Ailus
sakari.ai...@maxwell.research.nokia.com





Re: RFCv2: Media controller proposal

2009-09-17 Thread Hans Verkuil
On Thursday 17 September 2009 00:28:38 Karicheri, Muralidharan wrote:
 
  And as I explained above, a v4l2_subdev just implements an interface. It
 has
  no relation to devices. And yes, I'm beginning to agree with you that
 subdevice
  was a bad name because it suggested something that it simply isn't.
 
  That said, I also see some advantages in doing this. For statistics or
  histogram sub-devices you can implement a read() call to read the data
  instead of using ioctl. It is more flexible in that respect.
 
 I think this will be more flexible and will be less complex than creating a
 proxy device. For example, as you'll be directly addressing a device, you
 don't need to have any locking to avoid the risk that different threads
 accessing different sub-devices at the same time would result in a command
 being sent to the wrong device. So, both the kernel driver and the userspace
 app can be simpler.
 
 
 Not really. A user application trying to parse the output of a histogram,
 which will really be about 4K in size as described by Laurent - imagine the
 application doing a lot of parsing to decode the values thrown out by sysfs.
 Again, on different platforms these can be in different formats. With an
 ioctl, each of these platforms provides an API to access them and it is much
 simpler to use. The same goes for configuring the IPIPE on DM355/DM365, where
 there are hundreds of parameters and you would have to write a lot of code to
 parse each of these variables via sysfs. I can see it as a nightmare for a
 user space library or application developer.

I believe Mauro was talking about normal device nodes, not sysfs.

What is a bit more complex in Mauro's scheme is that to get hold of the right
device node needed to access a sub-device you will need to first get the
subdev's entity information from the media controller, then go to libudev to
translate major/minor numbers to an actual device path, and then open that.

On the other hand, we will have a library available to do this.
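
For illustration, the libudev step could look roughly like this. This is a
sketch only; it assumes the media controller has already reported the
character device's major/minor numbers, and uses the existing libudev calls.

#include <stdio.h>
#include <sys/sysmacros.h>  /* makedev() */
#include <libudev.h>

/* Translate a (major, minor) pair reported by the media controller
 * into a device node path such as /dev/video1. */
static int devnum_to_path(unsigned int maj, unsigned int min,
                          char *path, size_t len)
{
    struct udev *udev = udev_new();
    struct udev_device *dev;
    const char *node;
    int ret = -1;

    if (!udev)
        return -1;
    dev = udev_device_new_from_devnum(udev, 'c', makedev(maj, min));
    if (dev) {
        node = udev_device_get_devnode(dev);
        if (node) {
            snprintf(path, len, "%s", node);
            ret = 0;
        }
        udev_device_unref(dev);
    }
    udev_unref(udev);
    return ret;
}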

On balance I think that the kernel implementation will be more complex by
creating device nodes, although not by much, and that userspace will be
slightly simpler in the case of using the same mc filehandle in a multi-
threaded application.

Regards,

Hans


-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


Re: RFCv2: Media controller proposal

2009-09-17 Thread Hans Verkuil
On Thursday 17 September 2009 00:15:23 Andy Walls wrote:
 On Wed, 2009-09-16 at 23:34 +0200, Hans Verkuil wrote:
  On Wednesday 16 September 2009 22:50:43 Mauro Carvalho Chehab wrote:
   On Wed, 16 Sep 2009 21:21:16 +0200
 
  C) in all other cases you only get it if a kernel config option is on. And 
  since
  any advanced controls are still exposed in sysfs you can still change those 
  even
  if the config option was off.
 
 That is a user interface and support annoyance.  Either decide to have a
 node for a subdevice or don't.  If a distribution wants to suppress them,
 udev rules could suffice - right?  Changing udev rules is
 (theoretically) easier than rebuilding the kernel for most end users.

Good point.

Hans

 
 Regards,
 Andy
 
 
  What do you think about that? I would certainly like to hear what people 
  think
  about this.
  
  Regards,
  
  Hans
 
 
 



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


Re: RFCv2: Media controller proposal

2009-09-17 Thread Mauro Carvalho Chehab
On Thu, 17 Sep 2009 08:35:57 +0200
Hans Verkuil hverk...@xs4all.nl wrote:

 On Thursday 17 September 2009 00:15:23 Andy Walls wrote:
  On Wed, 2009-09-16 at 23:34 +0200, Hans Verkuil wrote:
   On Wednesday 16 September 2009 22:50:43 Mauro Carvalho Chehab wrote:
    On Wed, 16 Sep 2009 21:21:16 +0200
  
   C) in all other cases you only get it if a kernel config option is on. 
   And since
   any advanced controls are still exposed in sysfs you can still change 
   those even
   if the config option was off.
  
  That is a user interface and support annoyance.  Either decide to have a
  node for a subdevice or don't.  If a distribution wants to suppress them,
  udev rules could suffice - right?  Changing udev rules is
  (theoretically) easier than rebuilding the kernel for most end users.
 
 Good point.

I suspect that, in practice, the drivers will speak for themselves: e.g.
drivers that are used on embedded systems and that require extra parameters
for tweaking will add some callback methods to indicate to the V4L2 core that
they need a /dev node. Others will not implement those methods and won't have
any /dev node associated with them.
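
A sketch of what such opting in could look like (the flag, structure, and
helper names below are hypothetical; nothing like this exists yet):

/* Hypothetical: a flags field the V4L2 core could inspect at
 * registration time. */
#define V4L2_SUBDEV_FL_NEEDS_DEVNODE  (1 << 0)

struct subdev_sketch {
    unsigned long flags;
    /* ... ops, name, etc. ... */
};

/* In the core: only create a /dev node for subdevs that ask for one. */
static void maybe_create_devnode(struct subdev_sketch *sd)
{
    if (sd->flags & V4L2_SUBDEV_FL_NEEDS_DEVNODE)
        create_subdev_devnode(sd);  /* hypothetical helper */
}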

Cheers,
Mauro


Re: RFCv2: Media controller proposal

2009-09-17 Thread Mauro Carvalho Chehab
On Thu, 17 Sep 2009 08:34:23 +0200
Hans Verkuil hverk...@xs4all.nl wrote:

 On Thursday 17 September 2009 00:28:38 Karicheri, Muralidharan wrote:
  
   And as I explained above, a v4l2_subdev just implements an interface. It
  has
   no relation to devices. And yes, I'm beginning to agree with you that
  subdevice
   was a bad name because it suggested something that it simply isn't.
  
   That said, I also see some advantages in doing this. For statistics or
   histogram sub-devices you can implement a read() call to read the data
   instead of using ioctl. It is more flexible in that respect.
  
  I think this will be more flexible and will be less complex than creating a
  proxy device. For example, as you'll be directly addressing a device, you
  don't need to have any locking to avoid the risk that different threads
  accessing different sub-devices at the same time would result in a command
  being sent to the wrong device. So, both the kernel driver and the userspace
  app can be simpler.
  
  
  Not really. A user application trying to parse the output of a histogram,
  which will really be about 4K in size as described by Laurent - imagine the
  application doing a lot of parsing to decode the values thrown out by sysfs.
  Again, on different platforms these can be in different formats. With an
  ioctl, each of these platforms provides an API to access them and it is
  much simpler to use. The same goes for configuring the IPIPE on DM355/DM365,
  where there are hundreds of parameters and you would have to write a lot of
  code to parse each of these variables via sysfs. I can see it as a
  nightmare for a user space library or application developer.
 
 I believe Mauro was talking about normal device nodes, not sysfs.

Yes.

 What is a bit more complex in Mauro's scheme is that to get hold of the right
 device node needed to access a sub-device you will need to first get the
 subdev's entity information from the media controller, then go to libudev to
 translate major/minor numbers to an actual device path, and then open that.

Good point. This reinforces my view that the media controller (or, at least,
its enumeration function) would be better done via sysfs.

As Andy pointed out, one of the biggest advantages is that udev can enrich the
user's experience by calling tweak applications or special applications (like
lirc) when certain media devices are created.

Cheers,
Mauro


Re: RFCv2: Media controller proposal

2009-09-17 Thread Mauro Carvalho Chehab
On Wed, 16 Sep 2009 23:34:08 +0200
Hans Verkuil hverk...@xs4all.nl wrote:

  I'm just guessing, but if the two use cases are so different, maybe we
  shouldn't try to find a common solution for the two problems, or maybe we
  should use an approach similar to debugfs, where you enable/mount only
  where needed (embedded).
 
 They are not *that* different. You still want the ability to discover the
 available device nodes for consumer products (e.g. the alsa device belonging
 to the video device). And there will no doubt be some borderline products
 belonging to, say, the professional consumer market. It's not black-and-white.

Agreed.

  v4l2-object seems good; so do the -host/-client terms that Guennadi is
  proposing.
 
 Just an idea: why not rename struct v4l2_device to v4l2_mc and v4l2_subdev to
 v4l2_object? And if we decide to go all the way, then we can rename 
 video_device
 to v4l2_devnode. Or perhaps we go straight to the media_ prefix instead.
 
 The term 'client' has for me similar problems as 'device': it's used in so 
 many
 different contexts that it is easy to get confused.

IMO, let's patch the docs, but, at least for a while, let's not change API
names again.

Perhaps I'm just too stressed by all the extra merge work I had to do this
time due to the last function rename, which stopped me from merging patches
while the arch changes were not upstream... I generally take one or two days
to merge most patches, but I've been working hard this entire week because
of that.

  This can be easily solved: Just add a Kconfig option for the tweak 
  interfaces
  eventually making it depending on CONFIG_EMBEDDED.
 
 An interesting idea. I don't think you want to make this specific to embedded
 devices only. It can be done as a separate config option within V4L.
 
 I have a problem though: what to do with sub-devices (if you don't mind, I'll
 just keep using that term for now) that want to expose some advanced control.
 We have seen several requests for that lately. 

I think we should discuss this case by case. When I said that people were
considering the media controller as a replacement for the V4L2 API, I was
referring to the fact that, lately, all proposals are thinking of doing
things only at the sub-device level where, in most cases, the control should
be applied via an already-existing API call.

 E.g. an AGC-TOP control for fine-tuning the AGC of tuners.

In this specific case, there's already an AFC parameter for
vidioc_[g/s]_tuner, which is also an example of an advanced control for
tuners. So, IMO, the proper place for AGC-TOP is together with AFC, i.e. in
struct v4l2_tuner.
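
For reference, the existing structure already carries AFC, and a hypothetical
AGC-TOP field could sit next to it. The agc_top member below is purely
illustrative (in practice it would consume one of the reserved fields to keep
the ABI stable); the rest follows the existing v4l2_tuner layout of the era.

struct v4l2_tuner {
    __u32                index;
    __u8                 name[32];
    enum v4l2_tuner_type type;
    __u32                capability;
    __u32                rangelow;
    __u32                rangehigh;
    __u32                rxsubchans;
    __u32                audmode;
    __s32                signal;
    __s32                afc;       /* existing: AFC offset */
    __s32                agc_top;   /* hypothetical: AGC take-over point */
    __u32                reserved[3]; /* was reserved[4] */
};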

 I think this example will be quite typical of several sub-devices: they may
 have one or two 'advanced' controls that can be useful in very particular
 cases for end-users.

In some cases, they can be just one extra G/S_CTRL.

We need a clear rule about which kinds of controls should go via the current
standard V4L2 way and which should go via a subdev interface, to avoid the
'mess controller' scenario.

IMO, they should only use the sub-dev interface when there is more than one
subdev associated with the same /dev/video interface and each may need
different settings for the same control.

Let me use an arbitrary scenario:

/dev/video0 - dsp0 - dsp1 - ...

let's imagine that both the dsp0 and dsp1 blocks are identical, and can do
a set of image enhancement functions, including movement detection and image
filtering.

If we need to set the dsp0 block to do image filtering and the dsp1 block to
do movement detection, no current V4L2 method will fit. In this case, the
subdev interface should be used.
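
As an illustration only (every name below is a hypothetical placeholder, not
a proposed API), per-subdev addressing of the same control could look like:

#include <sys/ioctl.h>

/* Hypothetical: address a control to one specific subdev in the chain. */
struct subdev_control {
    unsigned int entity_id;  /* which block: dsp0 or dsp1 */
    unsigned int ctrl_id;    /* which control: here, its function */
    int value;
};
#define MC_IOC_S_SUBDEV_CTRL  _IOW('M', 2, struct subdev_control)
#define DSP0_ID        1
#define DSP1_ID        2
#define CTRL_FUNCTION  0x100
#define FUNC_FILTER    0
#define FUNC_MOTION    1

static int configure_pipeline(int mc_fd)
{
    /* Same control, different values for two identical blocks: */
    struct subdev_control c0 = { DSP0_ID, CTRL_FUNCTION, FUNC_FILTER };
    struct subdev_control c1 = { DSP1_ID, CTRL_FUNCTION, FUNC_MOTION };

    if (ioctl(mc_fd, MC_IOC_S_SUBDEV_CTRL, &c0) < 0)
        return -1;
    return ioctl(mc_fd, MC_IOC_S_SUBDEV_CTRL, &c1);
}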

 There are a few possible ways of doing this:
 
 1) With the mediacontroller concept from the RFC you can select the tuner
 subdev through the mc device node and call VIDIOC_S_CTRL on that node (and
 with QUERYCTRL you can also query all controls supported by that subdev,
 including these advanced controls).

In this case, what would happen if the S_CTRL were applied at /dev/video?
There would be several possible behaviours (refuse, apply to all subdevs,
apply to the first one that accepts it, etc.), each with advantages and
disadvantages. IMO, too messy.

 2) Create a device node for each subdev even if they have just a single 
 control
 to expose. Possible, but this still seems overkill for me.
 
 3) Use your idea of only creating a device node for subdevs if a kernel config
 is set. If no device nodes should be created, then the control framework can
 still export such advanced controls to sysfs, allowing end-users to change
 them. This is actually quite a nice idea: embedded systems or power-users can
 get full control through the device nodes, while the average end-user can
 just use the control from sysfs if he needs to tweak something.

IMO, both 2 and 3 are OK. Considering Andy's argument that we can always
avoid creating a device node via udev, (2) seems better.
 
 4) Same as 3) but you can still use the mc to select a sub-device and call
 

Re: RFCv2: Media controller proposal

2009-09-16 Thread Mauro Carvalho Chehab
On Sat, 12 Sep 2009 00:39:50 +0200
Hans Verkuil hverk...@xs4all.nl wrote:
  From my previous understanding, those are the needs:
  
  1) V4L2 API will keep being used to control the devices and to do streaming,
  working under the already well defined devices;
 
 Yes.
  
  2) One Kernel object is needed to represent the entire board as a whole, to
  enumerate its sub-devices and to change their topology;
 
 Yes.
 
  3) For some very specific cases, it should be possible to tweak some
  sub-devices to act in a non-usual way;
 
 This will not be for 'some very specific cases'. This will become an essential
 feature on embedded platforms. It's probably the most important part of the
 media controller proposal.

Embedded platforms are a specific use case.

  4) Some new ioctls are needed to control some parts of the devices that 
  aren't
  currently covered by V4L2 API.
 
 No, that is not part of the proposal. Of course, as drivers for the more
 advanced devices are submitted there may be some functionality that is general
 enough to warrant inclusion in the V4L2 API, but that's business as usual.
 
  
  Right?
  
  If so:
  
  (1) already exists;
 
 Obviously.
  
  (2) is the topology manager of the media controller, that should use
  sysfs, due to its nature.
 
 See the separate thread I started on sysfs vs ioctl.
 
  For (3), there are a few alternatives. IMO, the best one is to also use
  sysfs, since we'll have all subdevs already represented there. So, to change
  something, it is just a matter of writing to a sysfs node.
 
 See that same thread why that is a really bad idea.
 
  Another alternative would be to create separate subdevs at /dev, but this
  will end up creating much more complex drivers than probably needed.
 
 I agree with this.
 
  (4) is implemented by some new ioctl additions at V4L2 API.
 
 Not an issue as stated above.

I can't avoid being distracted from my merge duties to address some points
that seem important to highlight in these new RFC discussions.

We need to take care not to create a 'mess controller' instead of a media
controller.

From a few emails on the mailing list, it seems to me that some people are
thinking of the media controller as a replacement for what we have, or as
a solution for all our problems.

It won't solve all our problems, nor should it be a replacement for what we
have.

Basically, there's no reason for scrapping the V4L2 API.  We can extend it,
improve it, add new capabilities, etc., but, considering the lessons learned
from moving from V4L1 to V4L2, for better or for worse, we can't get rid of it.

See the history: V4L2 was proposed in 1999 and added to the kernel in 2002.
Seven years after its implementation, and ten years after its proposal, there
are still drivers that need to be ported. So, creating a media controller as a
replacement for it won't work.

The media controller, as proposed, has two very specific capabilities:

1) enumerate and change media device topology. 

This is something that is out of the scope of the V4L2 API, so it is valid to
think about implementing an API for it.

2) sub-device control. I think the mess started here.

We need to take one more step back and see what this exactly means.

Let me try to identify the concepts and seek for the answers.

What's a sub-device?


Well, if we strip v4l2-framework.txt and driver/media from git grep, we have:

For subdevice, there are several occurrences. All of them refer to the
subvendor/subdevice PCI ID.

For sub-device: most references also talk about PCI subdevices. In all places
(except for V4L) where a subdevice exists, a kernel device is created.

So, basically, only V4L is using sub-device with a meaning different from the
rest of the kernel. Everywhere else, a subdevice is just another device.

It seems that we have a misconception here: sub-device is just an alias for
device.

IMO, it is better to avoid using sub-device, as this causes confusion with the
widely used PCI subdevice designation.

How does the kernel deal with (sub-)devices?


A device has nothing to do with a single physical component. In fact, since
the beginning of Linux, physical devices like superIO chips (now called the
south bridge) have exported several kernel devices associated with them, for
example for the serial interface, printer interface, RTC, PCI controllers,
etc.

Using another example, from a driver I'm working on for checking memory errors
on i7 core machines: in order to get errors from each processor, the driver
needs to talk to 18 devices. All of those 18 kernel devices are part of just
one physical CPU chip. Worse than that, they are the memory controller part of
a single logical unit (called QPI - QuickPath Interconnect). All those 18
devices are bound to a specific PCI bus for each memory controller (on a
machine with 2 CPU sockets, there are 2 buses, 36 PCI devices in total, each
with lots of registers).

So, basically, a kernel device is the kernel 

Re: RFCv2: Media controller proposal

2009-09-16 Thread Hans Verkuil
On Wednesday 16 September 2009 20:15:20 Mauro Carvalho Chehab wrote:
 On Sat, 12 Sep 2009 00:39:50 +0200
 Hans Verkuil hverk...@xs4all.nl wrote:
   From my previous understanding, those are the needs:
   
   1) V4L2 API will keep being used to control the devices and to do 
   streaming,
   working under the already well defined devices;
  
  Yes.
   
   2) One Kernel object is needed to represent the entire board as a whole, to
   enumerate its sub-devices and to change their topology;
  
  Yes.
  
   3) For some very specific cases, it should be possible to tweak some
   sub-devices to act in a non-usual way;
  
  This will not be for 'some very specific cases'. This will become an 
  essential
  feature on embedded platforms. It's probably the most important part of the
  media controller proposal.
 
  Embedded platforms are a specific use case.

However you look at it, it is certainly a very important use case. It's a
huge and important industry and we should have proper support for it in
v4l-dvb. And embedded platforms are used quite differently: where device
drivers for consumer market products should hide complexity (because the
end-user or the generic webcam/video application doesn't want to be bothered
with that), they should expose that complexity for embedded platforms, since
there the application writers want to take full control.

   4) Some new ioctls are needed to control some parts of the devices that 
   aren't
   currently covered by V4L2 API.
  
  No, that is not part of the proposal. Of course, as drivers for the more
  advanced devices are submitted there may be some functionality that is 
  general
  enough to warrant inclusion in the V4L2 API, but that's business as usual.
  
   
   Right?
   
   If so:
   
   (1) already exists;
  
  Obviously.
   
   (2) is the topology manager of the media controller, that should use
   sysfs, due to its nature.
  
  See the separate thread I started on sysfs vs ioctl.
  
   For (3), there are a few alternatives. IMO, the best one is to also use
   sysfs, since we'll have all subdevs already represented there. So, to
   change something, it is just a matter of writing to a sysfs node.
  
  See that same thread why that is a really bad idea.
  
   Another alternative would be to create separate subdevs at /dev, but this
   will end up creating much more complex drivers than probably needed.
  
  I agree with this.
  
   (4) is implemented by some new ioctl additions at V4L2 API.
  
  Not an issue as stated above.
 
 I can't avoid being distracted from my merge duties to address some points
 that seem important to highlight in these new RFC discussions.
 
 We need to take care not to create a 'mess controller' instead of a media
 controller.
 
 From a few emails on the mailing list, it seems to me that some people are
 thinking of the media controller as a replacement for what we have, or as
 a solution for all our problems.
 
 It won't solve all our problems, nor should it be a replacement for what we
 have.
 
 Basically, there's no reason for scrapping the V4L2 API.  We can extend it,
 improve it, add new capabilities, etc., but, considering the lessons learned
 from moving from V4L1 to V4L2, for better or for worse, we can't get rid
 of it.
 
 See the history: V4L2 was proposed in 1999 and added to the kernel in 2002.
 Seven years after its implementation, and ten years after its proposal, there
 are still drivers that need to be ported. So, creating a media controller as
 a replacement for it won't work.

I have absolutely no idea where you got the impression that the media
controller would replace V4L2. V4L2 has proven itself as an API and IMHO was
very well designed for the future. Sure, in hindsight there were a few things
we would do differently now, but especially in the video world it is very hard
to predict the future, so the V4L2 API has done and is doing an excellent job.

The media controller complements the V4L2 API and will in no way replace it.
 
 The media controller, as proposed, has two very specific capabilities:
 
 1) enumerate and change media device topology. 
 
 This is something that is out of the scope of the V4L2 API, so it is valid to
 think about implementing an API for it.
 
 2) sub-device control. I think the mess started here.
 
 We need to take one more step back and see what this exactly means.
 
 Let me try to identify the concepts and seek for the answers.
 
 What's a sub-device?
 
 
 Well, if we strip v4l2-framework.txt and driver/media from git grep, we 
 have:
 
 For subdevice, there are several occurrences. All of them refer to the
 subvendor/subdevice PCI ID.
 
 For sub-device: most references also talk about PCI subdevices. In all
 places (except for V4L) where a subdevice exists, a kernel device is created.
 
 So, basically, only V4L is using sub-device with a meaning different from
 the rest of the kernel.
 Everywhere else, a subdevice is just another device.
 
 It seems that we have 

Re: RFCv2: Media controller proposal

2009-09-16 Thread Guennadi Liakhovetski
On Wed, 16 Sep 2009, Hans Verkuil wrote:

 On Wednesday 16 September 2009 20:15:20 Mauro Carvalho Chehab wrote:
  
  What's a sub-device?
  
  
  Well, if we strip v4l2-framework.txt and driver/media from git grep, we 
  have:
  
   For subdevice, there are several occurrences. All of them refer to the
   subvendor/subdevice PCI ID.
   
   For sub-device: most references also talk about PCI subdevices. In all
   places (except for V4L) where a subdevice exists, a kernel device is
   created.
   
   So, basically, only V4L is using sub-device with a meaning different from
   the rest of the kernel.
   Everywhere else, a subdevice is just another device.
   
   It seems that we have a misconception here: sub-device is just an alias
   for device.
   
   IMO, it is better to avoid using sub-device, as this causes confusion
   with the widely used PCI subdevice designation.
 
 We discussed this on the list at the time. I think my original name was
 v4l2-client. If you can come up with a better name, then I'm happy to do a
 search and replace.

FWIW, I'm also mostly using the video -host and -client notation in 
soc-camera.

Thanks
Guennadi
---
Guennadi Liakhovetski


Re: RFCv2: Media controller proposal

2009-09-16 Thread Mauro Carvalho Chehab
On Wed, 16 Sep 2009 21:21:16 +0200
Hans Verkuil hverk...@xs4all.nl wrote:

 On Wednesday 16 September 2009 20:15:20 Mauro Carvalho Chehab wrote:
   On Sat, 12 Sep 2009 00:39:50 +0200
   Hans Verkuil hverk...@xs4all.nl wrote:
From my previous understanding, those are the needs:

1) V4L2 API will keep being used to control the devices and to do 
streaming,
working under the already well defined devices;
   
   Yes.

 2) One Kernel object is needed to represent the entire board as a whole, to
enumerate its sub-devices and to change their topology;
   
   Yes.
   
3) For some very specific cases, it should be possible to tweak some
 sub-devices to act in a non-usual way;
   
   This will not be for 'some very specific cases'. This will become an 
   essential
   feature on embedded platforms. It's probably the most important part of 
   the
   media controller proposal.
  
  Embedded platforms are a specific use case.
 
 However you look at it, it is certainly a very important use case. 

Yes, and I never said we shouldn't address embedded platform needs.
 It's a
 huge and important industry and we should have proper support for it in 
 v4l-dvb.

Agreed.

 And embedded platforms are used quite differently. Where device drivers for
 consumer market products should hide complexity (because the end-user or the
 generic webcam/video application doesn't want to be bothered with that), they
 should expose that complexity for embedded platforms since there the
 application writers want to take full control.

I'm just guessing, but if the two use cases are so different, maybe we
shouldn't try to find a common solution for the two problems, or maybe we
should use an approach similar to debugfs, where you enable/mount only where
needed (embedded).

  IMO, it is better to avoid using sub-device, as this causes confusion with
  the widely used PCI subdevice designation.
 
 We discussed this on the list at the time. I think my original name was
 v4l2-client. If you can come up with a better name, then I'm happy to do a
 search and replace.
 
 Suggestions for a better name are welcome! Perhaps something more abstract
 like v4l2-block? v4l2-part? v4l2-object? v4l2-function?
 
 But the concept behind it will really not change with a different name.
 
 Anyway, the definition of a sub-device within v4l is 'anything that has a
 struct v4l2_subdev'. Seen in C++ terms a v4l2_subdev struct defines several
 possible abstract interfaces. And objects can implement ('inherit') one or
 more of these. Perhaps v4l2-object is a much better term since that removes
 the association with a kernel device, which it is most definitely not.
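
For reference, this is roughly how those interface groups looked at the time
(abbreviated; the two decoder_*_ops instances are hypothetical examples):

#include <media/v4l2-subdev.h>

/* Hypothetical example instances for a video decoder: */
static const struct v4l2_subdev_core_ops decoder_core_ops = { };
static const struct v4l2_subdev_video_ops decoder_video_ops = { };

/* An object 'inherits' only the interfaces it implements: a video
 * decoder fills in core and video, and leaves tuner and audio NULL. */
static const struct v4l2_subdev_ops decoder_ops = {
    .core  = &decoder_core_ops,
    .video = &decoder_video_ops,
};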

v4l2-object seems good; so do the -host/-client terms that Guennadi is
proposing.

  4) Creating devices for sub-devices is the approach already taken by all
  other drivers in the kernel.
 
 I gather that when you use the term 'device' you mean a 'device node' that
 userspace can access. It is an option to have sub-devices create a device
 node. Note that that would have to be a device node created by v4l; an i2c
 device node for example is quite useless to us since you can only use it
 for i2c ioctls.
 
 I have considered this myself as well. The reason I decided against it was
 that I think it is a lot of extra overhead and the creation of even more
 device nodes when adding a single media controller would function just as
 well. Especially since all this is quite uninteresting for most of the non-
 embedded drivers.

This can be easily solved: just add a Kconfig option for the tweak interfaces,
possibly making it depend on CONFIG_EMBEDDED.

 In fact, many of the current sub-devices have nothing or
 almost nothing that needs to be controlled by userspace, so creating a device
 node just for the sake of consistency sits not well with me.

If the device never needs to be seen by userspace, then we can simply not
create a device node for it.

 And as I explained above, a v4l2_subdev just implements an interface. It has
 no relation to devices. And yes, I'm beginning to agree with you that 
 subdevice
 was a bad name because it suggested something that it simply isn't.
 
 That said, I also see some advantages in doing this. For statistics or
 histogram sub-devices you can implement a read() call to read the data
 instead of using ioctl. It is more flexible in that respect.

I think this will be more flexible and will be less complex than creating a
proxy device. For example, as you'll be directly addressing a device, you
don't need to have any locking to avoid the risk that different threads
accessing different sub-devices at the same time would result in a command
being sent to the wrong device. So, both the kernel driver and the userspace
app can be simpler.

 This is definitely an interesting topic that can be discussed both during
 the LPC and here on the list.
 
 Regards,
 
   Hans
 




Cheers,
Mauro

Re: RFCv2: Media controller proposal

2009-09-16 Thread Hans Verkuil
On Wednesday 16 September 2009 22:50:43 Mauro Carvalho Chehab wrote:
 On Wed, 16 Sep 2009 21:21:16 +0200
 Hans Verkuil hverk...@xs4all.nl wrote:
 
  On Wednesday 16 September 2009 20:15:20 Mauro Carvalho Chehab wrote:
   On Sat, 12 Sep 2009 00:39:50 +0200
   Hans Verkuil hverk...@xs4all.nl wrote:
 From my previous understanding, those are the needs:
 
 1) V4L2 API will keep being used to control the devices and to do 
 streaming,
 working under the already well defined devices;

Yes.
 
 2) One Kernel object is needed to represent the entire board as a whole, to
 enumerate its sub-devices and to change their topology;

Yes.

 3) For some very specific cases, it should be possible to tweak some
 sub-devices to act in a non-usual way;

This will not be for 'some very specific cases'. This will become an 
essential
feature on embedded platforms. It's probably the most important part of 
the
media controller proposal.
   
   Embedded platforms are a specific use case.
  
  However you look at it, it is certainly a very important use case. 
 
 Yes, and I never said we shouldn't address embedded platform needs.
  It's a
  huge and important industry and we should have proper support for it in 
  v4l-dvb.
 
 Agreed.
 
  And embedded platforms are used quite differently. Where device drivers
  for consumer market products should hide complexity (because the end-user
  or the generic webcam/video application doesn't want to be bothered with
  that), they should expose that complexity for embedded platforms since
  there the application writers want to take full control.
 
 I'm just guessing, but if the two use cases are so different, maybe we
 shouldn't try to find a common solution for the two problems, or maybe we
 should use an approach similar to debugfs, where you enable/mount only where
 needed (embedded).

They are not *that* different. You still want the ability to discover the
available device nodes for consumer products (e.g. the alsa device belonging
to the video device). And there will no doubt be some borderline products
belonging to, say, the professional consumer market. It's not black-and-white.

snip

 v4l2-object seems good; so do the -host/-client terms that Guennadi is
 proposing.

Just an idea: why not rename struct v4l2_device to v4l2_mc and v4l2_subdev to
v4l2_object? And if we decide to go all the way, then we can rename video_device
to v4l2_devnode. Or perhaps we go straight to the media_ prefix instead.

The term 'client' has for me similar problems as 'device': it's used in so many
different contexts that it is easy to get confused.

   4) Creating devices for sub-devices is the approach already taken by all
   other drivers in the kernel.
  
  I gather that when you use the term 'device' you mean a 'device node' that
  userspace can access. It is an option to have sub-devices create a device
  node. Note that that would have to be a device node created by v4l; an i2c
  device node for example is quite useless to us since you can only use it
  for i2c ioctls.
  
  I have considered this myself as well. The reason I decided against it was
  that I think it is a lot of extra overhead and the creation of even more
  device nodes when adding a single media controller would function just as
  well. Especially since all this is quite uninteresting for most of the non-
  embedded drivers.
 
 This can be easily solved: just add a Kconfig option for the tweak
 interfaces, possibly making it depend on CONFIG_EMBEDDED.

An interesting idea. I don't think you want to make this specific to embedded
devices only. It can be done as a separate config option within V4L.

I have a problem though: what to do with sub-devices (if you don't mind, I'll
just keep using that term for now) that want to expose some advanced control.
We have seen several requests for that lately. E.g. an AGC-TOP control for
fine-tuning the AGC of tuners.

I think this example will be quite typical of several sub-devices: they may
have one or two 'advanced' controls that can be useful in very particular
cases for end-users.

There are a few possible ways of doing this:

1) With the mediacontroller concept from the RFC you can select the tuner
subdev through the mc device node and call VIDIOC_S_CTRL on that node (and
with QUERYCTRL you can also query all controls supported by that subdev,
including these advanced controls).

2) Create a device node for each subdev even if they have just a single control
to expose. Possible, but this still seems overkill for me.

3) Use your idea of only creating a device node for subdevs if a kernel config
is set. If no device nodes should be created, then the control framework can
still export such advanced controls to sysfs, allowing end-users to change
them. This is actually quite a nice idea: embedded systems or power-users can
get full control through the device nodes, while the average 

RE: RFCv2: Media controller proposal

2009-09-16 Thread Karicheri, Muralidharan

 And as I explained above, a v4l2_subdev just implements an interface. It
has
 no relation to devices. And yes, I'm beginning to agree with you that
subdevice
 was a bad name because it suggested something that it simply isn't.

 That said, I also see some advantages in doing this. For statistics or
 histogram sub-devices you can implement a read() call to read the data
 instead of using ioctl. It is more flexible in that respect.

I think this will be more flexible and will be less complex than creating a
proxy device. For example, as you'll be directly addressing a device, you
don't need to have any locking to avoid the risk that different threads
accessing different sub-devices at the same time would result in a command
being sent to the wrong device. So, both the kernel driver and the userspace
app can be simpler.


Not really. A user application trying to parse the output of a histogram,
which will really be about 4K in size as described by Laurent - imagine the
application doing a lot of parsing to decode the values thrown out by sysfs.
Again, on different platforms these can be in different formats. With an
ioctl, each of these platforms provides an API to access them and it is much
simpler to use. The same goes for configuring the IPIPE on DM355/DM365, where
there are hundreds of parameters and you would have to write a lot of code to
parse each of these variables via sysfs. I can see it as a nightmare for a
user space library or application developer.


 This is definitely an interesting topic that can be discussed both during
 the LPC and here on the list.

 Regards,

  Hans





Cheers,
Mauro


Re: RFCv2: Media controller proposal

2009-09-15 Thread Laurent Pinchart
Hi Hans,

On Thursday 10 September 2009 17:00:40 Hans Verkuil wrote:
  On Thu, 10 Sep 2009, Hans Verkuil wrote:
Could entities not be completely addressed (configuration ioctls)
through the mc-node?
  
   Not sure what you mean.
 
  Instead of having a device node for each entity, the ioctls for each
  entity are done on the media controller node, addressing an entity by ID.
 
 I definitely don't want to go there. Use device nodes (video, fb, alsa,
 dvb, etc) for streaming the actual media as we always did and use the
 media controller for controlling the board. It keeps everything nicely
 separate and clean.

I agree with this, but I think it might be what Patrick meant as well.

Besides enumeration and link setup, the media controller device will allow
direct access to entities to get/set controls and formats. As such, its API
will overlap with the V4L2 control and format API. This is not a problem at
all, as the two have different use cases: control/format at the V4L2 level
are meant for simple applications in a backward-compatible fashion, while
control/format at the media controller level are meant for power users.

V4L2 devices will be used for streaming video as that's what they do best. We 
don't want a video streaming API at the media controller level (not completely 
true, as we are toying with the idea of shared video buffers, but that's for 
later).

In the long term I can imagine the V4L2 control/format ioctls being deprecated 
and all control/format access being done through the media controller. That's 
very long term though.

-- 
Laurent Pinchart


Re: RFCv2: Media controller proposal

2009-09-15 Thread Laurent Pinchart
On Thursday 10 September 2009 23:59:20 Hans Verkuil wrote:
 On Thursday 10 September 2009 23:28:40 Guennadi Liakhovetski wrote:
  Hi Hans
 
  a couple of comments / questions from the first glance
 
  On Thu, 10 Sep 2009, Hans Verkuil wrote:
 
[snip]

   This requires no API changes and is very easy to implement. One problem
   is that this is not thread-safe. We can either supply some sort of
   locking mechanism, or just tell the application programmer to do the
   locking in the application. I'm not sure what is the correct approach
   here. A reasonable compromise would be to store the target entity as
   part of the filehandle. So you can open the media controller multiple
   times and each handle can set its own target entity.
  
   This also has the advantage that you can have a filehandle 'targeted'
   at a resizer and a filehandle 'targeted' at the previewer, etc. If you
   want to use the same filehandle from multiple threads, then you have to
   implement locking yourself.
 
  You mean the driver should only care about internal consistency, and the
  user is allowed to otherwise shoot herself in the foot? Makes sense to
  me:-)
 
 Basically, yes :-)
 
 You can easily make something like a VIDIOC_MC_LOCK and VIDIOC_MC_UNLOCK
 ioctl that can be used to get exclusive access to the MC. Or we could
 reuse the G/S_PRIORITY ioctls. The first just feels like a big hack to me,
 the second has some merit, I think.

The target entity should really be stored at the file handle level, otherwise
Very Bad Stuff (TM) will happen. Then, if a multi-threaded application wants
to access the file handle from multiple threads, it will need to implement
its own serialization.
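
A minimal sketch of that idea (the structure and ioctl names here are
hypothetical placeholders, not a proposed API):

#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/types.h>
#include <linux/errno.h>

#define MC_IOC_SET_TARGET  _IOW('M', 3, __u32)  /* hypothetical */

/* Hypothetical: each open() of the mc node gets its own state, so two
 * threads with separate file handles can target different entities.
 * 'fh' would be allocated in open() and stored in file->private_data. */
struct mc_fh {
    __u32 target;  /* entity id this handle talks to */
};

static long mc_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
    struct mc_fh *fh = file->private_data;

    switch (cmd) {
    case MC_IOC_SET_TARGET:
        fh->target = arg;  /* no global state is touched */
        return 0;
    /* subsequent get/set ioctls would be routed to fh->target */
    }
    return -ENOIOCTLCMD;
}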

I don't think any VIDIOC_MC_LOCK/UNLOCK is required; what would be the use
cases for them?

   Open issues
   ===

[snip]

   2) There can be a lot of device nodes in complicated boards. One
   suggestion is to only register them when they are linked to an entity
   (i.e. can be active). Should we do this or not?
 
  Really a lot of device nodes? not sub-devices? What can this be? Isn't
  the decision when to register them board-specific?
 
 Sub-devices do not in general have device nodes (note that i2c sub-devices
 will have an i2c device node, of course).
 
 When to register device nodes is in the end driver-specific, but what to do
 when enumerating input device nodes and the device node doesn't exist yet?
 
 I can't put my finger on it, but my intuition says that doing this is
 dangerous. I can't oversee all the consequences.

Why would it be dangerous ? As long as an input or output device node is not 
connected to anything in the internal board graph it will be completely 
pointless for applications to use those device nodes. What do you imagine 
going wrong ?

[snip]

   6) For now I think we should leave enumerating input and output
   connectors to the bridge drivers (ENUMINPUT/ENUMOUTPUT). But as a
   future step it would make sense to also enumerate those in the media
   controller. However, it is not entirely clear what the relationship
   will be between that and the existing enumeration ioctls.
 
  Why should a bridge driver care? This isn't board-specific, is it?
 
 I don't follow you. What input and output connectors a board has is by
 definition board specific. If you can enumerate them through the media
 controller, then you can be more precise how they are hooked up. E.g. an
 antenna input is connected to a tuner sub-device, while the composite
 video-in is connected to a video decoder and the audio inputs to an audio
 mixer sub-device. All things that cannot be represented by ENUMINPUT. But
 do we really care about that?

 My opinion is that we should leave this alone for now. There is enough to
 do and we can always add it later.

In the end that boils down to a (few) table(s) of static data. It won't make
drivers more complex, and I think we should support enumerating the input and 
output connectors at the media controller level, if only for the sake of 
completeness and coherency.

-- 
Regards,

Laurent Pinchart


RE: RFCv2: Media controller proposal

2009-09-11 Thread Hiremath, Vaibhav

 -Original Message-
 From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
 ow...@vger.kernel.org] On Behalf Of Hans Verkuil
 Sent: Thursday, September 10, 2009 12:43 PM
 To: linux-media@vger.kernel.org
 Subject: RFCv2: Media controller proposal

 Hi all,

 Here is the new Media Controller RFC. It is completely rewritten
 from the
 original RFC. This original RFC can be found here:

 http://www.archivum.info/video4linux-list%40redhat.com/2008-
 07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_m
 edia_device

[Hiremath, Vaibhav] I can see the implementation has changed/evolved a lot
here since the last RFC.

I've added some quick comments below and will try to provide more over the
weekend.

 This document will be the basis of the discussions during the
 Plumbers
 Conference in two weeks time.

 Open issue #3 is the main unresolved item, but I hope to come up
 with something
 during the weekend.

 Regards,

   Hans


 RFC: Media controller proposal

 Version 2.0

 Background
 ==

 This RFC is a new version of the original RFC that was written in
 cooperation
 with and on behalf of Texas Instruments about a year ago.

 Much work has been done in the past year to put the foundation in
 place to
 be able to implement a media controller and now it is time for this
 updated
 version. The intention is to discuss this in more detail during this
 years
 Plumbers Conference.

 Although the high-level concepts are the same as in the original
 RFC, many
 of the details have changed based on what was learned over the past
 year.

 This RFC is based on the original discussions with Manjunath Hadli
 from TI
 last year, on discussions during a recent meeting between Laurent
 Pinchart,
 Guennadi Liakhovetski and myself, and on recent discussions with
 Nokia.
 Thanks to Sakari Ailus for doing an initial review of this RFC.

 One note regarding terminology: a 'board' is the name I use for the
 SoC,
 PCI or USB device that contains the video hardware. Each board has
 its own
 driver instance and its own v4l2_device struct. Originally I called
 it
 'device', but that name is already used in too many places.


 What is a media controller?
 ===

 In a nutshell: a media controller is a new v4l device node that can
 be used
 to discover and modify the topology of the board and to give access
 to the
 low-level nodes (such as previewers, resizers, color space
 converters, etc.)
 that are part of the topology.

 It does not do any streaming, that is the exclusive domain of video
 nodes.
 It is meant purely for controlling a board as a whole.


 Why do we need one?
 ===

 There are currently several problems that are impossible to solve
 within the
 current V4L2 API:

 1) Discovering the various device nodes that are typically created
 by a video
 board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes,
 framebuffer
 nodes, input nodes (for e.g. webcam button events or IR remotes).

 It would be very handy if an application can just open an
 /dev/v4l/mc0 node
 and be able to figure out where all the nodes are, and to be able to
 figure
 out what the capabilities of the board are (e.g. does it support
 DVB, is the
 audio going through a loopback cable or is there an alsa device, can
 it do
 compressed MPEG video, etc. etc.). Currently the end-user has no
 choice but to
 supply the device nodes manually.

[Hiremath, Vaibhav] I am still confused here. Can we take one common use
case? For example, say a video board has one /dev/fb0 and one /dev/video0,
and along with those we have one node for the media controller, /dev/v4l/mc0.

How do we interact or talk to /dev/fb0 through the media controller node?

I looked into the presentation you created for LPC, I guess, but I am still
not clear on this.
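
Purely as an illustration of the intent (all names below are hypothetical;
no such API exists yet): enumeration through the mc node would return a
descriptor for each of the board's nodes, including the fb one, so the
application learns where everything is but still streams through the nodes
themselves.

struct mc_entity_desc {            /* hypothetical */
    unsigned int id;
    unsigned int type;             /* e.g. MC_TYPE_VIDEO, MC_TYPE_FB */
    unsigned int major, minor;     /* device node numbers */
};

/* The application would loop over an enumeration ioctl on /dev/v4l/mc0
 * and learn that /dev/fb0 and /dev/video0 belong to the same board.
 * Pixel data still flows through /dev/fb0 and /dev/video0 themselves:
 * the mc node only describes and connects, it never streams. */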

 2) Some of the newer SoC devices can connect or disconnect internal
 components
 dynamically. As an example, the omap3 can either connect a sensor
 output to a
 CCDC module to a previewer module to a resizer module and finally to
 a capture
 device node. But it is also possible to capture the sensor output
 directly
 after the CCDC module. The previewer can get its input from another
 video
 device node and output either to the resizer or to another video
 capture
 device node. The same is true for the resizer, that too can get its
 input from
 a device node.

 So there are lots of connections here that can be modified at will
 depending
 on what the application wants. And in real life there are even more
 links than
 I mentioned here. And it will only get more complicated in the
 future.

 All this requires that there has to be a way to connect and
 disconnect parts
 of the internal topology of a video board at will.

 3) There is increasing demand to be able to control e.g. sensors or video
 encoders/decoders in a much more precise manner. Currently the V4L2 API
 provides only limited support in the form of a set of controls. But
 when
 building a high-end camera the developer

Re: RFCv2: Media controller proposal

2009-09-11 Thread Hans Verkuil
On Friday 11 September 2009 01:08:30 Karicheri, Muralidharan wrote:
 
 Hans,
 
 Thanks for your reply..
 
 
  What do you mean by controlling the board?
 
 In general: the media controller can do anything except streaming. However,
 that is an extreme position and in practice all the usual ioctls should
 remain supported by the video device nodes.
 
  We have currently ported DMxxx VPBE display drivers to 2.6.31 (Not
 submitted yet to mainline). In our current implementation, the output and
 standard/mode are controlled through sysfs because it is a common
 functionality affecting both v4l and FBDev framebuffer devices. Traditional
 applications such as x-windows should be able to stream video/graphics to VPBE
 output. V4l2 applications should be able to stream video. Both these
 devices needs to know the display parameters such as frame buffer
 resolution, field etc that are to be configured in the video or osd layers
 in VPBE to output frames to the encoder that is driving the output. So to
 stream, first the output and mode/standard are selected using sysfs command
 and then the application is started. Following scenarios are supported by
 VPBE display drivers in our internal release:-
 
  1)Traditional FBDev applications (x-window) can be run using OSD device.
 Allows changing mode/standards at the output using fbset command.
 
  2)v4l2 driver doesn't provide s_output/s_std support since it is done
 through sysfs.
 
  3)Applications that requires to stream both graphics and video to the
 output uses both FBDev and V4l2 devices. So these application first set the
 output and mode/standard using sysfs, before doing io operations with these
 devices.
 
 I don't understand this approach. I'm no expert on the fb API but as far as
 I know the V4L2 API allows a lot more precision over the video timings
 (esp. with the new API you are working on). Furthermore, I assume it is
 possible to use the DMxxx without an OSD, right?
 
 
 Right. That case (2 above) is easily taken care of by the v4l2 device
 driver. We used the FBDev driver to drive the OSD layer because that way
 VPBE can be used by user space applications like x-windows. What is the
 alternative for this? Is there an example of a v4l2 device using OSD-like
 hardware and running x-windows or other traditional graphics applications?
 I am not aware of any, and the solution seems to be the right one here.
 
 So the solution we used (case 3)involves FBDev to drive the OSD layers and 
 V4L2 to drive the video layer.

As usual, ivtv is doing all that. The ivtv driver is the main controller of
the hardware. The ivtvfb driver provides the FB API towards the OSD. The
X driver for the OSD is available here:

http://dl.ivtvdriver.org/xf86-video-ivtv/archive/1.0.x/xf86-video-ivtv-1.0.2.tar.gz

This is the way to handle it.

 
 
 This is very similar to the ivtv and ivtvfb drivers: if the framebuffer is
 in
 use, then you cannot change the output standard (you'll get an EBUSY error)
 through a video device node.
 
 
 Do ivtvfb and ivtv work with the same set of v4l2 sub-devices for output?
 In our case, VPBE can work with any sub-device that can accept a
 BT.656/BT.1120/RGB bus interface. So the FBDev device and the V4L2 device
 (either as a standalone device or as co-existent devices) should work with
 the same set of sub-devices. So the question is, how can both these bridge
 devices work on the same sub-device? If both can work with the same
 sub-device, then what you say is true and it can be handled. That is the
 reason we used the sysfs/Encoder manager as explained in my earlier email.

Look at ivtvfb.c (it's in media/video/ivtv). The ivtvfb_init function will just
find any ivtv driver instances and register itself with them. Most of the
hard work is actually done by ivtv and ivtvfb is just the front-end that
implements the FB API. The video and OSD hardware is usually if not always
so intertwined that it should be controlled by one driver, not two.

This way ivtv keeps full control over the sub-devices as well and all output
changes will go to the same encoder, regardless of whether they originated
from the fb or a video device node.
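
To make that cooperation concrete from the application side, here is a
minimal sketch (the device path is only an example, and whether the error
comes from open() or from the ioctl is driver-specific):

  /* Sketch: changing the output standard fails with EBUSY while the
   * framebuffer owns the output. VIDIOC_S_STD is the standard V4L2 ioctl. */
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/videodev2.h>

  static int set_output_std(const char *devnode, v4l2_std_id std)
  {
          int fd = open(devnode, O_RDWR);

          if (fd < 0)
                  return -1;
          if (ioctl(fd, VIDIOC_S_STD, &std) < 0) {
                  if (errno == EBUSY)
                          fprintf(stderr, "standard locked: fb in use?\n");
                  close(fd);
                  return -1;
          }
          close(fd);
          return 0;
  }

  /* e.g. set_output_std("/dev/video0", V4L2_STD_PAL); */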

 
 That's exactly what you would expect. If the framebuffer isn't used, then
 you
 can just use the normal V4L2 API to change the output standard.
 
 In practice, I think that you can only change the resolution in the FB API.
 Not things like the framerate, let alone precise pixelclock, porch and sync
 widths.
 
 
 There are 3 use cases:
 
 1) A pure FBDev device driving graphics to the VPBE OSD layers -> sub-devices
 -> Display (LCD/TV)
 
 This would require FBDev loading the required v4l2 sub-device (not sure
 if the FBDev community would like this approach) and using it to drive
 the output. We would not be able to change the output, but output
 resolutions and timings can be controlled through the fbset command,
 which allows you to change pixel clock, porch, sync etc.

Bad idea. The fb API and framework is not really able to deal with the
complexity of combined video and OSD devices. The v4l2 framework can
(esp. when we have a media controller).

RE: RFCv2: Media controller proposal

2009-09-11 Thread Hiremath, Vaibhav

 -Original Message-
 From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
 ow...@vger.kernel.org] On Behalf Of Hans Verkuil
 Sent: Friday, September 11, 2009 1:57 AM
 To: Karicheri, Muralidharan
 Cc: Patrick Boettcher; Linux Media Mailing List
 Subject: Re: RFCv2: Media controller proposal
 
 On Thursday 10 September 2009 21:19:25 Karicheri, Muralidharan
 wrote:
  Hans,
 
  I haven't gone through the RFC, but thought will respond to the
 below comment.
 
  Murali Karicheri
  Software Design Engineer
  Texas Instruments Inc.
  Germantown, MD 20874
  new phone: 301-407-9583
  Old Phone : 301-515-3736 (will be deprecated)
  email: m-kariche...@ti.com
 
  
   I may be mistaken, but I don't believe soundcards have the same
   complexity as media boards.
  
   When I launch alsa-mixer I see 4 input devices where I can select 4
   different sources. This gives 16 combinations, which is enough for me
   to call it 'complex'.
  
   Could entities not be completely addressed (configuration ioctls)
   through the mc-node?
  
   Not sure what you mean.
  
   Instead of having a device node for each entity, the ioctls for each
   entity are done on the media controller node, addressing an entity
   by ID.
  
   I definitely don't want to go there. Use device nodes (video, fb, alsa,
   dvb, etc) for streaming the actual media as we always did and use the
   media controller for controlling the board. It keeps everything nicely
   separate and clean.
  
 
 
   What do you mean by controlling the board?
 
 In general: the media controller can do anything except streaming.
 However, that is an extreme position and in practice all the usual
 ioctls should remain supported by the video device nodes.
 
 We have currently ported the DMxxx VPBE display drivers to 2.6.31 (not
 submitted yet to mainline). In our current implementation, the
 output and standard/mode are controlled through sysfs because it is
 a common functionality affecting both the v4l and FBDev framebuffer
 devices. Traditional applications such as x-windows should be able to
 stream video/graphics to the VPBE output. V4L2 applications should be
 able to stream video. Both these devices need to know the display
 parameters such as frame buffer resolution, field etc. that are to be
 configured in the video or osd layers in VPBE to output frames to
 the encoder that is driving the output. So to stream, first the
 output and mode/standard are selected using a sysfs command and then
 the application is started. The following scenarios are supported by
 the VPBE display drivers in our internal release:
 
 1) Traditional FBDev applications (x-window) can be run using the OSD
 device. This allows changing the mode/standard at the output using the
 fbset command.
 
 2) The v4l2 driver doesn't provide s_output/s_std support since that is
 done through sysfs.
 
 3) Applications that require streaming both graphics and video to
 the output use both the FBDev and V4L2 devices. These applications
 first set the output and mode/standard using sysfs before doing I/O
 operations with these devices.
 
 I don't understand this approach. I'm no expert on the fb API but as
 far as I know the V4L2 API allows a lot more precision over the video
 timings (esp. with the new API you are working on). Furthermore, I
 assume it is possible to use the DMxxx without an OSD, right?
 
 This is very similar to the ivtv and ivtvfb drivers: if the
 framebuffer is in
 use, then you cannot change the output standard (you'll get an EBUSY
 error)
 through a video device node.
 
[Hiremath, Vaibhav] The framebuffer is always in use until you call the
FBIOBLANK ioctl.

 That's exactly what you would expect. If the framebuffer isn't used,
 then you
 can just use the normal V4L2 API to change the output standard.
 
 In practice, I think that you can only change the resolution in the
 FB API.
 Not things like the framerate, let alone precise pixelclock, porch
 and sync
 widths.
 
 Much better to let the two cooperate: you can use both APIs, but you
 can't
 change the resolution in the fb if streaming is going on, and you
 can't
 change the output standard of a video device node if that changes
 the
 resolution while the framebuffer is in use.
 
[Hiremath, Vaibhav] To overcome this we brought in / rely on the SYSFS
interface; the same is applicable to OMAP devices.

We are using the SYSFS interface for all common features like
standard/output selection, etc.

I believe the media controller will play some role here.

Thanks,
Vaibhav 

 No need for additional sysfs entries.
 
 
 There is an encoder manager with which all available encoders
 register (using an internally developed interface), and based on
 commands received at the FBDev/sysfs interfaces, the current encoder
 and the current standard are selected by the encoder manager.
 The encoder manager provides an API to retrieve current timing
 information from the current encoder. The FBDev and V4L2 drivers use
 this API to configure the OSD/video layers for streaming.
 
  As you can

RE: RFCv2: Media controller proposal

2009-09-11 Thread Hiremath, Vaibhav

 -Original Message-
 From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
 ow...@vger.kernel.org] On Behalf Of Hans Verkuil
 Sent: Friday, September 11, 2009 11:51 AM
 To: Karicheri, Muralidharan
 Cc: Patrick Boettcher; Linux Media Mailing List
 Subject: Re: RFCv2: Media controller proposal
 
 On Friday 11 September 2009 01:08:30 Karicheri, Muralidharan wrote:
 
  Hans,
 
  Thanks for your reply..
  
  
snip

 
 Right. That case (2 above) is easily taken care of by the v4l2 device
 driver. We used the FBDev driver to drive the OSD layer because that way
 the VPBE can be used by user space applications like x-windows. What is
 the alternative for this?
 Is there an example v4l2 device using OSD-like hardware and running
 x-windows or other traditional graphics applications? I am not aware
 of any, and the solution seems to be the right one here.
 
 So the solution we used (case 3) involves FBDev driving the OSD
 layers and V4L2 driving the video layer.
 
 As usual, ivtv is doing all that. The ivtv driver is the main
 controller of
 the hardware. The ivtvfb driver provides the FB API towards the OSD.
 The
 X driver for the OSD is available here:
 
 http://dl.ivtvdriver.org/xf86-video-ivtv/archive/1.0.x/xf86-video-
 ivtv-1.0.2.tar.gz
 
 This is the way to handle it.
 
 
  
  This is very similar to the ivtv and ivtvfb drivers: if the
 framebuffer is
  in
  use, then you cannot change the output standard (you'll get an
 EBUSY error)
  through a video device node.
  
 
 Do ivtvfb and ivtv work with the same set of v4l2 sub-devices for
 output? In our case, the VPBE can work with any sub-device that can
 accept a BT.656/BT.1120/RGB bus interface. So the FBDev device and the
 V4L2 device (either as a standalone device or as co-existent devices)
 should work with the same set of sub-devices. So the question is, how
 can both these bridge devices work on the same sub-device? If both can
 work with the same sub-device, then what you say is true and can be
 handled. That is the reason we used the sysfs/encoder manager as
 explained in my earlier email.
 
 Look at ivtvfb.c (it's in media/video/ivtv). The ivtvfb_init
 function will just
[Hiremath, Vaibhav] I think our mails crossed each other.

Interesting, and this is something new for me. Let me understand the
implementation here first, then I can provide some comments on this.

Thanks,
Vaibhav

 find any ivtv driver instances and register itself with them. Most
 of the
 hard work is actually done by ivtv and ivtvfb is just the front-end
 that
 implements the FB API. The video and OSD hardware is usually if not
 always
 so intertwined that it should be controlled by one driver, not two.
 
 This way ivtv keeps full control over the sub-devices as well and
 all output
 changes will go to the same encoder, regardless of whether they
 originated
 from the fb or a video device node.
 
 
  That's exactly what you would expect. If the framebuffer isn't
 used, then
  you
  can just use the normal V4L2 API to change the output standard.
  
  In practice, I think that you can only change the resolution in
 the FB API.
  Not things like the framerate, let alone precise pixelclock,
 porch and sync
  widths.
 
 
 There are 3 use cases:
 
 1) A pure FBDev device driving graphics to the VPBE OSD layers -> sub-
 devices -> Display (LCD/TV)
 
 This would require FBDev loading the required v4l2 sub-device (not
 sure if the FBDev community would like this approach) and using it
 to drive the output. We would not be able to change the output, but
 output resolutions and timings can be controlled through the fbset
 command, which allows you to change pixel clock, porch, sync etc.
 
 Bad idea. The fb API and framework is not really able to deal with
 the
 complexity of combined video and OSD devices. The v4l2 framework can
 (esp.
 when we have a media controller).
 
 2) A pure V4L2 device driving video to the VPBE video layers -> sub-
 devices -> Display (LCD/TV)
 - No issues here
 
 3) v4l2 and FBDev nodes co-exist. V4L2 drives the video and FBDev
 drives the OSD layers, and the combined output goes -> VPBE -> sub-
 devices -> Display (LCD/TV)
 - Not sure which bridge device should load up and manage the
 sub-devices. If V4L2 manages the sub-devices, how can the FBDev driver
 set the timings in the current sub-device, since it has no knowledge
 of the v4l2 device and the sub-device it owns/manages?
 
 You should not attempt to artificially separate the two. You can't
 since both
 v4l and fb share the same hardware. You need one v4l driver that
 will take
 care of both and the FB driver just delegates the core OSD low-level
 work to
 the v4l driver.
 
 
  
  Much better to let the two cooperate: you can use both APIs, but
 you can't
  change the resolution in the fb if streaming is going on, and you
 can't
  change the output standard of a video device node if that changes
 the
  resolution while the framebuffer is in use.
  That is what I mean by use case 3). We can live with the
 restriction. But sub device model currently is v4l2

Re: RFCv2: Media controller proposal

2009-09-11 Thread Hans Verkuil
On Friday 11 September 2009 08:16:34 Hiremath, Vaibhav wrote:
 
  -Original Message-
  From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
  ow...@vger.kernel.org] On Behalf Of Hans Verkuil
  Sent: Thursday, September 10, 2009 12:43 PM
  To: linux-media@vger.kernel.org
  Subject: RFCv2: Media controller proposal
 
  Hi all,
 
  Here is the new Media Controller RFC. It is completely rewritten
  from the
  original RFC. This original RFC can be found here:
 
  http://www.archivum.info/video4linux-list%40redhat.com/2008-
  07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_m
  edia_device
 
 [Hiremath, Vaibhav] I can see the implementation has changed/evolved a lot
 here since the last RFC.

Yes it has. The global idea remains the same, but at the time we didn't have
sub-devices and that is (not entirely accidentally) a perfect match for what
we need here.

 I added some quick comments below; I'll try to provide more during the weekend.
 
  This document will be the basis of the discussions during the
  Plumbers
  Conference in two weeks time.
 
  Open issue #3 is the main unresolved item, but I hope to come up
  with something
  during the weekend.
 
  Regards,
 
Hans
 
 
  RFC: Media controller proposal
 
  Version 2.0
 
   Background
   ==========
 
   This RFC is a new version of the original RFC that was written in
   cooperation with and on behalf of Texas Instruments about a year ago.
 
   Much work has been done in the past year to put the foundation in place
   to be able to implement a media controller, and now it is time for this
   updated version. The intention is to discuss this in more detail during
   this year's Plumbers Conference.
 
   Although the high-level concepts are the same as in the original RFC,
   many of the details have changed based on what was learned over the past
   year.
 
   This RFC is based on the original discussions with Manjunath Hadli from
   TI last year, on discussions during a recent meeting between Laurent
   Pinchart, Guennadi Liakhovetski and myself, and on recent discussions
   with Nokia. Thanks to Sakari Ailus for doing an initial review of this
   RFC.
 
   One note regarding terminology: a 'board' is the name I use for the SoC,
   PCI or USB device that contains the video hardware. Each board has its
   own driver instance and its own v4l2_device struct. Originally I called
   it 'device', but that name is already used in too many places.
 
 
   What is a media controller?
   ===========================
 
   In a nutshell: a media controller is a new v4l device node that can be
   used to discover and modify the topology of the board and to give access
   to the low-level nodes (such as previewers, resizers, color space
   converters, etc.) that are part of the topology.
 
   It does not do any streaming; that is the exclusive domain of video
   nodes. It is meant purely for controlling a board as a whole.
 
 
   Why do we need one?
   ===================
 
   There are currently several problems that are impossible to solve within
   the current V4L2 API:
 
   1) Discovering the various device nodes that are typically created by a
   video board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes,
   framebuffer nodes, input nodes (for e.g. webcam button events or IR
   remotes).
 
   It would be very handy if an application can just open a /dev/v4l/mc0
   node and be able to figure out where all the nodes are, and to be able
   to figure out what the capabilities of the board are (e.g. does it
   support DVB, is the audio going through a loopback cable or is there an
   alsa device, can it do compressed MPEG video, etc. etc.). Currently the
   end-user has no choice but to supply the device nodes manually.
 
 [Hiremath, Vaibhav] I am still confused here. Can we take one common use
 case: for example, say a video board has one /dev/fb0 and one /dev/video0,
 and along with that we have one node for the media controller,
 /dev/v4l/mc0.
 
 How are we interacting or talking to /dev/fb0 through the media
 controller node?
 
 I looked into the presentation you created for LPC, I guess, but I am
 still not clear on this.

The media controller will just tell the application that there is a framebuffer
device and where that node can be found in /dev. In addition, it will show how
it is connected to some sub-device and possibly you can dynamically connect it
to another sub-device instead.

To access the actual framebuffer you still need to go to fbX. That will never
change. The media controller provides the high-level control you need to hook
an OSD up to different outputs for example.
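
As a rough illustration of what that discovery could look like from
userspace (the ioctl name and struct below are hypothetical placeholders,
the RFC does not define them yet):

  /* Hypothetical sketch: enumerate board entities via the media
   * controller node. VIDIOC_MC_ENUM_ENTITIES and struct mc_entity are
   * invented names for illustration only. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/types.h>

  struct mc_entity {
          __u32 id;           /* entity ID, used as enumeration index */
          __u32 type;         /* v4l node, fb node, alsa device, ... */
          __u32 major, minor; /* device node numbers, if any */
          char  name[32];     /* description, e.g. "OSD framebuffer" */
  };

  int mc = open("/dev/v4l/mc0", O_RDWR);
  struct mc_entity ent;

  for (ent.id = 0; ioctl(mc, VIDIOC_MC_ENUM_ENTITIES, &ent) == 0; ent.id++)
          printf("entity %u: %s (dev %u:%u)\n",
                 ent.id, ent.name, ent.major, ent.minor);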

This also means that the v4l driver should have knowledge of (and probably
implement) the OSD. See also the RFC thread with Murali.
 

snip

  The idea is this:
 
  // Select a particular target entity
  ioctl(mc, VIDIOC_S_SUBDEV, entityID);
  // Send S_FMT directly to that entity
  ioctl(mc, VIDIOC_S_FMT, fmt);
  // Send a custom ioctl to that entity
  ioctl(mc, SOME_CUSTOM_IOCTL, arg);

Re: RFCv2: Media controller proposal

2009-09-11 Thread Mauro Carvalho Chehab
On Thu, 10 Sep 2009 16:27:20 -0400
Devin Heitmueller dheitmuel...@kernellabs.com wrote:

 On Thu, Sep 10, 2009 at 4:20 PM, Mauro Carvalho
 Chehabmche...@infradead.org wrote:
  In fact, this can already be done by using the sysfs interface. the current
  version of v4l2-sysfs-path.c already enumerates the associated nodes to
  a /dev/video device, by just navigating at the already existing device
  description nodes at sysfs. I hadn't tried yet, but I bet that a similar 
  kind
  of topology can be obtained from a dvb device (probably, we need to do some
  adjustments).
 
 For the audio case, I did some digging into this a bit and It's worth
 noting that this behavior varies by driver (at least on USB).  In some
 cases, the parent points to the USB device, in other cases it points
 to the USB interface.  My original thought was to pick one or the
 other and make the various drivers consistent, but even that is a
 challenge since in some cases the audio device was provided by
 snd-usb-audio (which has no knowledge of the v4l subsystem).

We may consider adding a quirk to snd-usb-audio for em28xx devices, in order
to create the proper sysfs nodes.

Cheers,
Mauro


Re: RFCv2: Media controller proposal

2009-09-11 Thread Mauro Carvalho Chehab
On Thu, 10 Sep 2009 23:35:52 +0200
Hans Verkuil hverk...@xs4all.nl wrote:

  First of all, a generic comment: you enumerated on your RFC several needs 
  that
  you expect to be solved with a media controller, but you didn't mention what
  userspace API will be used to solve it (e. g. what ioctls, sysfs interfaces,
  etc). As this is missing, I'm adding a few notes about how this can be
  implemented. For example, as I've already pointed when you sent the first
  proposal and at LPC, sysfs is the proper kernel API for enumerating things.
 
 I hate sysfs with a passion. All of the V4L2 API is designed around ioctls,
 and so is the media controller.
 
 Note that I did not go into too much implementation detail in this RFC. The
 best way to do that is by trying to implement it. Only after implementing it
 for a few drivers will you get a real feel of what works and what doesn't.
 
 Of course, whether to use sysfs or ioctls is something that has to be designed
 beforehand.

   1) Discovering the various device nodes that are typically created by a 
   video
   board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
   nodes, input nodes (for e.g. webcam button events or IR remotes).
  
  In fact, this can already be done by using the sysfs interface. the current
  version of v4l2-sysfs-path.c already enumerates the associated nodes to
  a /dev/video device, by just navigating at the already existing device
  description nodes at sysfs. I hadn't tried yet, but I bet that a similar 
  kind
  of topology can be obtained from a dvb device (probably, we need to do some
  adjustments).
 
 sysfs is crap. It's a poorly documented public API that is hell to use. Take
 a device node entity as enumerated by the media controller: I want to provide
 the application with information like the sort of node (alsa, fb, v4l, etc),
 how to access it (alsa card nr or major/minor), a description (Captured MPEG
 stream), possibly some capabilities and additional data. With an ENUM ioctl
 you can just call it. With sysfs you have to open/read/close files for each of
 these properties, walk through the tree to find related alsa/v4l/fb devices,

sysfs is a hierarchical description of the kernel objects, used to describe
devices, buses, sub-devices, etc. Navigating it, reading, etc. is very fast,
since it is done in RAM, as described in:

http://lwn.net/Articles/31185/

Unfortunately, it was designed after the V4L2 API; otherwise, probably
several things in the API would be different.

Of course, we need to properly document the media controller sysfs nodes at 
V4L2.

 and in drivers you must write a hell of a lot of code just to make those sysfs
 nodes. It's an uncontrollable mess.

Huh? How much sysfs code is currently present in the drivers? None. Yet you
can already enumerate several things, as shown with v4l2-sysfs-path, since
the V4L2 core already has the code implementing it. Of course, if you want
to have a customized set of nodes for changing some attributes, you'll need
to tell sysfs the name of the attribute and have a get/set pair of methods.
Nothing different from what we currently have. As a matter of fact, it is
even simpler, since you don't need to add an enum method.

So, it is the proper Kernel API for the objectives you described. Doing it via
ioctl will duplicate things, since the sysfs stuff will still be there, and
will use a wrong API.

So, we should use sysfs for media controller.

  The big missing component is an userspace library that will properly return 
  the
  device components to the applications. Maybe we need to do also some
  adjustments at the sysfs nodes to represent all that it is needed.
 
 So we write a userspace library that collects all that information? So that
 has to:
 
 1) walk through the sysfs tree trying to find all the related parts of the
 media board.
 2) open the property that we are interested in.
 3) attempt to read the property's value.
 4) the driver will then copy that value into a buffer that is returned to the
 application, usually through a sprintf() call.
 5) the library then uses atol() to convert the string back to an integer and
 stores the result in a struct.
 6) repeat for all properties.
 
 Isn't that the same as calling an enum ioctl() with a struct pointer? Except
 a zillion times slower and more obfuscated?

You'll need a similar process with enum to get each value. Also, by using
sysfs it is easy to write udev rules so that, once a new sysfs node is
created, some action is started, for example setting the board to the
needed configuration.

  The better approach would be to create a /sys/class/media node, and to
  have the media controllers under that node. So, mc0 will be at
  /sys/class/media/mc0.
 
 Why? It's a device. Devices belong in /dev. That's where applications and 
 users
 look for devices. Not in sysfs.

A device is something that does some sort of input/output transfer,
not something that controls the board.

Re: RFCv2: Media controller proposal

2009-09-11 Thread Devin Heitmueller
On Fri, Sep 11, 2009 at 11:13 AM, Mauro Carvalho Chehab
mche...@infradead.org wrote:
 On Thu, 10 Sep 2009 23:35:52 +0200
 Hans Verkuil hverk...@xs4all.nl wrote:

  First of all, a generic comment: you enumerated on your RFC several needs 
  that
  you expect to be solved with a media controller, but you didn't mention 
  what
  userspace API will be used to solve it (e. g. what ioctls, sysfs 
  interfaces,
  etc). As this is missing, I'm adding a few notes about how this can be
  implemented. For example, as I've already pointed when you sent the first
  proposal and at LPC, sysfs is the proper kernel API for enumerating things.

 I hate sysfs with a passion. All of the V4L2 API is designed around ioctls,
 and so is the media controller.

 Note that I did not go into too much implementation detail in this RFC. The
 best way to do that is by trying to implement it. Only after implementing it
 for a few drivers will you get a real feel of what works and what doesn't.

 Of course, whether to use sysfs or ioctls is something that has to be 
 designed
 beforehand.

   1) Discovering the various device nodes that are typically created by a 
   video
   board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, 
   framebuffer
   nodes, input nodes (for e.g. webcam button events or IR remotes).
 
  In fact, this can already be done by using the sysfs interface. the current
  version of v4l2-sysfs-path.c already enumerates the associated nodes to
  a /dev/video device, by just navigating at the already existing device
  description nodes at sysfs. I hadn't tried yet, but I bet that a similar 
  kind
  of topology can be obtained from a dvb device (probably, we need to do some
  adjustments).

 sysfs is crap. It's a poorly documented public API that is hell to use. Take
 a device node entity as enumerated by the media controller: I want to provide
 the application with information like the sort of node (alsa, fb, v4l, etc),
 how to access it (alsa card nr or major/minor), a description (Captured MPEG
 stream), possibly some capabilities and additional data. With an ENUM ioctl
 you can just call it. With sysfs you have to open/read/close files for each 
 of
 these properties, walk through the tree to find related alsa/v4l/fb devices,

 sysfs is a hierarchical description of the kernel objects, used to describe
 devices, buses, sub-devices, etc. Navigating it, reading, etc. is very fast,
 since it is done in RAM, as described in:

        http://lwn.net/Articles/31185/

 Unfortunately, it was designed after the V4L2 API; otherwise, probably
 several things in the API would be different.

 Of course, we need to properly document the media controller sysfs nodes at 
 V4L2.

 and in drivers you must write a hell of a lot of code just to make those 
 sysfs
 nodes. It's an uncontrollable mess.

 Huh? How much sysfs code is currently present in the drivers? None. Yet
 you can already enumerate several things, as shown with v4l2-sysfs-path,
 since the V4L2 core already has the code implementing it. Of course, if
 you want to have a customized set of nodes for changing some attributes,
 you'll need to tell sysfs the name of the attribute and have a get/set
 pair of methods. Nothing different from what we currently have. As a
 matter of fact, it is even simpler, since you don't need to add an enum
 method.

 So, it is the proper Kernel API for the objectives you described. Doing it via
 ioctl will duplicate things, since the sysfs stuff will still be there, and
 will use a wrong API.

 So, we should use sysfs for media controller.

  The big missing component is an userspace library that will properly 
  return the
  device components to the applications. Maybe we need to do also some
  adjustments at the sysfs nodes to represent all that it is needed.

 So we write a userspace library that collects all that information? So that
 has to:

 1) walk through the sysfs tree trying to find all the related parts of the
 media board.
 2) open the property that we are interested in.
 3) attempt to read the property's value.
 4) the driver will then copy that value into a buffer that is returned to the
 application, usually through a sprintf() call.
 5) the library then uses atol() to convert the string back to an integer and
 stores the result in a struct.
 6) repeat for all properties.

 Isn't that the same as calling an enum ioctl() with a struct pointer? Except
 a zillion times slower and more obfuscated?

 You'll need a similar process with enum to get each value. Also, by using
 sysfs it is easy to write udev rules so that, once a new sysfs node is
 created, some action is started, for example setting the board to the
 needed configuration.

  The better approach would be to create a /sys/class/media node, and to
  have the media controllers under that node. So, mc0 will be at
  /sys/class/media/mc0.

 Why? It's a device. Devices belong in /dev. That's where applications and
 users look for devices. Not in sysfs.

RE: RFCv2: Media controller proposal

2009-09-11 Thread Hiremath, Vaibhav
 -Original Message-
 From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
 ow...@vger.kernel.org] On Behalf Of Devin Heitmueller
 Sent: Friday, September 11, 2009 9:16 PM
 To: Mauro Carvalho Chehab
 Cc: Hans Verkuil; linux-media@vger.kernel.org
 Subject: Re: RFCv2: Media controller proposal
 
 On Fri, Sep 11, 2009 at 11:13 AM, Mauro Carvalho Chehab
 mche...@infradead.org wrote:
  On Thu, 10 Sep 2009 23:35:52 +0200
  Hans Verkuil hverk...@xs4all.nl wrote:
 
snip
 
  I was talking not about specific attributes, but about the V4L2
 API controls
  that you may eventually need to hijack (using that context-
 sensitive
  thread-unsafe approach you described).
 
  Anyway, by using sysfs, you won't have any thread issues, since
 you'll be able
  to address each sub-device individually:
 
   echo 1 > /sys/class/media/mc0/video:dsp0/enable_stats
 
 
 
  Cheers,
  Mauro
 
 Mauro,
 
 Please, *seriously* reconsider the notion of making sysfs a
 dependency
 of V4L.  While sysfs is great for a developer who wants to poke
 around
 at various properties from a command line during debugging, it is an
 absolute nightmare for any developer who wants to write an
 application
 in C that is expected to actually use the interface.  The amount of
 extra code for all the string parsing alone would be ridiculous
 (think
 of how many calls you're going to have to make to sscanf or atoi).
 It's so much more straightforward to be able to have ioctl() calls
 that can return an actual struct with nice things like enumeration
 data types etc.
 
 Just my opinion, of course.
 
[Hiremath, Vaibhav] Mauro,

The SYSFS interface is definitely a nightmare for the application developer,
and again we have not thought about backward compatibility here.

How would an application know/decide which nodes exist? Every video board
will have its own separate way of creating SYSFS nodes, and maintaining a
standard between them would be a real mess.

There has to be an enumeration kind of interface to make standard
applications work seamlessly.

Thanks,
Vaibhav

 Devin
 
 --
 Devin J. Heitmueller - Kernel Labs
 http://www.kernellabs.com



Re: RFCv2: Media controller proposal

2009-09-11 Thread Mauro Carvalho Chehab
On Fri, 11 Sep 2009 21:23:50 +0530
Hiremath, Vaibhav hvaib...@ti.com wrote:

  -Original Message-
  From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
  ow...@vger.kernel.org] On Behalf Of Devin Heitmueller
  Sent: Friday, September 11, 2009 9:16 PM
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil; linux-media@vger.kernel.org
  Subject: Re: RFCv2: Media controller proposal
  
  On Fri, Sep 11, 2009 at 11:13 AM, Mauro Carvalho Chehab
  mche...@infradead.org wrote:
    On Thu, 10 Sep 2009 23:35:52 +0200
    Hans Verkuil hverk...@xs4all.nl wrote:
  
 snip
  
   I was talking not about specific attributes, but about the V4L2
  API controls
   that you may eventually need to hijack (using that context-
  sensitive
   thread-unsafe approach you described).
  
   Anyway, by using sysfs, you won't have any thread issues, since
  you'll be able
   to address each sub-device individually:
  
    echo 1 > /sys/class/media/mc0/video:dsp0/enable_stats
  
  
  
   Cheers,
   Mauro
  
  Mauro,
  
  Please, *seriously* reconsider the notion of making sysfs a
  dependency
  of V4L.  While sysfs is great for a developer who wants to poke
  around
  at various properties from a command line during debugging, it is an
  absolute nightmare for any developer who wants to write an
  application
  in C that is expected to actually use the interface.  The amount of
  extra code for all the string parsing alone would be ridiculous
  (think
  of how many calls you're going to have to make to sscanf or atoi).
  It's so much more straightforward to be able to have ioctl() calls
  that can return an actual struct with nice things like enumeration
  data types etc.

The complexity of the interface will greatly depend on the way things are
mapped there and on the number of tree levels used. Also, as sysfs accepts
soft links, we may have the same node pointed to from different places.
This can be useful to improve speed.

In order to have something optimized for applications, we can imagine
having, for example, under /sys/class/media/mc0/subdevs, links to all the
several subdevs, like:

video:vin0
video:vin1
audio:audio0
audio:audio1
dsp:dsp0
dsp:dsp0
dvb:adapter0
i2c:vin0:tvp5150
...

each of them being a link to some specific sysfs node, all of this created
by the V4L2 core, to be sure that all devices will implement it in the
standard way.

If some parameter should be bound, for example at video input device 0,
you just need to write to a node like:
/sys/class/media/mc0/subdevs/attr/attribute

(all the above names are just examples - we'll need to properly define the
sysfs tree we need to fulfill the requirements).
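
For comparison, a minimal C sketch of such a set operation as an
application would do it (the node path is the placeholder example above,
not a defined attribute):

  /* Sketch: write one value to a sysfs attribute node. */
  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  static int set_attr(const char *path, const char *value)
  {
          int fd = open(path, O_WRONLY);
          ssize_t n;

          if (fd < 0)
                  return -1;
          n = write(fd, value, strlen(value));
          close(fd);
          return n < 0 ? -1 : 0;
  }

  /* e.g. set_attr("/sys/class/media/mc0/subdevs/attr/attribute", "1"); */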

Also, it should be noticed that you'll need to use sysfs anyway, to get subdev's
major/minor numbers and to associate them with a file name under /dev.

  
  Just my opinion, of course.
  
 [Hiremath, Vaibhav] Mauro,
 
  The SYSFS interface is definitely a nightmare for the application
  developer, and again we have not thought about backward compatibility here.

What do you mean by backward compatibility? An application using the standard
V4L2 API will keep working, but if they'll use the media controller sysfs, 
they'll have
extra functionality.

I'm not saying that we should use what we currently have, but to use sysfs to
create standard classes (and/or buses) that fulfill the needs for media
controller to match the RFC requirements.

  How would an application know/decide which nodes exist? Every video board
  will have its own separate way of creating SYSFS nodes, and maintaining a
  standard between them would be a real mess.

Yes, but none currently have a media controller node. As sysfs provides links,
we can link the media controller to the old nodes or vice versa (for the few
devices that already have their proper nodes).

  There has to be an enumeration kind of interface to make standard
  applications work seamlessly.

That's for sure.



Cheers,
Mauro


RE: RFCv2: Media controller proposal

2009-09-11 Thread Hiremath, Vaibhav

 -Original Message-
 From: Mauro Carvalho Chehab [mailto:mche...@infradead.org]
 Sent: Friday, September 11, 2009 10:34 PM
 To: Hiremath, Vaibhav
 Cc: Devin Heitmueller; Hans Verkuil; linux-media@vger.kernel.org
 Subject: Re: RFCv2: Media controller proposal
 
 On Fri, 11 Sep 2009 21:23:50 +0530
 Hiremath, Vaibhav hvaib...@ti.com wrote:
 
   -Original Message-
   From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
   ow...@vger.kernel.org] On Behalf Of Devin Heitmueller
   Sent: Friday, September 11, 2009 9:16 PM
   To: Mauro Carvalho Chehab
   Cc: Hans Verkuil; linux-media@vger.kernel.org
   Subject: Re: RFCv2: Media controller proposal
  
   On Fri, Sep 11, 2009 at 11:13 AM, Mauro Carvalho Chehab
   mche...@infradead.org wrote:
On Thu, 10 Sep 2009 23:35:52 +0200
Hans Verkuil hverk...@xs4all.nl wrote:
   
  snip
   
I was talking not about specific attributes, but about the
 V4L2
   API controls
that you may eventually need to hijack (using that context-
   sensitive
thread-unsafe approach you described).
   
Anyway, by using sysfs, you won't have any thread issues,
 since
   you'll be able
to address each sub-device individually:
   
 echo 1 > /sys/class/media/mc0/video:dsp0/enable_stats
   
   
   
Cheers,
Mauro
  
   Mauro,
  
   Please, *seriously* reconsider the notion of making sysfs a
   dependency
   of V4L.  While sysfs is great for a developer who wants to poke
   around
   at various properties from a command line during debugging, it
 is an
   absolute nightmare for any developer who wants to write an
   application
   in C that is expected to actually use the interface.  The amount
 of
   extra code for all the string parsing alone would be ridiculous
   (think
   of how many calls you're going to have to make to sscanf or
 atoi).
   It's so much more straightforward to be able to have ioctl()
 calls
   that can return an actual struct with nice things like
 enumeration
   data types etc.
 
 The complexity of the interface will greatly depend on the way things
 are mapped there and on the number of tree levels used. Also, as sysfs
 accepts soft links, we may have the same node pointed to from different
 places. This can be useful to improve speed.
 
 In order to have something optimized for application, we can imagine
 having,
 for example, under /sys/class/media/mc0/subdevs, links to all the
 several subdevs,
 like:
 
   video:vin0
   video:vin1
   audio:audio0
   audio:audio1
   dsp:dsp0
   dsp:dsp0
   dvb:adapter0
   i2c:vin0:tvp5150
   ...
 
 each of them being a link to some specific sysfs node, all of this
 created by
 V4L2 core, to be sure that all devices will implement it at the
 standard way.
 
 If some parameter should be bound, for example at video input device 0,
 you just need to write to a node like:
   /sys/class/media/mc0/subdevs/attr/attribute
 
 (all the above names are just examples - we'll need to properly
 define the
 sysfs tree we need to fulfill the requirements).
 
 Also, it should be noticed that you'll need to use sysfs anyway, to
 get subdev's
 major/minor numbers and to associate them with a file name under
 /dev.
 
  
   Just my opinion, of course.
  
  [Hiremath, Vaibhav] Mauro,
 
  The SYSFS interface is definitely a nightmare for the application
 developer, and again we have not thought about backward compatibility
 here.
 
 What do you mean by backward compatibility? An application using the
 standard
 V4L2 API will keep working, but if they'll use the media controller
 sysfs, they'll have
 extra functionality.
 
[Hiremath, Vaibhav] I was referring to the standard V4L2 interface; I was
referring to backward compatibility between media controller devices
themselves.

Have you thought of custom parameter configuration? For example, the
H3A (20) and Resizer (64) sub-devices will have coefficients, which are
non-standard (we had some discussion in the past).

With the SYSFS approach it is really difficult to pass big parameters to a
sub-device, which we can easily achieve using IOCTL.
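
(To illustrate, a hypothetical sketch; the ioctl number, struct layout and
the count of 64 coefficients are invented here, not an existing TI
interface:)

  /* Hypothetical private ioctl moving a whole coefficient table in one
   * call, something that is awkward to express as sysfs text writes.
   * 192 is the historical BASE_VIDIOC_PRIVATE offset. */
  #include <sys/ioctl.h>
  #include <linux/types.h>

  struct rsz_coeffs {
          __u32 count;     /* number of valid entries */
          __s16 coeff[64]; /* filter coefficients */
  };

  #define VIDIOC_RSZ_S_COEFFS _IOW('V', 192, struct rsz_coeffs)

  /* struct rsz_coeffs c = { .count = 64 };
   * ... fill c.coeff[] ...
   * ioctl(fd, VIDIOC_RSZ_S_COEFFS, &c);  one call, applied atomically */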

Thanks,
Vaibhav
 I'm not saying that we should use what we currently have, but to use
 sysfs to
 create standard classes (and/or buses) that fulfill the needs for
 media
 controller to match the RFC requirements.
 
  How application would know/decide on which node is exist and
 stuff? Every video board will have his separate way of notions for
 creating SYSFS nodes and maintaining standard between them would be
 really mess.
 
 Yes, but none currently have a media controller node. As sysfs
 provides links,
 we can link the media controller to the old nodes or vice versa (for
 the few
 devices that already have their proper nodes).
 
  There has to be enumeration kind of interface to make standard
 application work seamlessly.
 
 That's for sure.
 
 
 
 Cheers,
 Mauro


Re: RFCv2: Media controller proposal

2009-09-11 Thread Mauro Carvalho Chehab
On Fri, 11 Sep 2009 23:04:13 +0530
Hiremath, Vaibhav hvaib...@ti.com wrote:

 [Hiremath, Vaibhav] I was referring to the standard V4L2 interface; I was
 referring to backward compatibility between media controller devices
 themselves.

Huh? There's no media controller concept implemented yet. Hans' proposal is
to add a new API to enumerate devices, not to replace what currently exists.
 
 Have you thought of custom parameter configuration? For example, the
 H3A (20) and Resizer (64) sub-devices will have coefficients, which are
 non-standard (we had some discussion in the past).
 

I'm not saying that all new features should be implemented via sysfs. I'm
just saying that sysfs is the way the Linux kernel describes device
topology, and, due to that, it is the interface that applies under the
media controller proposal.

In the case of resizer, I don't see why this can't be implemented as an ioctl
over /dev/video device.

 With the SYSFS approach it is really difficult to pass big parameters to
 a sub-device, which we can easily achieve using IOCTL.

I didn't get your point here. With sysfs you can pass everything, even a
mix of strings and numbers, since the get operation can be parsed via
sscanf and the set generated via sprintf (this doesn't mean that this is
the recommended way to use it).

For example, on kernel 2.6.31, we have the complete hda audio driver pin
setup by reading just one var:

# cat /sys/class/sound/hwC0D0/init_pin_configs
0x11 0x02214040
0x12 0x01014010
0x13 0x991301f0
0x14 0x02a19020
0x15 0x01813030
0x16 0x413301f0
0x17 0x41a601f0
0x18 0x41a601f0
0x1a 0x41f301f0
0x1b 0x414511f0
0x1c 0x41a190f0

If you want to alter PIN 0x15 output config, all you need to do is:

# echo 0x15 0x02214040 > /sys/class/sound/hwC0D0/user_pin_configs
(or open /sys/class/sound/hwC0D0/user_pin_configs and write 0x15 0x02214040
to it)

And to reset to init config:
# echo 1 > /sys/class/sound/hwC0D0/clear

One big advantage is that you can have a shell script do the needed setup,
automatically called by some udev rule, without needing to write a single
line of code. So, for those advanced configuration parameters that don't
change (for example board xtal speeds), you don't need to code them in
your application. Yet, you can do it there, if needed.
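
For reference, the parsing an application needs for that text format is
small; a minimal sketch of reading the pin list shown above:

  /* Sketch: parse "0xNN 0xNNNNNNNN" lines from init_pin_configs. */
  #include <stdio.h>

  static int dump_pins(const char *path)
  {
          FILE *f = fopen(path, "r");
          unsigned int nid, cfg;

          if (!f)
                  return -1;
          while (fscanf(f, "%x %x", &nid, &cfg) == 2)
                  printf("pin 0x%02x -> 0x%08x\n", nid, cfg);
          fclose(f);
          return 0;
  }

  /* e.g. dump_pins("/sys/class/sound/hwC0D0/init_pin_configs"); */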

Cheers,
Mauro


Re: RFCv2: Media controller proposal

2009-09-11 Thread Hans Verkuil
Mauro,

I am going to move the ioctl vs sysfs discussion to a separate thread. I'll
post an analysis of that later today or tomorrow.

See below for my comments on some misunderstandings about non-sysfs issues.

On Friday 11 September 2009 17:13:42 Mauro Carvalho Chehab wrote:

snip

All this requires that there has to be a way to connect and disconnect 
parts
of the internal topology of a video board at will.
   
   We should design this with care, since each change at the internal 
   topology may
   create/delete devices.
  
  No, devices aren't created or deleted. Only links between devices.
 
 I think that there are some cases where devices are created/deleted. For
 example, on some hardware, you have some blocks that allow you to have either 
 4 SD
 video inputs or 1 HD video input. So, if you change the type of input, you'll
 end by creating or deleting devices.

Normally you will create four device nodes, but if you switch to HD mode,
then only one is active and attempting to use the others will return EBUSY
or something. That's what the davinci driver does.
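
From the application side that model is easy to probe; a minimal sketch
(node names are examples, and whether open() or a later ioctl reports the
error is driver-specific):

  /* Sketch: probe the four SD capture nodes; in HD mode the inactive
   * ones are expected to fail with EBUSY. */
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  for (int i = 0; i < 4; i++) {
          char node[32];
          int fd;

          snprintf(node, sizeof(node), "/dev/video%d", i);
          fd = open(node, O_RDWR);
          if (fd < 0) {
                  printf("%s: %s\n", node,
                         errno == EBUSY ? "inactive (EBUSY)" : "open error");
                  continue;
          }
          printf("%s: usable\n", node);
          close(fd);
  }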

Creating and deleting device nodes depending on the mode makes a driver very
complex and the application as well. Say you are in SD mode and you have nodes
video[0-3], now you switch to HD mode and you have only node video0. You go
back to SD mode and you may end up with nodes video0 and video[2-4] if in the
meantime the user connected a USB webcam which took up video1.

Just create them upfront. You know beforehand what the maximum number of video
nodes is since that is determined by the hardware. Let's keep things simple.
Media boards are getting very, very complex and we should keep away from adding
unnecessary further complications.

And yes, I too can generate hypothetical situations where this might be needed.
But that's something we can tackle when it arrives.

 
   If you do such changes at topology, udev will need to 
   delete the old devices and create the new ones. 
  
  udev is not involved at all. Exception: open issue #2 suggests that we
  dynamically register device nodes when they are first linked to some source
  or sink. That would involve udev.
  
  All devices are setup when the board is configured. But the links between
  them can be changed. This is nothing more than bringing the board's block
  diagram to life: each square of the diagram (video device node, resizer, 
  video
  encoder or decoder) is a v4l2-subdev with inputs and outputs. And in some 
  cases
  you can change links dynamically (in effect this will change a mutex 
  register).
 
 See above. If you're grouping 4 A/D blocks into one A/D for handling HD, 
 you're
 doing more than just changing links, since the HD device will be just one
 device: one STD, one video input mux, one audio input mux, etc.

So? You will just deactivate three SD device nodes. I don't see a problem with
that, and that concept has already been proven to work in the davinci driver.
 
   This will happen on separate 
   threads and may cause locking issues at the device, especially since you 
   can be
   modifying several components at the same time (being even possible to do 
   it on
   separate threads).
  
  This is definitely not something that should be allowed while streaming. I
  would like to hear from e.g. TI whether this could be a problem or not. I
  suspect that it isn't a problem unless streaming is in progress.
 
 Even when streaming, provided that you don't touch the used IC blocks, it
 should be possible to reconfigure the unused parts. It is just a matter of
 having the right resource locks in the driver.

As you say, this will depend on the driver. Some may be able to do this,
others may just return -EBUSY. I would do the latter, personally, since
allowing this would just make the driver more complicated for IMHO little
gain.
 
   I've seen some high-end core network routers that implement topology
   changes in an interesting way: any changes done are not immediately
   applied at the node, but are stored into a file, where the configuration
   can be changed anytime. However, the topology changes only happen after
   giving a commit command. After commit, it validates the new config and
   applies it atomically (i.e. either all changes are applied or none), to
   avoid bad effects that intermediate changes could cause.
   
   As we are in kernelspace, we need to take care not to create a very
   complex interface. Yet, the idea of applying the new topology atomically
   seems interesting.
  
  I see no need for it. At least, not for any of the current or forthcoming
  devices that I am aware of. Should it ever be needed, then we can introduce 
  a
  'shadow topology' in the future. You can change the shadow links and when 
  done
  commit it. That wouldn't be too difficult and we can easily prepare for that
  eventuality (e.g. have some 'flags' field available where you can set a 
  SHADOW
  flag in the future).
 
   Alsa is 

Re: RFCv2: Media controller proposal

2009-09-11 Thread Hans Verkuil
On Friday 11 September 2009 20:52:17 Mauro Carvalho Chehab wrote:
 On Fri, 11 Sep 2009 23:04:13 +0530
 Hiremath, Vaibhav hvaib...@ti.com wrote:
 
  [Hiremath, Vaibhav] I was referring to the standard V4L2 interface; I was
  referring to backward compatibility between media controller devices
  themselves.
 
 Huh? There's no media controller concept implemented yet. Hans' proposal
 is to add a new API to enumerate devices, not to replace what currently
 exists.
  
  Have you thought of custom parameter configuration? For example, the
  H3A (20) and Resizer (64) sub-devices will have coefficients, which are
  non-standard (we had some discussion in the past).
  
 
 I'm not saying that all new features should be implemented via sysfs. I'm
 just saying that sysfs is the way the Linux kernel describes device
 topology, and, due to that, it is the interface that applies under the
 media controller proposal.
 
 In the case of resizer, I don't see why this can't be implemented as an ioctl
 over /dev/video device.

Well, no. Not in general. There are two problems. The first problem occurs if
you have multiple instances of a resizer (OK, not likely, but you *can* have
multiple video encoders or decoders or sensors). If all you have is the
streaming device node, then you cannot select to which resizer (or video
encoder) the ioctl should go. The media controller allows you to select the
recipient of the ioctl explicitly. Thus providing the control that these
applications need.

The second problem is that this will pollute the 'namespace' of a v4l device
node. Device drivers need to pass all those private ioctls to the right
sub-device. But they shouldn't have to care about that. If someone wants to
tweak the resizer (e.g. scaling coefficients), then pass it straight to the
resizer component.
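
Reusing the select-then-send pattern sketched earlier in this thread (the
second ioctl name and the entity ID are invented placeholders):

  /* Sketch: direct a private ioctl at one specific resizer instance. */
  ioctl(mc, VIDIOC_S_SUBDEV, resizer1_id);   /* pick the recipient entity */
  ioctl(mc, VIDIOC_RSZ_S_COEFFS, &coeffs);   /* delivered to that resizer only */

This also keeps the private ioctl out of the video device node's namespace:
the bridge driver never has to see it.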

Regards,

Hans

 
  With the SYSFS approach it is really difficult to pass big parameters to
  a sub-device, which we can easily achieve using IOCTL.
 
 I didn't get your point here. With sysfs you can pass everything, even a
 mix of strings and numbers, since the get operation can be parsed via
 sscanf and the set generated via sprintf (this doesn't mean that this is
 the recommended way to use it).
 
 For example, on kernel 2.6.31, we have the complete hda audio driver pin
 setup by reading just one var:
 
 # cat /sys/class/sound/hwC0D0/init_pin_configs
 0x11 0x02214040
 0x12 0x01014010
 0x13 0x991301f0
 0x14 0x02a19020
 0x15 0x01813030
 0x16 0x413301f0
 0x17 0x41a601f0
 0x18 0x41a601f0
 0x1a 0x41f301f0
 0x1b 0x414511f0
 0x1c 0x41a190f0
 
 If you want to alter PIN 0x15 output config, all you need to do is:
 
 # echo 0x15 0x02214040 > /sys/class/sound/hwC0D0/user_pin_configs
 (or open /sys/class/sound/hwC0D0/user_pin_configs and write 0x15 0x02214040
 to it)
 
 And to reset to init config:
 # echo 1 > /sys/class/sound/hwC0D0/clear
 
 One big advantage is that you can have a shell script do the needed setup,
 automatically called by some udev rule, without needing to write a single
 line of code. So, for those advanced configuration parameters that don't
 change (for example board xtal speeds), you don't need to code them in
 your application. Yet, you can do it there, if needed.
 
 Cheers,
 Mauro
 



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


Re: RFCv2: Media controller proposal

2009-09-11 Thread Mauro Carvalho Chehab
On Fri, 11 Sep 2009 21:08:13 +0200
Hans Verkuil hverk...@xs4all.nl wrote:

   No, devices aren't created or deleted. Only links between devices.
  
  I think that there are some cases where devices are created/deleted. For
  example, on some hardware, you have some blocks that allow you to have 
  either 4 SD
  video inputs or 1 HD video input. So, if you change the type of input, 
  you'll
  end by creating or deleting devices.
 
 Normally you will create four device nodes, but if you switch to HD mode,
 then only one is active and attempting to use the others will return EBUSY
 or something. That's what the davinci driver does.
 
 Creating and deleting device nodes depending on the mode makes a driver very
 complex and the application as well. Say you are in SD mode and you have nodes
 video[0-3], now you switch to HD mode and you have only node video0. You go
 back to SD mode and you may end up with nodes video0 and video[2-4] if in the
 meantime the user connected a USB webcam which took up video1.
 
 Just create them upfront. You know beforehand what the maximum number of video
 nodes is since that is determined by the hardware. Let's keep things simple.
 Media boards are getting very, very complex and we should keep away from 
 adding
 unnecessary further complications.

Ok, we may start with this approach and move to a more complex one only if
needed. This should be properly documented to avoid misunderstandings.

  See above. If you're grouping 4 A/D blocks into one A/D for handling HD, 
  you're
  doing more than just changing links, since the HD device will be just one
  device: one STD, one video input mux, one audio input mux, etc.
 
 So? You will just deactivate three SD device nodes. I don't see a problem with
 that, and that concept has already been proven to work in the davinci driver.

If just disabling applies to all cases, I agree to stick with this idea.
The issue with enabling/disabling devices is that some complex hardware may
need to register a large number of devices to expose all the different
possibilities, with only a very few of them able to be enabled. Let's see
as time goes by.

To work like you said, this means that we'll need an enable attribute at
the corresponding sysfs entry.

It should be noted that, even without deleting a device, udev can still be
invoked. For example, a userspace application (like lirc) may need to be
started/stopped if you enable/disable the IR (or restarted on some topology
changes, like using a different IR protocol).

  Even when streaming, provided that you don't touch the used IC blocks,
  it should be possible to reconfigure the unused parts. It is just a
  matter of having the right resource locks in the driver.
 
 As you say, this will depend on the driver.

Yes.

 Some may be able to do this,
 others may just return -EBUSY. I would do the latter, personally, since
 allowing this would just make the driver more complicated for IMHO little
 gain.

Ok. Both approaches are valid. So the API should be able to support both ways,
providing a thread safe interface to userspace.

  It would be easy to implement something like:
  
  echo 1 > /sys/class/media/mc0/config_reload
  
  to call request_firmware() and load the new topology. This is enough to 
  have an
  atomic operation, and we don't need to implement a shadow config.
 
 OK, so instead we require an application to construct a file containing a new
 topology, write something to a sysfs file, require code in the v4l core to 
 load
 and parse that file, then find out which links have changed (since you really
 don't want to set all the links: there can be many, many links, believe me on
 that), and finally call the driver to tell it to change those links.

As I said before, the design should take into account how frequent those
changes are. If they are very infrequent, this approach works, and offers
one advantage: the topology will survive application crashes and warm/cold
reboots. If the changes are frequent, an approach like the audio
user_pin_configs works better (see my previous email - note that this
approach can be used for atomic operations if needed). You add to a sysfs
node just the dynamic changes you need. We may even have both ways, as alsa
seems to have (init_pin_configs and user_pin_configs).



Cheers,
Mauro


Re: RFCv2: Media controller proposal

2009-09-11 Thread Hans Verkuil
On Friday 11 September 2009 21:54:03 Mauro Carvalho Chehab wrote:
 On Fri, 11 Sep 2009 21:08:13 +0200
 Hans Verkuil hverk...@xs4all.nl wrote:

snip

  OK, so instead we require an application to construct a file containing a 
  new
  topology, write something to a sysfs file, require code in the v4l core to 
  load
  and parse that file, then find out which links have changed (since you 
  really
  don't want to set all the links: there can be many, many links, believe me 
  on
  that), and finally call the driver to tell it to change those links.
 
 As I said before, the design should take into account how frequent are those
 changes. If they are very infrequent, this approach works, and offers one
 advantage: the topology will survive to application crashes and warm/cold
 reboots. If the changes are frequent, an approach like the audio
 user_pin_configs work better (see my previous email - note that this approach
 can be used for atomic operations if needed). You add at a sysfs node just the
 dynamic changes you need. We may even have both ways, as alsa seems to have
 (init_pin_configs and user_pin_configs).

How frequent those changes are will depend entirely on the application.
Never underestimate the creativity of the end-users :-)

I think that a good worst case guideline would be 60 times per second.
Say for a surveillance type application that switches between video decoders
for each frame. Or some 3D type application that switches between two
sensors for each frame.

Of course, in the future you might want to get 3D done at 60 fps, meaning
that you have to switch between sensors 120 times per second.

One problem with media boards is that it is very hard to predict how they
will be used and what they will be capable of in the future.

Note that I am pretty sure that no application wants to have a media
board boot into an unpredicable initial topology. That would make life
very difficult for them.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: RFCv2: Media controller proposal

2009-09-11 Thread Mauro Carvalho Chehab
Em Fri, 11 Sep 2009 22:29:41 +0200
Hans Verkuil hverk...@xs4all.nl escreveu:

 On Friday 11 September 2009 21:54:03 Mauro Carvalho Chehab wrote:
  Em Fri, 11 Sep 2009 21:08:13 +0200
  Hans Verkuil hverk...@xs4all.nl escreveu:
 
 snip
 
   OK, so instead we require an application to construct a file containing a 
   new
   topology, write something to a sysfs file, require code in the v4l core 
   to load
   and parse that file, then find out which links have changed (since you 
   really
   don't want to set all the links: there can be many, many links, believe 
   me on
   that), and finally call the driver to tell it to change those links.
  
  As I said before, the design should take into account how frequent are those
  changes. If they are very infrequent, this approach works, and offers one
  advantage: the topology will survive to application crashes and warm/cold
  reboots. If the changes are frequent, an approach like the audio
  user_pin_configs work better (see my previous email - note that this 
  approach
  can be used for atomic operations if needed). You add at a sysfs node just 
  the
  dynamic changes you need. We may even have both ways, as alsa seems to have
  (init_pin_configs and user_pin_configs).
 
 How frequent those changes are will depend entirely on the application.
 Never underestimate the creativity of the end-users :-)
 
 I think that a good worst case guideline would be 60 times per second.
 Say for a surveillance type application that switches between video decoders
 for each frame.

The video input switch control, is already used by surveillance applications
for a long time. There's no need to add any API for it.

 Or some 3D type application that switches between two sensors for each frame.

Also, another case of video input selection.

We shouldn't design any new device for it.

I may be wrong, but from Vaibhav and your last comments, I'm starting to think
that you're wanting to replace V4L2 by a new media controller based new API.

So, let's go one step back and better understand what's expected by the media
controller.

From my previous understanding, those are the needs:

1) V4L2 API will keep being used to control the devices and to do streaming,
working under the already well defined devices;

2) One Kernel object is needed to represent the entire board as a hole, to
enumerate its sub-devices and to change their topology;

3) For some very specific cases, it should be possible to tweak some
sub-devices to act on a non-usual way;

4) Some new ioctls are needed to control some parts of the devices that aren't
currently covered by V4L2 API.

Right?

If so:

(1) already exists;

(2) is the topology manager of the media controller, that should use
sysfs, due to its nature.

For (3), there are a few alternatives. IMO, the better is to use also sysfs,
since we'll have all subdevs already represented there. So, to change
something, it is just a matter to write something to a sysfs node. Another
alternative would be to create separate subdevs at /dev, but this will end on
creating much more complex drivers than probably needed.

(4) is implemented by some new ioctl additions at V4L2 API.

Cheers,
Mauro
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: RFCv2: Media controller proposal

2009-09-11 Thread Mauro Carvalho Chehab
Em Fri, 11 Sep 2009 22:15:15 +0200
Hans Verkuil hverk...@xs4all.nl escreveu:

 On Friday 11 September 2009 21:59:37 Mauro Carvalho Chehab wrote:
  Em Fri, 11 Sep 2009 21:23:44 +0200
  Hans Verkuil hverk...@xs4all.nl escreveu:
  
In the case of resizer, I don't see why this can't be implemented as an 
ioctl
over /dev/video device.
   
   Well, no. Not in general. There are two problems. The first problem 
   occurs if
   you have multiple instances of a resizer (OK, not likely, but you *can* 
   have
   multiple video encoders or decoders or sensors). If all you have is the
   streaming device node, then you cannot select to which resizer (or video
   encoder) the ioctl should go. The media controller allows you to select 
   the
   recipient of the ioctl explicitly. Thus providing the control that these
   applications need.
  
  This case doesn't apply, since, if you have multiple encoders and/or 
  decoders,
  you'll also have multiple /dev/video instances. All you need is to call it 
  at
  the right device you need to control. Am I missing something here?
 
 Typical use-case: two video decoders feed video into a composer that combines
 the two (e.g. for PiP) and streams the result to one video node.
 
 Now you want to change e.g. the contrast on one of those video decoders. 
 That's
 not going to be possible using /dev/video.

On your above example, each video decoder will need a /dev/video, and also the
video composer. 

So, if you want to control the first decoder, you'll use /dev/video0. If you
want to control the second, /dev/video1, and the mux, /dev/video2.

The topology will be properly described at the media controller sysfs nodes.

 
   The second problem is that this will pollute the 'namespace' of a v4l 
   device
   node. Device drivers need to pass all those private ioctls to the right
   sub-device. But they shouldn't have to care about that. If someone wants 
   to
   tweak the resizer (e.g. scaling coefficients), then pass it straight to 
   the
   resizer component.
  
  Sorry, I missed your point here
 
 Example: a sub-device can produce certain statistics. You want to have an
 ioctl to obtain those statistics. If you call that through /dev/videoX, then
 that main driver has to handle that ioctl in vidioc_default and pass it on
 to the right subdev. So you have to write that vidioc_default handler,
 know about the sub-devices that you have and which sub-device is linked to
 the device node. You really don't want to have to do that. Especially not
 when you are dealing with i2c devices that are loaded from platform code.
 If a video encoder supports private ioctls, then an omap3 driver doesn't
 want to know about that. Oh, and before you ask: just broadcasting that
 ioctl is not a solution if you have multiple identical video encoders.

This can be as easy as reading from /sys/class/media/dsp:stat0/stats


Cheers,
Mauro
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: RFCv2: Media controller proposal

2009-09-11 Thread Hans Verkuil
On Friday 11 September 2009 23:37:58 Mauro Carvalho Chehab wrote:
 Em Fri, 11 Sep 2009 22:15:15 +0200
 Hans Verkuil hverk...@xs4all.nl escreveu:
 
  On Friday 11 September 2009 21:59:37 Mauro Carvalho Chehab wrote:
   Em Fri, 11 Sep 2009 21:23:44 +0200
   Hans Verkuil hverk...@xs4all.nl escreveu:
   
 In the case of resizer, I don't see why this can't be implemented as 
 an ioctl
 over /dev/video device.

Well, no. Not in general. There are two problems. The first problem 
occurs if
you have multiple instances of a resizer (OK, not likely, but you *can* 
have
multiple video encoders or decoders or sensors). If all you have is the
streaming device node, then you cannot select to which resizer (or video
encoder) the ioctl should go. The media controller allows you to select 
the
recipient of the ioctl explicitly. Thus providing the control that these
applications need.
   
   This case doesn't apply, since, if you have multiple encoders and/or 
   decoders,
   you'll also have multiple /dev/video instances. All you need is to call 
   it at
   the right device you need to control. Am I missing something here?
  
  Typical use-case: two video decoders feed video into a composer that 
  combines
  the two (e.g. for PiP) and streams the result to one video node.
  
  Now you want to change e.g. the contrast on one of those video decoders. 
  That's
  not going to be possible using /dev/video.
 
 On your above example, each video decoder will need a /dev/video, and also the
 video composer. 

Why? The video decoders do not do any streaming. There may well be just one
DMA engine that DMAs the output from the video composer.

Regards,

Hans



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: RFCv2: Media controller proposal

2009-09-11 Thread Hans Verkuil
On Friday 11 September 2009 23:28:47 Mauro Carvalho Chehab wrote:
 Em Fri, 11 Sep 2009 22:29:41 +0200
 Hans Verkuil hverk...@xs4all.nl escreveu:
 
  On Friday 11 September 2009 21:54:03 Mauro Carvalho Chehab wrote:
   Em Fri, 11 Sep 2009 21:08:13 +0200
   Hans Verkuil hverk...@xs4all.nl escreveu:
  
  snip
  
OK, so instead we require an application to construct a file containing 
a new
topology, write something to a sysfs file, require code in the v4l core 
to load
and parse that file, then find out which links have changed (since you 
really
don't want to set all the links: there can be many, many links, believe 
me on
that), and finally call the driver to tell it to change those links.
   
   As I said before, the design should take into account how frequent are 
   those
   changes. If they are very infrequent, this approach works, and offers one
   advantage: the topology will survive to application crashes and warm/cold
   reboots. If the changes are frequent, an approach like the audio
   user_pin_configs work better (see my previous email - note that this 
   approach
   can be used for atomic operations if needed). You add at a sysfs node 
   just the
   dynamic changes you need. We may even have both ways, as alsa seems to 
   have
   (init_pin_configs and user_pin_configs).
  
  How frequent those changes are will depend entirely on the application.
  Never underestimate the creativity of the end-users :-)
  
  I think that a good worst case guideline would be 60 times per second.
  Say for a surveillance type application that switches between video decoders
  for each frame.
 
 The video input switch control, is already used by surveillance applications
 for a long time. There's no need to add any API for it.
 
  Or some 3D type application that switches between two sensors for each 
  frame.
 
 Also, another case of video input selection.

True, bad example. Given enough time I can no doubt come up with some example 
:-)

 We shouldn't design any new device for it.
 
 I may be wrong, but from Vaibhav and your last comments, I'm starting to think
 that you're wanting to replace V4L2 by a new media controller based new API.
 
 So, let's go one step back and better understand what's expected by the media
 controller.
 
 From my previous understanding, those are the needs:
 
 1) V4L2 API will keep being used to control the devices and to do streaming,
 working under the already well defined devices;

Yes.
 
 2) One Kernel object is needed to represent the entire board as a hole, to
 enumerate its sub-devices and to change their topology;

Yes.

 3) For some very specific cases, it should be possible to tweak some
 sub-devices to act on a non-usual way;

This will not be for 'some very specific cases'. This will become an essential
feature on embedded platforms. It's probably the most important part of the
media controller proposal.

 4) Some new ioctls are needed to control some parts of the devices that aren't
 currently covered by V4L2 API.

No, that is not part of the proposal. Of course, as drivers for the more
advanced devices are submitted there may be some functionality that is general
enough to warrant inclusion in the V4L2 API, but that's business as usual.

 
 Right?
 
 If so:
 
 (1) already exists;

Obviously.
 
 (2) is the topology manager of the media controller, that should use
 sysfs, due to its nature.

See the separate thread I started on sysfs vs ioctl.

 For (3), there are a few alternatives. IMO, the better is to use also sysfs,
 since we'll have all subdevs already represented there. So, to change
 something, it is just a matter to write something to a sysfs node.

See that same thread why that is a really bad idea.

 Another 
 alternative would be to create separate subdevs at /dev, but this will end on
 creating much more complex drivers than probably needed.

I agree with this.

 (4) is implemented by some new ioctl additions at V4L2 API.

Not an issue as stated above.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RFCv2: Media controller proposal

2009-09-10 Thread Hans Verkuil
Hi all,

Here is the new Media Controller RFC. It is completely rewritten from the
original RFC. This original RFC can be found here:

http://www.archivum.info/video4linux-list%40redhat.com/2008-07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_media_device

This document will be the basis of the discussions during the Plumbers
Conference in two weeks time.

Open issue #3 is the main unresolved item, but I hope to come up with something
during the weekend.

Regards,

Hans


RFC: Media controller proposal

Version 2.0

Background
==

This RFC is a new version of the original RFC that was written in cooperation
with and on behalf of Texas Instruments about a year ago.

Much work has been done in the past year to put the foundation in place to
be able to implement a media controller and now it is time for this updated
version. The intention is to discuss this in more detail during this years
Plumbers Conference.

Although the high-level concepts are the same as in the original RFC, many
of the details have changed based on what was learned over the past year.

This RFC is based on the original discussions with Manjunath Hadli from TI
last year, on discussions during a recent meeting between Laurent Pinchart,
Guennadi Liakhovetski and myself, and on recent discussions with Nokia.
Thanks to Sakari Ailus for doing an initial review of this RFC.

One note regarding terminology: a 'board' is the name I use for the SoC,
PCI or USB device that contains the video hardware. Each board has its own
driver instance and its own v4l2_device struct. Originally I called it
'device', but that name is already used in too many places.


What is a media controller?
===

In a nutshell: a media controller is a new v4l device node that can be used
to discover and modify the topology of the board and to give access to the 
low-level nodes (such as previewers, resizers, color space converters, etc.)
that are part of the topology.

It does not do any streaming, that is the exclusive domain of video nodes.
It is meant purely for controlling a board as a whole.


Why do we need one?
===

There are currently several problems that are impossible to solve within the
current V4L2 API:

1) Discovering the various device nodes that are typically created by a video
board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
nodes, input nodes (for e.g. webcam button events or IR remotes).

It would be very handy if an application can just open an /dev/v4l/mc0 node
and be able to figure out where all the nodes are, and to be able to figure
out what the capabilities of the board are (e.g. does it support DVB, is the
audio going through a loopback cable or is there an alsa device, can it do
compressed MPEG video, etc. etc.). Currently the end-user has no choice but to
supply the device nodes manually.

2) Some of the newer SoC devices can connect or disconnect internal components
dynamically. As an example, the omap3 can either connect a sensor output to a
CCDC module to a previewer module to a resizer module and finally to a capture
device node. But it is also possible to capture the sensor output directly
after the CCDC module. The previewer can get its input from another video
device node and output either to the resizer or to another video capture
device node. The same is true for the resizer, that too can get its input from
a device node.

So there are lots of connections here that can be modified at will depending
on what the application wants. And in real life there are even more links than
I mentioned here. And it will only get more complicated in the future.

All this requires that there has to be a way to connect and disconnect parts
of the internal topology of a video board at will.

3) There is increasing demand to be able to control e.g. sensors or video
encoders/decoders at a much more precise manner. Currently the V4L2 API
provides only limited support in the form of a set of controls. But when
building a high-end camera the developer of the application controlling it
needs very detailed control of the sensor and image processing devices.
On the other hand, you do not want to have all this polluting the V4L2 API
since there is absolutely no sense in exporting this as part of the existing
controls, or to allow for a large number of private ioctls.

What would be a good solution is to give access to the various components of
the board and allow the application to send component-specific ioctls or
controls to it. Any application that will do this is by default tailored to
that board. In addition, none of these new controls or commands will pollute
the namespace of V4L2.

A media controller can solve all these problems: it will provide a window into
the architecture of the board and all its device nodes. Since it is already
enumerating the nodes and components of the board and how they are linked up,
it is only a small step to also use it to change 

Re: RFCv2: Media controller proposal

2009-09-10 Thread Patrick Boettcher

Hello Hans,


On Thu, 10 Sep 2009, Hans Verkuil wrote:

Here is the new Media Controller RFC. It is completely rewritten from the
original RFC. This original RFC can be found here:

http://www.archivum.info/video4linux-list%40redhat.com/2008-07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_media_device

This document will be the basis of the discussions during the Plumbers
Conference in two weeks time.


I wasn't following this RFC during the past year, though I heard you 
talking about this idea at LPC 2008.


I will add some things to discussion (see below) I have in my mind 
regarding similar difficulties we face today with some pure-DTV devices.


From a first look, it seems media controller could not only unify v4l and 
DVB device abstraction layers, but also a missing features to DTV devices 
which are not present right now.



[..]

Topology


The topology is represented by entities. Each entity has 0 or more inputs and
0 or more outputs. Each input or output can be linked to 0 or more possible
outputs or inputs from other entities. This is either mutually exclusive
(i.e. an input/output can be connected to only one output/input at a time)
or it can be connected to multiple inputs/outputs at the same time.

A device node is a special kind of entity with just one input (capture node)
or output (video node). It may have both if it does some in-place operation.

Each entity has a unique numerical ID (unique for the board). Each input or
output has a unique numerical ID as well, but that ID is only unique to the
entity. To specify a particular input or output of an entity one would give
an entity ID, input/output ID tuple.

When enumerating over entities you will need to retrieve at least the
following information:

- type (subdev or device node)
- entity ID
- entity description (can be quite long)
- subtype (what sort of device node or subdev is it?)
- capabilities (what can the entity do? Specific to the subtype and more
precise than the v4l2_capability struct which only deals with the board
capabilities)
- addition subtype-specific data (union)
- number of inputs and outputs. The input IDs should probably just be a value
of 0 - (#inputs - 1) (ditto for output IDs).

Another ioctl is needed to obtain the list of possible links that can be made
for each input and output.

It is good to realize that most applications will just enumerate e.g. capture
device nodes. Few applications will do a full scan of the whole topology.
Instead they will just specify the unique entity ID and if needed the
input/output ID as well. These IDs are declared in the board or sub-device
specific header.


Very good this topology-idea!

I can even see this to be continued in user-space in a very smart 
application/library: A software MPEG decoder/rescaler whatever would be 
such an entity for example.



A full enumeration will typically only be done by some sort of generic
application like v4l2-ctl.


Hmm... I'm seeing this idea covering other stream-oriented devices. Like 
sound-cards (*ouch*).



[..]

Open issues
===

In no particular order:

1) How to tell the application that this board uses an audio loopback cable
to the PC's audio input?

2) There can be a lot of device nodes in complicated boards. One suggestion
is to only register them when they are linked to an entity (i.e. can be
active). Should we do this or not?


Could entities not be completely addressed (configuration ioctls) through 
the mc-node?


Only entities who have an output/input with is of type 
'user-space-interface' are actually having a node where the user (in 
user-space) can read from/write to?



3) Format and bus configuration and enumeration. Sub-devices are connected
together by a bus. These busses can have different configurations that will
influence the list of possible formats that can be received or sent from
device nodes. This was always pretty straightforward, but if you have several
sub-devices such as scalers and colorspace converters in a pipeline then this
becomes very complex indeed. This is already a problem with soc-camera, but
that is only the tip of the iceberg.

How to solve this problem is something that requires a lot more thought.


For me the entities (components) you're describing are having 2 basic 
bus-types: one control bus (which gives register access) and one or more 
data-stream buses.


In your topology I understood that the inputs/outputs are exactly 
representing the data-stream buses.


Depending on the main-type of the media controller a library could give 
some basic-models of how all entities can be connected together. EG:


(I have no clue about webcams, that why I use this as an example :) ):

Webcam: sensor + resize + filtering = picture

WEBCAM model X provides:

2 sensor-types + 3 resizers + 5 filters

one of each of it provides a pictures. By default this first one of each 
is taken.



[..]


My additional comments for DTV

1) In DTV as of today we can't handle a 

Re: RFCv2: Media controller proposal

2009-09-10 Thread Patrick Boettcher

On Thu, 10 Sep 2009, Hans Verkuil wrote:

Now that this is in we can continue with the next phase and actually think
on how it should be implemented.


Sounds logic.


Hmm... I'm seeing this idea covering other stream-oriented devices. Like
sound-cards (*ouch*).


I may be mistaken, but I don't believe soundcards have this same
complexity are media board.


When I launch alsa-mixer I see 4 input devices where I can select 4 
difference sources. This gives 16 combinations which is enough for me to 
call it 'complex' .



Could entities not be completely addressed (configuration ioctls) through
the mc-node?


Not sure what you mean.


Instead of having a device node for each entity, the ioctls for each 
entities are done on the media controller-node address an entity by ID.



Only entities who have an output/input with is of type
'user-space-interface' are actually having a node where the user (in
user-space) can read from/write to?


Yes, each device node (i.e. that can be read from or written to) is
represented by an entity. That makes sense as well, since there usually is
a DMA engine associated with this, which definitely qualifies as something
more than 'just' an input or output from some other block. You may even
want to control this in someway through the media controller (setting up
DMA parameters?).

Inputs and outputs are not meant to represent anything complex. They just
represent pins or busses.


Or DMA-engines.

When I say bus I meant something which transfer data from a to b, so a bus 
covers DMA engines. Thus a DMA engine or a real bus represents a 
connection of an output and an input.



Not really a datastream bus, more the DMA engine (or something similar)
associated with a datastream bus. It's really the place where data is
passed to/from userspace. I.e. the bus between a sensor and a resizer is
not an entity. It's probably what you meant in any case.


Yes.


2) What is today a dvb_frontend could become several entities: I'm seeing
tuner, demodulator, channel-decoder, amplifiers.


In practice every i2c device will be an entity. If the main bridge IC
contains integrated tuners, demods, etc., then the driver can divide them
up in sub-devices at will.

I have actually thought of sub-sub-devices. Some i2c devices can be very,
very complex. It's possible to do and we should probably allow for this to
happen in the future. Although we shouldn't implement this initially.


Yes, for me i2c-bus-client-device is not necessarily one media_subdevice.

Even the term i2c is not terminal. Meaning that more and more devices will 
use SPI or SDIO or other busses for communication between components in 
the future. Or at least there will be some.


Also: If we sub-bus is implemented as a subdev other devices are attached 
to that bus can be normal subdevs.


Why is it important to have all devices on one bus? Because of the 
propagation of ioctl? If so, the sub-bus-subdev from above can simply 
forward the ioctls on its bus to it's attached subdevs. No need of 
sub-sub-devs ;) .



I really, really like this approach as it gives flexibily to user-space
applications which will ultimatetly improve the quality of the supported
devices, but I think it has to be assisted by a user-space library and the
access has to be done exclusively by that library. I'm aware that this
library-idea could be a hot discussion point.


I do not see how you can make any generic library for this. You can make
libraries for each specific board (I'm talking SoCs here mostly) that
provide a slightly higher level of abstraction, but making something
generic? I don't see how. You could perhaps do something for specific
use-cases, though.


Not a 100% generic library, but a library which has some models inside for 
different types of media controllers. Of course the model of a webcam is 
different as the model of a DTV-device.


Maybe model is not the right word, let's call it template. A template 
defines a possible chain of certain types of entities which provide 
a media-stream at their output.



I would love to see that happen. But then dvb should first migrate to the
standard i2c API, and then integrate that into v4l2_subdev (by that time
we should probably rename it to media_subdev).

Not a trivial job, but it would truly integrate the two parts.


As you state in your initial approach, existing APIs are not broken, so 
it's all about future development.


--

Patrick
http://www.kernellabs.com/
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: RFCv2: Media controller proposal

2009-09-10 Thread Hans Verkuil

 On Thu, 10 Sep 2009, Hans Verkuil wrote:
 Now that this is in we can continue with the next phase and actually
 think
 on how it should be implemented.

 Sounds logic.

 Hmm... I'm seeing this idea covering other stream-oriented devices.
 Like
 sound-cards (*ouch*).

 I may be mistaken, but I don't believe soundcards have this same
 complexity are media board.

 When I launch alsa-mixer I see 4 input devices where I can select 4
 difference sources. This gives 16 combinations which is enough for me to
 call it 'complex' .

 Could entities not be completely addressed (configuration ioctls)
 through
 the mc-node?

 Not sure what you mean.

 Instead of having a device node for each entity, the ioctls for each
 entities are done on the media controller-node address an entity by ID.

I definitely don't want to go there. Use device nodes (video, fb, alsa,
dvb, etc) for streaming the actual media as we always did and use the
media controller for controlling the board. It keeps everything nicely
separate and clean.


 Only entities who have an output/input with is of type
 'user-space-interface' are actually having a node where the user (in
 user-space) can read from/write to?

 Yes, each device node (i.e. that can be read from or written to) is
 represented by an entity. That makes sense as well, since there usually
 is
 a DMA engine associated with this, which definitely qualifies as
 something
 more than 'just' an input or output from some other block. You may even
 want to control this in someway through the media controller (setting up
 DMA parameters?).

 Inputs and outputs are not meant to represent anything complex. They
 just
 represent pins or busses.

 Or DMA-engines.

 When I say bus I meant something which transfer data from a to b, so a bus
 covers DMA engines. Thus a DMA engine or a real bus represents a
 connection of an output and an input.

Not quite: a DMA engine transfers the media to or from memory over some
bus. The crucial bit is 'memory'. Anyway, device nodes is where an
application can finally get hold of the data and you need a way to tell
the app where to find those devices and what properties they have. And
that's what a device node entity does.


 Not really a datastream bus, more the DMA engine (or something similar)
 associated with a datastream bus. It's really the place where data is
 passed to/from userspace. I.e. the bus between a sensor and a resizer is
 not an entity. It's probably what you meant in any case.

 Yes.

 2) What is today a dvb_frontend could become several entities: I'm
 seeing
 tuner, demodulator, channel-decoder, amplifiers.

 In practice every i2c device will be an entity. If the main bridge IC
 contains integrated tuners, demods, etc., then the driver can divide
 them
 up in sub-devices at will.

 I have actually thought of sub-sub-devices. Some i2c devices can be
 very,
 very complex. It's possible to do and we should probably allow for this
 to
 happen in the future. Although we shouldn't implement this initially.

 Yes, for me i2c-bus-client-device is not necessarily one media_subdevice.

It is currently, but I agree, that's something that we may want to make
more generic in the future.


 Even the term i2c is not terminal. Meaning that more and more devices will
 use SPI or SDIO or other busses for communication between components in
 the future. Or at least there will be some.

That's no problem, v4l2_subdev is bus-agnostic.


 Also: If we sub-bus is implemented as a subdev other devices are attached
 to that bus can be normal subdevs.

 Why is it important to have all devices on one bus? Because of the
 propagation of ioctl? If so, the sub-bus-subdev from above can simply
 forward the ioctls on its bus to it's attached subdevs. No need of
 sub-sub-devs ;) .

Sub-devices are registered with the v4l2_device. And that's really all you
need. In the end it is a design issue how many sub-devices you create.


 I really, really like this approach as it gives flexibily to user-space
 applications which will ultimatetly improve the quality of the
 supported
 devices, but I think it has to be assisted by a user-space library and
 the
 access has to be done exclusively by that library. I'm aware that this
 library-idea could be a hot discussion point.

 I do not see how you can make any generic library for this. You can make
 libraries for each specific board (I'm talking SoCs here mostly) that
 provide a slightly higher level of abstraction, but making something
 generic? I don't see how. You could perhaps do something for specific
 use-cases, though.

 Not a 100% generic library, but a library which has some models inside for
 different types of media controllers. Of course the model of a webcam is
 different as the model of a DTV-device.

 Maybe model is not the right word, let's call it template. A template
 defines a possible chain of certain types of entities which provide
 a media-stream at their output.

That might work, yes.

 I would love to see that 

RE: RFCv2: Media controller proposal

2009-09-10 Thread Karicheri, Muralidharan
Hans,

I haven't gone through the RFC, but thought will respond to the below comment.

Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
new phone: 301-407-9583
Old Phone : 301-515-3736 (will be deprecated)
email: m-kariche...@ti.com


 I may be mistaken, but I don't believe soundcards have this same
 complexity are media board.

 When I launch alsa-mixer I see 4 input devices where I can select 4
 difference sources. This gives 16 combinations which is enough for me to
 call it 'complex' .

 Could entities not be completely addressed (configuration ioctls)
 through
 the mc-node?

 Not sure what you mean.

 Instead of having a device node for each entity, the ioctls for each
 entities are done on the media controller-node address an entity by ID.

I definitely don't want to go there. Use device nodes (video, fb, alsa,
dvb, etc) for streaming the actual media as we always did and use the
media controller for controlling the board. It keeps everything nicely
separate and clean.



What you mean by controlling the board?

We have currently ported DMxxx VPBE display drivers to 2.6.31 (Not submitted 
yet to mainline). In our current implementation, the output and standard/mode 
are controlled through sysfs because it is a common functionality affecting 
both v4l and FBDev framebuffer devices. Traditional applications such x-windows 
should be able to stream video/graphics to VPBE output. V4l2 applications 
should be able to stream video. Both these devices needs to know the display 
parameters such as frame buffer resolution, field etc that are to be configured 
in the video or osd layers in VPBE to output frames to the encoder that is 
driving the output. So to stream, first the output and mode/standard are 
selected using sysfs command and then the application is started. Following 
scenarios are supported by VPBE display drivers in our internal release:-

1)Traditional FBDev applications (x-window) can be run using OSD device. Allows 
changing mode/standards at the output using fbset command.

2)v4l2 driver doesn't provide s_output/s_std support since it is done through 
sysfs. 

3)Applications that requires to stream both graphics and video to the output 
uses both FBDev and V4l2 devices. So these application first set the output and 
mode/standard using sysfs, before doing io operations with these devices.

There is an encoder manager to which all available encoders  registers (using 
internally developed interface) and based on commands received at Fbdev/sysfs 
interfaces, the current encoder is selected by the encoder manager and current 
standard is selected. The encoder manager provides API to retrieve current 
timing information from the current encoder. FBDev and V4L2 drivers uses this 
API to configure OSD/video layers for streaming.

As you can see, controlling output/mode is a common function required for both 
v4l2 and FBDev devices. 

One way to do this to modify the encoder manager such that it load up the 
encoder sub devices. This will allow our customers to migrate to this driver on 
GIT kernel with minimum effort. If v4l2 display bridge driver load up the sub 
devices, it will make FBDev driver useless unless media controller has some way 
to handle this scenario. Any idea if media controller RFC address this? I will 
go over the RFC in details, but if you have a ready answer, let me know.

Thanks
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: RFCv2: Media controller proposal

2009-09-10 Thread Mauro Carvalho Chehab
Hi Hans,

Hi Hans,

Em Thu, 10 Sep 2009 09:13:09 +0200
Hans Verkuil hverk...@xs4all.nl escreveu:

First of all, a generic comment: you enumerated on your RFC several needs that
you expect to be solved with a media controller, but you didn't mention what
userspace API will be used to solve it (e. g. what ioctls, sysfs interfaces,
etc). As this is missing, I'm adding a few notes about how this can be
implemented. For example, as I've already pointed when you sent the first
proposal and at LPC, sysfs is the proper kernel API for enumerating things.

 Why do we need one?
 ===
 
 There are currently several problems that are impossible to solve within the
 current V4L2 API:
 
 1) Discovering the various device nodes that are typically created by a video
 board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
 nodes, input nodes (for e.g. webcam button events or IR remotes).

In fact, this can already be done by using the sysfs interface. the current
version of v4l2-sysfs-path.c already enumerates the associated nodes to
a /dev/video device, by just navigating at the already existing device
description nodes at sysfs. I hadn't tried yet, but I bet that a similar kind
of topology can be obtained from a dvb device (probably, we need to do some
adjustments).

The big missing component is an userspace library that will properly return the
device components to the applications. Maybe we need to do also some
adjustments at the sysfs nodes to represent all that it is needed.

 It would be very handy if an application can just open an /dev/v4l/mc0 node
 and be able to figure out where all the nodes are, and to be able to figure
 out what the capabilities of the board are (e.g. does it support DVB, is the
 audio going through a loopback cable or is there an alsa device, can it do
 compressed MPEG video, etc. etc.). Currently the end-user has no choice but to
 supply the device nodes manually.

The better would be to create a /sys/class/media node, and having the
media controllers above that struct. So, mc0 will be at /sys/class/media/mc0.
 
 2) Some of the newer SoC devices can connect or disconnect internal components
 dynamically. As an example, the omap3 can either connect a sensor output to a
 CCDC module to a previewer module to a resizer module and finally to a capture
 device node. But it is also possible to capture the sensor output directly
 after the CCDC module. The previewer can get its input from another video
 device node and output either to the resizer or to another video capture
 device node. The same is true for the resizer, that too can get its input from
 a device node.
 
 So there are lots of connections here that can be modified at will depending
 on what the application wants. And in real life there are even more links than
 I mentioned here. And it will only get more complicated in the future.
 
 All this requires that there has to be a way to connect and disconnect parts
 of the internal topology of a video board at will.

We should design this with care, since each change at the internal topology may
create/delete devices. If you do such changes at topology, udev will need to
delete the old devices and create the new ones. This will happen on separate
threads and may cause locking issues at the device, especially since you can be
modifying several components at the same time (being even possible to do it on
separate threads).

I've seen some high-end core network routers that implements topology changes
on an interesting way: any changes done are not immediately applied at the
node, but are stored into a file, where the configuration that can be changed
anytime. However, the topology changes only happen after giving a commit
command. After commit, it validates the new config and apply them atomically
(e. g. or all changes are applied or none), to avoid bad effects that
intermediate changes could cause.

As we are at kernelspace, we need to take care to not create a very complex
interface. Yet, the idea of applying the new topology atomically seems
interesting. 

Alsa is facing a similar problem with pinup quirks needed with HD-audio boards.
They are proposing a firmware like interface:
http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-09/msg03198.html

On their case, they are just using request_firmware() for it, at board probing
time.

IMO, the same approach can be used here.

 3) There is increasing demand to be able to control e.g. sensors or video
 encoders/decoders at a much more precise manner. Currently the V4L2 API
 provides only limited support in the form of a set of controls. But when
 building a high-end camera the developer of the application controlling it
 needs very detailed control of the sensor and image processing devices.
 On the other hand, you do not want to have all this polluting the V4L2 API
 since there is absolutely no sense in exporting this as part of the existing
 controls, or to allow for a large number of private 

Re: RFCv2: Media controller proposal

2009-09-10 Thread Hans Verkuil
On Thursday 10 September 2009 21:19:25 Karicheri, Muralidharan wrote:
 Hans,
 
 I haven't gone through the RFC, but thought will respond to the below comment.
 
 Murali Karicheri
 Software Design Engineer
 Texas Instruments Inc.
 Germantown, MD 20874
 new phone: 301-407-9583
 Old Phone : 301-515-3736 (will be deprecated)
 email: m-kariche...@ti.com
 
 
  I may be mistaken, but I don't believe soundcards have this same
  complexity are media board.
 
  When I launch alsa-mixer I see 4 input devices where I can select 4
  difference sources. This gives 16 combinations which is enough for me to
  call it 'complex' .
 
  Could entities not be completely addressed (configuration ioctls)
  through
  the mc-node?
 
  Not sure what you mean.
 
  Instead of having a device node for each entity, the ioctls for each
  entities are done on the media controller-node address an entity by ID.
 
 I definitely don't want to go there. Use device nodes (video, fb, alsa,
 dvb, etc) for streaming the actual media as we always did and use the
 media controller for controlling the board. It keeps everything nicely
 separate and clean.
 
 
 
 What you mean by controlling the board?

In general: the media controller can do anything except streaming. However,
that is an extreme position and in practice all the usual ioctls should
remain supported by the video device nodes.

 We have currently ported DMxxx VPBE display drivers to 2.6.31 (Not submitted 
 yet to mainline). In our current implementation, the output and standard/mode 
 are controlled through sysfs because it is a common functionality affecting 
 both v4l and FBDev framebuffer devices. Traditional applications such 
 x-windows should be able to stream video/graphics to VPBE output. V4l2 
 applications should be able to stream video. Both these devices needs to know 
 the display parameters such as frame buffer resolution, field etc that are to 
 be configured in the video or osd layers in VPBE to output frames to the 
 encoder that is driving the output. So to stream, first the output and 
 mode/standard are selected using sysfs command and then the application is 
 started. Following scenarios are supported by VPBE display drivers in our 
 internal release:-
 
 1)Traditional FBDev applications (x-window) can be run using OSD device. 
 Allows changing mode/standards at the output using fbset command.
 
 2)v4l2 driver doesn't provide s_output/s_std support since it is done through 
 sysfs. 
 
 3)Applications that requires to stream both graphics and video to the output 
 uses both FBDev and V4l2 devices. So these application first set the output 
 and mode/standard using sysfs, before doing io operations with these devices.

I don't understand this approach. I'm no expert on the fb API but as far as I
know the V4L2 API allows a lot more precision over the video timings (esp. with
the new API you are working on). Furthermore, I assume it is possible to use
the DMxxx without an OSD, right?

This is very similar to the ivtv and ivtvfb drivers: if the framebuffer is in
use, then you cannot change the output standard (you'll get an EBUSY error)
through a video device node.

That's exactly what you would expect. If the framebuffer isn't used, then you
can just use the normal V4L2 API to change the output standard.

In practice, I think that you can only change the resolution in the FB API.
Not things like the framerate, let alone precise pixelclock, porch and sync
widths.

Much better to let the two cooperate: you can use both APIs, but you can't
change the resolution in the fb if streaming is going on, and you can't
change the output standard of a video device node if that changes the
resolution while the framebuffer is in used.

No need for additional sysfs entries.

 
 There is an encoder manager to which all available encoders  registers (using 
 internally developed interface) and based on commands received at Fbdev/sysfs 
 interfaces, the current encoder is selected by the encoder manager and 
 current standard is selected. The encoder manager provides API to retrieve 
 current timing information from the current encoder. FBDev and V4L2 drivers 
 uses this API to configure OSD/video layers for streaming.
 
 As you can see, controlling output/mode is a common function required for 
 both v4l2 and FBDev devices. 
 
 One way to do this to modify the encoder manager such that it load up the 
 encoder sub devices. This will allow our customers to migrate to this driver 
 on GIT kernel with minimum effort. If v4l2 display bridge driver load up the 
 sub devices, it will make FBDev driver useless unless media controller has 
 some way to handle this scenario. Any idea if media controller RFC address 
 this? I will go over the RFC in details, but if you have a ready answer, let 
 me know.

I don't think this has anything to do with the media controller. It sounds
more like a driver design issue to me.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored 

Re: RFCv2: Media controller proposal

2009-09-10 Thread Guennadi Liakhovetski
Hi Hans

a couple of comments / questions from the first glance

On Thu, 10 Sep 2009, Hans Verkuil wrote:

[snip]

 Topology
 
 
 The topology is represented by entities. Each entity has 0 or more inputs and
 0 or more outputs. Each input or output can be linked to 0 or more possible
 outputs or inputs from other entities. This is either mutually exclusive 
 (i.e. an input/output can be connected to only one output/input at a time)
 or it can be connected to multiple inputs/outputs at the same time.
 
 A device node is a special kind of entity with just one input (capture node)
 or output (video node). It may have both if it does some in-place operation.
 
 Each entity has a unique numerical ID (unique for the board). Each input or
 output has a unique numerical ID as well, but that ID is only unique to the
 entity. To specify a particular input or output of an entity one would give
 an entity ID, input/output ID tuple.
 
 When enumerating over entities you will need to retrieve at least the
 following information:
 
 - type (subdev or device node)
 - entity ID
 - entity description (can be quite long)
 - subtype (what sort of device node or subdev is it?)
 - capabilities (what can the entity do? Specific to the subtype and more
 precise than the v4l2_capability struct which only deals with the board
 capabilities)
 - addition subtype-specific data (union)
 - number of inputs and outputs. The input IDs should probably just be a value
 of 0 - (#inputs - 1) (ditto for output IDs).
 
 Another ioctl is needed to obtain the list of possible links that can be made
 for each input and output.

Shall we not just let the user try? and return an error if the requested 
connection is impossible? Remember, media-controller users are 
board-tailored, so, they will not be very dynamic.

 It is good to realize that most applications will just enumerate e.g. capture
 device nodes. Few applications will do a full scan of the whole topology.
 Instead they will just specify the unique entity ID and if needed the
 input/output ID as well. These IDs are declared in the board or sub-device
 specific header.
 
 A full enumeration will typically only be done by some sort of generic
 application like v4l2-ctl.

Well, is this the reason why you wanted to enumerate possible connections? 
Should v4l2-ctrl be able to manipulate those connections? What is it for 
actually?

 In addition, most entities will have only one or two inputs/outputs at most.
 So we might optimize the data structures for this. We probably will have to
 see how it goes when we implement it.
 
 We obviously need ioctls to make and break links between entities. It
 shouldn't be hard to do this.
 
 Access to sub-devices
 -
 
 What is a bit trickier is how to select a sub-device as the target for ioctls.
 Normally ioctls like S_CTRL are sent to a /dev/v4l/videoX node and the driver
 will figure out which sub-device (or possibly the bridge itself) will receive
 it. There is no way of hijacking this mechanism to e.g. specify a specific
 entity ID without also having to modify most of the v4l2 structs by adding
 such an ID field. But with the media controller we can at least create an
 ioctl that specifies a 'target entity' that will receive any non-media
 controller ioctl. Note that for now we only support sub-devices as the target
 entity.
 
 The idea is this:
 
 // Select a particular target entity
 ioctl(mc, VIDIOC_S_SUBDEV, entityID);
 // Send S_FMT directly to that entity
 ioctl(mc, VIDIOC_S_FMT, fmt);

is this really a mc fd or the respective video-devive fd?

 // Send a custom ioctl to that entity
 ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, hist);
 
 This requires no API changes and is very easy to implement. One problem is
 that this is not thread-safe. We can either supply some sort of locking
 mechanism, or just tell the application programmer to do the locking in the
 application. I'm not sure what is the correct approach here. A reasonable
 compromise would be to store the target entity as part of the filehandle.
 So you can open the media controller multiple times and each handle can set
 its own target entity.
 
 This also has the advantage that you can have a filehandle 'targeted' at a
 resizer and a filehandle 'targeted' at the previewer, etc. If you want to use
 the same filehandle from multiple threads, then you have to implement locking
 yourself.

You mean the driver should only care about internal consistency, and the 
user is allowed to otherwise shoot herself in the foot? Makes sense to 
me:-)

 
 
 Open issues
 ===
 
 In no particular order:
 
 1) How to tell the application that this board uses an audio loopback cable
 to the PC's audio input?
 
 2) There can be a lot of device nodes in complicated boards. One suggestion
 is to only register them when they are linked to an entity (i.e. can be
 active). Should we do this or not?

Really a lot of device nodes? not sub-devices? What can this be? Isn't the 
decision 

Re: RFCv2: Media controller proposal

2009-09-10 Thread Hans Verkuil
On Thursday 10 September 2009 22:20:13 Mauro Carvalho Chehab wrote:
 Hi Hans,
 
 Hi Hans,
 
 Em Thu, 10 Sep 2009 09:13:09 +0200
 Hans Verkuil hverk...@xs4all.nl escreveu:
 
 First of all, a generic comment: you enumerated on your RFC several needs that
 you expect to be solved with a media controller, but you didn't mention what
 userspace API will be used to solve it (e. g. what ioctls, sysfs interfaces,
 etc). As this is missing, I'm adding a few notes about how this can be
 implemented. For example, as I've already pointed when you sent the first
 proposal and at LPC, sysfs is the proper kernel API for enumerating things.

I hate sysfs with a passion. All of the V4L2 API is designed around ioctls,
and so is the media controller.

Note that I did not go into too much implementation detail in this RFC. The
best way to do that is by trying to implement it. Only after implementing it
for a few drivers will you get a real feel of what works and what doesn't.

Of course, whether to use sysfs or ioctls is something that has to be designed
beforehand.

 
  Why do we need one?
  ===
  
  There are currently several problems that are impossible to solve within the
  current V4L2 API:
  
  1) Discovering the various device nodes that are typically created by a 
  video
  board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
  nodes, input nodes (for e.g. webcam button events or IR remotes).
 
 In fact, this can already be done by using the sysfs interface. the current
 version of v4l2-sysfs-path.c already enumerates the associated nodes to
 a /dev/video device, by just navigating at the already existing device
 description nodes at sysfs. I hadn't tried yet, but I bet that a similar kind
 of topology can be obtained from a dvb device (probably, we need to do some
 adjustments).

sysfs is crap. It's a poorly documented public API that is hell to use. Take
a device node entity as enumerated by the media controller: I want to provide
the application with information like the sort of node (alsa, fb, v4l, etc),
how to access it (alsa card nr or major/minor), a description (Captured MPEG
stream), possibly some capabilities and addition data. With an ENUM ioctl
you can just call it. With sysfs you have to open/read/close files for each of
these properties, walk through the tree to find related alsa/v4l/fb devices,
and in drivers you must write a hell of a lot of code just to make those sysfs
nodes. It's an uncontrollable mess.

Basically you're just writing a lot of bloat for no reason. And even worse is
that this would introduce a completely different type of API compared to what
we already have.

 The big missing component is an userspace library that will properly return 
 the
 device components to the applications. Maybe we need to do also some
 adjustments at the sysfs nodes to represent all that it is needed.

So we write a userspace library that collects all that information? So that
has to:

1) walk through the sysfs tree trying to find all the related parts of the
media board.
2) open the property that we are interested in.
3) attempt to read the property's value.
4) the driver will then copy that value into a buffer that is returned to the
application, usually through a sprintf() call.
5) the library then uses atol() to convert the string back to an integer and
stores the result in a struct.
6) repeat for all properties.

Isn't that the same as calling an enum ioctl() with a struct pointer? Except
a zillion times slower and more obfuscated?

There are certain areas where sysfs is suitable, but this isn't one of them.
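
For comparison, the sysfs route sketched as code. The /sys path and the
"index" attribute are invented; the point is the per-property
open/read/convert dance described in steps 2-5 above.

/* Sketch of one sysfs property read: open, read the sprintf()'ed text,
 * convert it back with atol(). Path and attribute are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static long read_sysfs_long(const char *path)
{
        char buf[32];
        int fd = open(path, O_RDONLY);
        ssize_t n;

        if (fd < 0)
                return -1;
        n = read(fd, buf, sizeof(buf) - 1);
        close(fd);
        if (n <= 0)
                return -1;
        buf[n] = '\0';
        return atol(buf);       /* text back to binary, one property at a time */
}

int main(void)
{
        long v = read_sysfs_long("/sys/class/media/mc0/entity0/index");

        printf("index = %ld\n", v);
        return 0;
}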

 
  It would be very handy if an application can just open a /dev/v4l/mc0 node
  and be able to figure out where all the nodes are, and to be able to figure
  out what the capabilities of the board are (e.g. does it support DVB, is the
  audio going through a loopback cable or is there an alsa device, can it do
  compressed MPEG video, etc. etc.). Currently the end-user has no choice but
  to supply the device nodes manually.
 
 It would be better to create a /sys/class/media node, and to place the
 media controllers under it. So mc0 would be at /sys/class/media/mc0.

Why? It's a device. Devices belong in /dev. That's where applications and users
look for devices. Not in sysfs. You should be able to use this even without
sysfs being mounted (on e.g. an embedded system). Another reason, BTW, not to
use sysfs.

  
  2) Some of the newer SoC devices can connect or disconnect internal
  components dynamically. As an example, the omap3 can either connect a sensor
  output to a CCDC module to a previewer module to a resizer module and
  finally to a capture device node. But it is also possible to capture the
  sensor output directly after the CCDC module. The previewer can get its
  input from another video device node and output either to the resizer or to
  another video capture device node. The same is true for the resizer, that
  too can 

RE: RFCv2: Media controller proposal

2009-09-10 Thread Karicheri, Muralidharan

Hans,

Thanks for your reply.


 What do you mean by controlling the board?

In general: the media controller can do anything except streaming. However,
that is an extreme position and in practice all the usual ioctls should
remain supported by the video device nodes.

 We have currently ported the DMxxx VPBE display drivers to 2.6.31 (not yet
submitted to mainline). In our current implementation, the output and
standard/mode are controlled through sysfs because they are common
functionality affecting both the v4l and FBDev framebuffer devices.
Traditional applications such as x-windows should be able to stream
video/graphics to the VPBE output. V4l2 applications should be able to stream
video. Both these devices need to know the display parameters, such as frame
buffer resolution, field, etc., that are to be configured in the video or osd
layers in the VPBE to output frames to the encoder that is driving the output.
So to stream, first the output and mode/standard are selected using a sysfs
command and then the application is started. The following scenarios are
supported by the VPBE display drivers in our internal release:

 1) Traditional FBDev applications (x-window) can be run using the OSD device.
This allows changing the mode/standard at the output using the fbset command.

 2) The v4l2 driver doesn't provide s_output/s_std support since this is done
through sysfs.

 3) Applications that need to stream both graphics and video to the output use
both the FBDev and V4l2 devices. So these applications first set the output
and mode/standard using sysfs before doing io operations with these devices
(see the sketch below).
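
To illustrate the flow in case 3, this is roughly what the setup step looks
like from an application. The sysfs attribute paths below are placeholders,
not the actual attribute names of our internal release:

/* Sketch of scenario 3: select output and mode/standard via sysfs before
 * any FBDev/V4L2 io starts. The attribute paths are placeholders. */
#include <stdio.h>

static int write_sysfs(const char *path, const char *value)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(value, f);
        return fclose(f);
}

int main(void)
{
        /* step 1: pick output and standard through sysfs */
        write_sysfs("/sys/class/davinci_display/ch0/output", "COMPOSITE");
        write_sysfs("/sys/class/davinci_display/ch0/mode", "NTSC");

        /* step 2: now start the FBDev and/or V4L2 application, which
         * opens /dev/fb0 and /dev/video2 as usual */
        return 0;
}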

I don't understand this approach. I'm no expert on the fb API but as far as I
know the V4L2 API allows a lot more precision over the video timings (esp.
with the new API you are working on). Furthermore, I assume it is possible to
use the DMxxx without an OSD, right?


Right. That case (2 above) is easily taken care of by the v4l2 device driver.
We used the FBDev driver to drive the OSD layer because that way the VPBE can
be used by user space applications like x-windows. What is the alternative for
this? Is there an example of a v4l2 device using OSD-like hardware and running
x-windows or another traditional graphics application? I am not aware of any,
and our solution seems to be the right one here.

So the solution we used (case 3) involves FBDev driving the OSD layers and
V4L2 driving the video layer.


This is very similar to the ivtv and ivtvfb drivers: if the framebuffer is in
use, then you cannot change the output standard (you'll get an EBUSY error)
through a video device node.


Do ivtvfb and ivtv work with the same set of v4l2 sub-devices for output? In
our case, the VPBE can work with any sub-device that can accept a
BT.656/BT.1120/RGB bus interface. So the FBDev device and the V4L2 device
(either as a standalone device or as co-existent devices) should work with the
same set of sub-devices. So the question is, how can both these bridge devices
work on the same sub-device? If both can work with the same sub-device, then
what you say is true and can be handled. That is the reason we used the
sysfs/encoder manager as explained in my earlier email.

That's exactly what you would expect. If the framebuffer isn't used, then you
can just use the normal V4L2 API to change the output standard.

In practice, I think that you can only change the resolution in the FB API,
not things like the framerate, let alone the precise pixel clock, porch and
sync widths.
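
For reference, the timing fields being discussed live in struct
fb_var_screeninfo; a minimal sketch of reading and writing them through the
fbdev ioctls follows. /dev/fb0 and the example value are placeholders, and
whether a given driver honours such a request is driver-specific:

/* Read the current fbdev timings and write back a modified pixel clock.
 * fbset drives the same FBIOGET/FBIOPUT_VSCREENINFO pair. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fb.h>

int main(void)
{
        struct fb_var_screeninfo var;
        int fd = open("/dev/fb0", O_RDWR);

        if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0)
                return 1;
        printf("pixclock %u ps, porches %u/%u/%u/%u, sync %u/%u\n",
               var.pixclock, var.left_margin, var.right_margin,
               var.upper_margin, var.lower_margin,
               var.hsync_len, var.vsync_len);

        var.pixclock = 37037;   /* example: ~27 MHz pixel clock, in ps */
        if (ioctl(fd, FBIOPUT_VSCREENINFO, &var) < 0)
                perror("FBIOPUT_VSCREENINFO");
        close(fd);
        return 0;
}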


There are 3 use cases:

1) Pure FBDev device driving graphics to the VPBE OSD layers -> sub-devices ->
Display (LCD/TV)

This would require FBDev to load the required v4l2 sub-device (not sure
whether the FBDev community would like this approach) and use it to drive the
output. We will not be able to change the output, but the output resolution
and timing can be controlled through the fbset command, which allows you to
change the pixel clock, porch, sync, etc.

2) Pure V4L2 device driving video to the VPBE video layers -> sub-devices ->
Display (LCD/TV)
- No issues here.

3) v4l2 and FBDev nodes co-exist. V4l2 drives the video layer and FBDev drives
the OSD layers, and the combined output goes -> VPBE -> sub-devices ->
Display (LCD/TV)
- Not sure which bridge device should load up and manage the sub-devices. If
V4l2 manages the sub-devices, how can the FBDev driver set the timings in the
current sub-device, since it has no knowledge of the v4l2 device and the
sub-device it owns/manages? (See the sketch below.)
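
To make the ownership problem concrete, this is roughly how the v4l2 bridge
side talks to a sub-device it registered. Names like vpbe_dev are invented;
v4l2_subdev_call() and the s_std_output op are from the existing sub-device
framework. An FBDev driver simply has no equivalent handle:

/* Sketch: a V4L2 bridge driving a sub-device it registered via
 * v4l2_device_register_subdev(). The struct and function names are
 * invented; the framework calls are the standard ones (circa 2.6.31). */
#include <media/v4l2-device.h>
#include <media/v4l2-subdev.h>

struct vpbe_dev {
        struct v4l2_device v4l2_dev;
        struct v4l2_subdev *enc_sd;     /* encoder sub-device on the bus */
};

static int vpbe_set_output_std(struct vpbe_dev *vpbe, v4l2_std_id std)
{
        /* Only the bridge that registered enc_sd holds this pointer;
         * a separate FBDev driver cannot reach the sub-device at all. */
        return v4l2_subdev_call(vpbe->enc_sd, video, s_std_output, std);
}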


Much better to let the two cooperate: you can use both APIs, but you can't
change the resolution in the fb if streaming is going on, and you can't
change the output standard of a video device node if that changes the
resolution while the framebuffer is in use.
That is what I mean by use case 3). We can live with the restriction. But the
sub-device model is currently v4l2-specific and I am not sure if there is a
way the same sub-device can be accessed by both bridge devices. Any help here
is appreciated.


No need for additional sysfs entries.


If we can use the sub-device framework,