On Tue, Dec 08, 2009 at 09:17:42AM -0200, Mauro Carvalho Chehab wrote:
> Jon Smirl wrote:
> > On Mon, Dec 7, 2009 at 6:44 PM, Mauro Carvalho Chehab
> > <mche...@redhat.com> wrote:
> 
> >>> Where is the documentation for the protocol?
> >> I'm not sure what you mean here. I've started a doc about IR at the media
> > 
> > What is the format of the pulse stream data coming out of the lirc device?
> 
> AFAIK, it is at:
>       http://www.lirc.org/html/index.html
> 
> It would be nice to add it to DocBook after integrating the API into the kernel.
> 
> >> docbook. This is currently inside the kernel Documents/DocBook. If you want
> >> to browse, it is also available as:
> >>
> >>        http://linuxtv.org/downloads/v4l-dvb-apis/ch17.html
> >>
> >> For sure we need to better document the IR's, and explain the API's there.
> >>
> >>> Is it a device interface or something else?
> >> lirc_dev should create a device interface.
> >>
> >>> What about capabilities of the receiver, what frequencies?
> >>> If a receiver has multiple frequencies, how do you report what
> >>> frequency the data came in on?
> >> IMO, via sysfs.
> > 
> > Say you have a hardware device with two IR diodes, one receiving at 38 kHz
> > and one at 56 kHz. Both of these receivers can get pulses. How do we tell
> > the user space app which frequency the pulses were received on? It seems to
> > me that there has to be a header on the pulse data indicating the received
> > carrier frequency. There is also baseband signaling. sysfs won't work for
> > this because of the queuing latency.
> 
> Simply create two interfaces. One for each IR receiver. At sysfs, you'll
> have /sys/class/irrcv/irrcv0 for the first one and /sys/class/irrcv/irrcv1.

Yes, please. Distinct hardware - distinct representation in the kernel.
This is the most sane way.
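
Something along these lines would do (a completely untested sketch; only the
/sys/class/irrcv path comes from this thread, every other name here is made up):

/*
 * Hypothetical sketch: one class device per physical receiver, so each
 * shows up as /sys/class/irrcv/irrcvN and can carry its own attributes
 * (carrier frequency, etc.).
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kdev_t.h>
#include <linux/module.h>

static struct class *irrcv_class;
static struct device *irrcv_dev[2];

static int __init irrcv_demo_init(void)
{
	int i;

	irrcv_class = class_create(THIS_MODULE, "irrcv");
	if (IS_ERR(irrcv_class))
		return PTR_ERR(irrcv_class);

	/* e.g. irrcv0 for the 38 kHz diode, irrcv1 for the 56 kHz one */
	for (i = 0; i < 2; i++) {
		irrcv_dev[i] = device_create(irrcv_class, NULL,
					     MKDEV(0, 0), NULL,
					     "irrcv%d", i);
		if (IS_ERR(irrcv_dev[i])) {
			int err = PTR_ERR(irrcv_dev[i]);

			while (--i >= 0)
				device_unregister(irrcv_dev[i]);
			class_destroy(irrcv_class);
			return err;
		}
	}
	return 0;
}

static void __exit irrcv_demo_exit(void)
{
	int i;

	for (i = 0; i < 2; i++)
		device_unregister(irrcv_dev[i]);
	class_destroy(irrcv_class);
}

module_init(irrcv_demo_init);
module_exit(irrcv_demo_exit);
MODULE_LICENSE("GPL");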

...
> >>
> >>> What is the interface for attaching an in-kernel decoder?
> >> IMO, it should use a kfifo for it. However, if we allow both the raw interface
> >> and the in-kernel decoders to read data from it, we'll need a spinlock to
> >> protect the kfifo.

We should probably do what the input layer does: the data is pushed to all
handlers that signed up for it, and each can deal with it at its leisure.
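
For example (sketch only, all names invented), the core could keep a list of
handlers and hand each raw sample to every one of them:

/*
 * Sketch only: fan every raw IR sample out to all registered handlers,
 * the way the input core passes events to its handlers. Handlers are
 * called with the lock held and so must not sleep; each one copies or
 * queues the event as it sees fit.
 */
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct ir_raw_event {
	unsigned int duration;	/* pulse/space length in microseconds */
	bool pulse;		/* true = pulse, false = space */
};

struct ir_raw_handler {
	struct list_head list;
	void (*event)(struct ir_raw_handler *h, struct ir_raw_event *ev);
};

static LIST_HEAD(ir_raw_handlers);
static DEFINE_SPINLOCK(ir_raw_lock);

void ir_raw_handler_register(struct ir_raw_handler *h)
{
	spin_lock(&ir_raw_lock);
	list_add_tail(&h->list, &ir_raw_handlers);
	spin_unlock(&ir_raw_lock);
}

void ir_raw_handler_unregister(struct ir_raw_handler *h)
{
	spin_lock(&ir_raw_lock);
	list_del(&h->list);
	spin_unlock(&ir_raw_lock);
}

/* Called by the receiver driver for every decoded pulse/space. */
void ir_raw_event_report(struct ir_raw_event *ev)
{
	struct ir_raw_handler *h;

	spin_lock(&ir_raw_lock);
	list_for_each_entry(h, &ir_raw_handlers, list)
		h->event(h, ev);
	spin_unlock(&ir_raw_lock);
}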

> >>
> >>> If there is an in-kernel decoder should the pulse data stop being
> >>> reported, partially stopped, something else?
> >> I don't have a strong opinion here but, from the previous discussions, it
> >> seems that people want it to be double-reported by default. If so, I think
> >> we need to implement a command on the raw interface to allow disabling the
> >> in-kernel decoder while the raw interface is kept open.
> > 
> > Data could be sent to the in-kernel decoders first and then, if they
> > don't handle it, passed to user space.

You do not know what userspace wants to do with the data. It may want to
simply observe it, store it, or do something else entirely. Since we do
provide an interface for such raw[ish] data, we just need to transmit it to
userspace as long as there are users (i.e. the interface is open).
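
In code terms (again only a sketch, reusing the invented ir_raw_event above
and assuming the 2.6.3x kfifo calls), something like:

/*
 * Sketch only: the raw stream is queued for the lirc-style chardev only
 * while somebody actually holds it open; the in-kernel decoders are fed
 * regardless. open_count is bumped in open() and dropped in release().
 */
#include <linux/kfifo.h>
#include <linux/wait.h>
#include <asm/atomic.h>

struct lirc_raw_dev {
	atomic_t open_count;	/* readers currently holding the chardev */
	struct kfifo *fifo;	/* raw samples waiting for read() */
	wait_queue_head_t wait;	/* readers sleep here in read()/poll() */
};

/* Called for every raw sample, after the in-kernel decoders were fed. */
void lirc_raw_report(struct lirc_raw_dev *dev, struct ir_raw_event *ev)
{
	/* No readers -> nothing to queue for userspace. */
	if (!atomic_read(&dev->open_count))
		return;

	kfifo_put(dev->fifo, (unsigned char *)ev, sizeof(*ev));
	wake_up_interruptible(&dev->wait);
}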

> 
> Hmm... like adding a delay if the raw userspace interface is open and, if the
> raw userspace doesn't read all of the pulse data, sending it via the in-kernel
> decoder instead? This can work, but I'm not sure it is the best way, and it
> will require some logic to synchronize the lirc_dev and IR core modules. Also,
> doing it key by key will introduce some delay.
> 
> If you're afraid of the userspace app hanging and leaving you with no IR
> output, it would be simpler to just close the raw interface if available data
> isn't read within a longer timeout (3 seconds? 5 seconds?).

We cannot foresee all use cases. Just let all parties that signed up for the
data get it and process it; do not burden the core with heuristics.

-- 
Dmitry