Re: Remote that breaks current system

2010-08-11 Thread Christoph Bartelmus
Hi Jarod,

on 11 Aug 10 at 10:38, Jarod Wilson wrote:
> On Mon, Aug 2, 2010 at 4:42 PM, Jon Smirl  wrote:
>> On Mon, Aug 2, 2010 at 2:09 PM, Jarod Wilson  wrote:
>>> On Mon, Aug 02, 2010 at 01:13:22PM -0400, Jon Smirl wrote:
>>>> On Mon, Aug 2, 2010 at 12:42 PM, Christoph Bartelmus 
>>>> wrote:
> 
>>>>> It has nothing to do with start bits.
>>>>> The Streamzap remote just sends 14 (sic!) bits instead of 13.
>>>>> The decoder expects 13 bits.
>>>>> Yes, the Streamzap remote does _not_ use standard RC-5.
>>>>> Did I mention this already? Yes. ;-)
>>>>
>>>> If the remote is sending a weird protocol then there are several choices:
>>>>   1) implement raw mode
>>>>   2) make a Stream-Zap protocol engine (it would be a 14b version of
>>>> RC-5). Standard RC5 engine will still reject the messages.
>>>>   3) throw away your Stream-Zap remotes
>>>>
>>>> I'd vote for #3, but #2 will probably make people happier.
>>>
>>> Hm. Yeah, I know a few people who are quite attached to their Streamzap
>>> remotes. I'm not a particularly big fan of it, I only got the thing off
>>> ebay to have the hardware so I could work on the driver. :) So yeah, #3 is
>>> probably not the best route. But I don't know that I'm a huge fan of #2
>>> either. Another decoder engine just for one quirky remote seems excessive,
>>> and there's an option #4:
>>>
>>> 4) just keep passing data out to lirc by default.
>>
>> That's a decent idea. Implement the mainstream, standard protocols in
>> the kernel and kick the weird stuff out to LIRC. We can avoid the
>> whole world of raw mode, config files, etc. Let LIRC deal with all
>> that. If the weird stuff gets enough users bring it in-kernel.  Maybe
>> StreamZap will suddenly sell a million units, you never know.
>>
>> It is easy to implement a StreamZap engine. Just copy the RC5 one.
>> Rename everything and adjust it to require one more bit. You'll have
>> to modify the RC5 engine to wait for a bit interval (timeout) before
>> sending the data up. If you want to get fancy, use a weak symbol in the
>> StreamZap engine to tell the RC5 engine if StreamZap is loaded. Then
>> you can decide to wait the extra bit interval or not.

> The other thought I had was to not load the engine by default, and
> only auto-load it from the streamzap driver itself.

>>> Let lircd handle the default remote in this case. If you want to use
>>> another remote that actually uses a standard protocol, and want to use
>>> in-kernel decoding for that, its just an ir-keytable call away.
>>>
>>> For giggles, I may tinker with implementing another decoder engine though,
>>> if only to force myself to actually pay more attention to protocol
>>> specifics. For the moment, I'm inclined to go ahead with the streamzap
>>> port as it is right now, and include either an empty key table or a
>>> populated but non-functional one.

> So I spent a while beating on things the past few nights for giggles
> (and for a sanity break from "vacation" with too many kids...). I
> ended up doing a rather large amount of somewhat invasive work to the
> streamzap driver itself, but the end result is functional in-kernel
> decoding, and lirc userspace decode continues to behave correctly. RFC
> patch here:
>
> http://wilsonet.com/jarod/ir-core/IR-streamzap-in-kernel-decode.patch
>
> Core changes to streamzap.c itself:
>
> - had to enable reporting of a long space at the conclusion of each
> signal (which is what the lirc driver would do w/timeout_enabled set),
> otherwise I kept having issues with key bounce and/or old data being
> buffered (i.e., press up, cursor moves up. push down, cursor moves up
> then down, press left, it moves down, then left, etc.). Still not
> quite sure what the real problem is there, the lirc userspace decoder
> has no problems with it either way.
>
> - removed streamzap's internal delay buffer, as the ir-core kfifo
> seems to provide the necessary signal buffering just fine by itself.
> Can't see any significant difference in decode performance either
> in-kernel or via lirc with it removed, anyway. (Christoph, can you
> perhaps expand a bit on why the delay buffer was originally needed/how
> to reproduce the problem it was intended to solve? Maybe I'm just not
> triggering it yet.)

Should be fine. Current lircd with timeout support shouldn't have a
problem with that anymore. I was already thinking of suggesting to remove
the delay buffer.

Re: Remote that breaks current system

2010-08-02 Thread Christoph Bartelmus
Hi!

Jon Smirl "jonsm...@gmail.com" wrote:
[...]
>> Got one. The Streamzap PC Remote. It's 14-bit RC5. Can't get it to properly
>> decode in-kernel for the life of me. I got lirc_streamzap 99% of the way
>> ported over the weekend, but this remote just won't decode correctly w/the
>> in-kernel RC5 decoder.

> Manchester encoding may need a decoder that waits to get 2-3 edge
> changes before deciding what the first bit is. As you decode, the output
> is always a couple of bits behind the current input data.
>
> You can build a table of states
> L0 S1 S0 L1  - emit a 1, move forward an edge
> S0 S1 L0 L1 - emit a 0, move forward an edge
>
> By doing it that way you don't have to initially figure out the bit clock.
>
> The current decoder code may not be properly tracking the leading
> zero. In Manchester encoding it is illegal for a bit to be 11 or 00.
> They have to be 01 or 10. If you get a 11 or 00 bit, your decoding is
> off by 1/2 a bit cycle.
>
> Did you note the comment that Extended RC-5 has only a single start
> bit instead of two?

It has nothing to do with start bits.
The Streamzap remote just sends 14 (sic!) bits instead of 13.
The decoder expects 13 bits.
Yes, the Streamzap remote does _not_ use standard RC-5.
Did I mention this already? Yes. ;-)
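
To make Jon's bi-phase pairing rule concrete, here is a minimal user-space
sketch. This is illustrative code only, not the kernel's RC-5 decoder; the
half-bit input format and the bit-value convention (space->mark = 1,
mark->space = 0) are assumptions for the example:

  #include <stdio.h>

  /* Decode 'nbits' bi-phase bits from a run of half-bits,
   * 'M' = mark (pulse), 'S' = space.  An MM or SS pair inside one
   * bit cell means the decoder is off by half a bit cycle. */
  static int decode_biphase(const char *half, int nbits, unsigned int *code)
  {
      *code = 0;
      for (int i = 0; i < nbits; i++) {
          char a = half[2 * i], b = half[2 * i + 1];
          if (a == 'S' && b == 'M')
              *code = (*code << 1) | 1;
          else if (a == 'M' && b == 'S')
              *code = (*code << 1);
          else
              return -1;   /* illegal pair: misaligned or truncated */
      }
      return 0;
  }

  int main(void)
  {
      unsigned int code;
      /* four bits for brevity: 1 1 0 0 */
      if (decode_biphase("SMSMMSMS", 4, &code) == 0)
          printf("code: 0x%x\n", code);
      return 0;
  }

An engine expecting 13 bits will have half-bits left over when a 14-bit
frame arrives, which is exactly why the stock decoder rejects the
Streamzap messages.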

Christoph
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 13/13] IR: Port ene driver to new IR subsystem and enable it.

2010-08-01 Thread Christoph Bartelmus
Hi!

Jon Smirl "jonsm...@gmail.com" wrote:

> On Sun, Aug 1, 2010 at 5:50 AM, Christoph Bartelmus 
> wrote:
>> Hi Jon,
>>
>> on 31 Jul 10 at 14:14, Jon Smirl wrote:
>>> On Sat, Jul 31, 2010 at 1:47 PM, Christoph Bartelmus 
>>> wrote:
>>>> Hi Jon,
>>>>
>>>> on 31 Jul 10 at 12:25, Jon Smirl wrote:
>>>>> On Sat, Jul 31, 2010 at 11:12 AM, Andy Walls 
>>>>> wrote:
>>>>>> I think you won't be able to fix the problem conclusively either way.
>>>>>>  A lot of how the chip's clocks should be programmed depends on how the
>>>>>> GPIOs are used and what crystal is used.
>>>>>>
>>>>>> I suspect many designers will use some reference design layout from
>>>>>> ENE, but it won't be good in every case.  The wire-up of the ENE of
>>>>>> various motherboards is likely something you'll have to live with as
>>>>>> unknowns.
>>>>>>
>>>>>> This is a case where looser tolerances in the in kernel decoders could
>>>>>> reduce this driver's complexity and/or get rid of arbitrary fudge
>>>>>> factors in the driver.
>>>>
>>>>> The tolerances are as loose as they can be. The NEC protocol uses
>>>>> pulses that are 4% longer than JVC. The decoders allow errors up to 2%
>>>>> (50% of 4%).  The crystals used in electronics are accurate to
>>>>> 0.0001%+.
>>>>
>>>> But the standard IR receivers are far from being accurate enough to allow
>>>> tolerance windows of only 2%.
>>>> I'm surprised that this works for you. LIRC uses a standard tolerance of
>>>> 30% / 100 us and even this is not enough sometimes.
>>>>
>>>> For the NEC protocol one signal consists of 22 individual pulses at
>>>> 38kHz. If the receiver just misses one pulse, you already have an error
>>>> of 1/22 > 4%.
>>
>>> There are different types of errors. The decoders can take large
>>> variations in bit times. The problem is with cumulative errors. In
>>> this case the error had accumulated up to 450us in the lead pulse.
>>> That's just too big of an error and caused the JVC code to be
>>> misclassified as NEC.
>>>
>>> I think he said lirc was misclassifying it too. So we both did the same
>>> thing.
>>
>> No way. JVC is a 16 bit code. NEC uses 32 bits. How can you ever confuse
>> JVC with NEC signals?
>>
>> LIRC will work if there is a 4% or 40% or 400% error. Because irrecord
>> generates the config file using your receiver it will compensate for any

> At the end of the process we can build a record and match raw mode if
> we have to.

I'm not talking about raw mode here. lircd will happily decode the signals
despite any timing error, as long as it's consistent.

I'm still interested how JVC can be confused with NEC codes.

>> timing error. It will work with pulses cut down to 50 us like IrDA
>> hardware does and it will work when half of the bits are swallowed like
>> the IgorPlug USB receiver does.

> The code for fixing IrDA and IgorPlug should live inside their low
> level device drivers.  The characteristics of the errors produced by
> this hardware are known so a fix can be written to compensate.

The function f(x) = 50 is not bijective. No way to compensate.

Missing bits cannot be magically regenerated by the driver.

> The
> IgorPlug people might find it easier to fix their firmware.

There is a firmware patch available? Do you have a pointer?

Christoph


Re: [PATCH 13/13] IR: Port ene driver to new IR subsystem and enable it.

2010-08-01 Thread Christoph Bartelmus
Hi Jon,

on 31 Jul 10 at 14:14, Jon Smirl wrote:
> On Sat, Jul 31, 2010 at 1:47 PM, Christoph Bartelmus 
> wrote:
>> Hi Jon,
>>
>> on 31 Jul 10 at 12:25, Jon Smirl wrote:
>>> On Sat, Jul 31, 2010 at 11:12 AM, Andy Walls 
>>> wrote:
>>>> I think you won't be able to fix the problem conclusively either way.  A
>>>> lot of how the chip's clocks should be programmed depends on how the
>>>> GPIOs are used and what crystal is used.
>>>>
>>>> I suspect many designers will use some reference design layout from ENE,
>>>> but it won't be good in every case.  The wire-up of the ENE of various
>>>> motherboards is likely something you'll have to live with as unknowns.
>>>>
>>>> This is a case where looser tolerances in the in kernel decoders could
>>>> reduce this driver's complexity and/or get rid of arbitrary fudge
>>>> factors in the driver.
>>
>>> The tolerances are as loose as they can be. The NEC protocol uses
>>> pulses that are 4% longer than JVC. The decoders allow errors up to 2%
>>> (50% of 4%).  The crystals used in electronics are accurate to
>>> 0.0001%+.
>>
>> But the standard IR receivers are far from being accurate enough to allow
>> tolerance windows of only 2%.
>> I'm surprised that this works for you. LIRC uses a standard tolerance of
>> 30% / 100 us and even this is not enough sometimes.
>>
>> For the NEC protocol one signal consists of 22 individual pulses at 38kHz.
>> If the receiver just misses one pulse, you already have an error of
>> 1/22 > 4%.

> There are different types of errors. The decoders can take large
> variations in bit times. The problem is with cumulative errors. In
> this case the error had accumulated up to 450us in the lead pulse.
> That's just too big of an error and caused the JVC code to be
> misclassified as NEC.
>
> I think he said lirc was misclassifying it too. So we both did the same
> thing.

No way. JVC is a 16 bit code. NEC uses 32 bits. How can you ever confuse  
JVC with NEC signals?

LIRC will work if there is a 4% or 40% or 400% error. Because irrecord  
generates the config file using your receiver it will compensate for any  
timing error. It will work with pulses cut down to 50 us like IrDA  
hardware does and it will work when half of the bits are swallowed like  
the IgorPlug USB receiver does.

But of course the driver should try to generate timings as accurate as  
possible.

Christoph


Re: [PATCH 13/13] IR: Port ene driver to new IR subsystem and enable it.

2010-08-01 Thread Christoph Bartelmus
Hi Jon,

on 31 Jul 10 at 17:53, Jon Smirl wrote:
> On Sat, Jul 31, 2010 at 2:51 PM, Andy Walls  wrote:
>> On Sat, 2010-07-31 at 14:14 -0400, Jon Smirl wrote:
>>> On Sat, Jul 31, 2010 at 1:47 PM, Christoph Bartelmus 
>>> wrote:
>>>> Hi Jon,
>>>>
>>>> on 31 Jul 10 at 12:25, Jon Smirl wrote:
>>>>> On Sat, Jul 31, 2010 at 11:12 AM, Andy Walls 
>>>>> wrote:
>>>>>> I think you won't be able to fix the problem conclusively either way.
>>>>>>  A lot of how the chip's clocks should be programmed depends on how the
>>>>>> GPIOs are used and what crystal is used.
>>>>>>
>>>>>> I suspect many designers will use some reference design layout from
>>>>>> ENE, but it won't be good in every case.  The wire-up of the ENE of
>>>>>> various motherboards is likely something you'll have to live with as
>>>>>> unknowns.
>>>>>>
>>>>>> This is a case where looser tolerances in the in kernel decoders could
>>>>>> reduce this driver's complexity and/or get rid of arbitrary fudge
>>>>>> factors in the driver.
>>>>
>>>>> The tolerances are as loose as they can be. The NEC protocol uses
>>>>> pulses that are 4% longer than JVC. The decoders allow errors up to 2%
>>>>> (50% of 4%).  The crystals used in electronics are accurate to
>>>>> 0.0001%+.
>>>>
>>>> But the standard IR receivers are far from being accurate enough to allow
>>>> tolerance windows of only 2%.
>>>> I'm surprised that this works for you. LIRC uses a standard tolerance of
>>>> 30% / 100 us and even this is not enough sometimes.
>>>>
>>>> For the NEC protocol one signal consists of 22 individual pulses at
>>>> 38kHz. If the receiver just misses one pulse, you already have an error
>>>> of 1/22 > 4%.
>>>
>>> There are different types of errors. The decoders can take large
>>> variations in bit times. The problem is with cumulative errors. In
>>> this case the error had accumulated up to 450us in the lead pulse.
>>> That's just too big of an error
>>
>> Hi Jon,
>>
>> Hmmm.  Leader marks are, by protocol design, there to give time for the
>> receiver's AGC to settle.  We should make it OK to miss somewhat large
>> portions of leader marks.  I'm not sure what the harm is in accepting
>> too long of a leader mark, but I'm pretty sure a leader mark that is too
>> long will always be due to systematic error and not noise errors.
>>
>> However, if we know we have systematic errors caused by unknowns, we
>> should be designing and implementing a decoding system that reasonably
>> deals with those systematic errors.  Again the part of the system almost
>> completely out of our control is the remote controls, and we *have no
>> control* over systematic errors introduced by remotes.

> We haven't encountered remotes with systematic errors. If remotes had
> large errors in them they wouldn't be able to reliably control their
> target devices. Find a remote that won't work with the protocol
> engines and a reasonably accurate receiver.

>>
>> Obviously we want to reduce or eliminate systematic errors by
>> determining the unknowns and undoing their effects or by compensating
>> for their overall effect.  But in the case of the ENE receiver driver,
>> you didn't seem to like the Maxim's software compensation for the
>> systematic receiver errors.

> I would be happier if we could track down the source of the error
> instead of sticking a bandaid on at the end of the process.

>>> and caused the JVC code to be
>>> misclassified as NEC.
>>
>> I still have not heard why we need protocol discrimination/classification
>> in the kernel.  Doing discrimination between two protocols with such
>> close timings is whose requirement again?

> If we don't do protocol engines we have to revert back to raw
> recording and having everyone train the system with their remotes. The
> goal is to eliminate the training step. We would also have to have
> large files (LIRC configs) for building the keymaps and a new API to
> deal with them. With the engines the key presses are identified by
> short strings.

Only 437 of 3486 config files on lirc.org use raw mode (probably what you
mean by large files). Many of them are recorded with a very old
irrecord version. Current versions of irrecord

Re: [PATCH 13/13] IR: Port ene driver to new IR subsystem and enable it.

2010-07-31 Thread Christoph Bartelmus
Hi Jon,

on 31 Jul 10 at 12:25, Jon Smirl wrote:
> On Sat, Jul 31, 2010 at 11:12 AM, Andy Walls 
> wrote:
>> I think you won't be able to fix the problem conclusively either way.  A
>> lot of how the chip's clocks should be programmed depends on how the
>> GPIOs are used and what crystal is used.
>>
>> I suspect many designers will use some reference design layout from ENE,
>> but it won't be good in every case.  The wire-up of the ENE of various
>> motherboards is likely something you'll have to live with as unknowns.
>>
>> This is a case where looser tolerances in the in kernel decoders could
>> reduce this driver's complexity and/or get rid of arbitrary fudge
>> factors in the driver.

> The tolerances are as loose as they can be. The NEC protocol uses
> pulses that are 4% longer than JVC. The decoders allow errors up to 2%
> (50% of 4%).  The crystals used in electronics are accurate to
> 0.0001%+.

But the standard IR receivers are far from being accurate enough to allow
tolerance windows of only 2%.
I'm surprised that this works for you. LIRC uses a standard tolerance of
30% / 100 us and even this is not enough sometimes.

For the NEC protocol one signal consists of 22 individual pulses at 38kHz.
If the receiver just misses one pulse, you already have an error of
1/22 > 4%.
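
Note that missing one pulse out of 22 is an error of 1/22, about 4.5%,
already outside a 2% window. A sketch of the kind of window check the
30% / 100 us figure implies (illustrative code, not lircd's actual
implementation):

  #include <stdio.h>
  #include <stdlib.h>

  /* Match a measured duration (us) against an expected one within
   * 30% relative tolerance, but never tighter than 100 us absolute. */
  static int duration_matches(int measured, int expected)
  {
      int delta = abs(measured - expected);
      int window = expected * 30 / 100;
      if (window < 100)
          window = 100;
      return delta <= window;
  }

  int main(void)
  {
      /* a 9000 us NEC-style header that arrives 450 us long still fits */
      printf("%d\n", duration_matches(9450, 9000));   /* prints 1 */
      return 0;
  }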

Christoph


Re: [PATCH 10/13] IR: extend interfaces to support more device settings LIRC: add new IOCTL that enables learning mode (wide band receiver)

2010-07-31 Thread Christoph Bartelmus
Hi Maxim,

on 31 Jul 10 at 01:01, Maxim Levitsky wrote:
> On Fri, 2010-07-30 at 23:22 +0200, Christoph Bartelmus wrote:
[...]
>>> +#define LIRC_SET_WIDEBAND_RECEIVER _IOW('i', 0x0023, __u32)
>>
>> If you really want this new ioctl, then it should be clarified how it
>> behaves in relation to LIRC_SET_MEASURE_CARRIER_MODE.

> In my opinion, I won't need the LIRC_SET_MEASURE_CARRIER_MODE,
> I would just optionally turn that on in learning mode.
> You disagree, and since that is not important (besides, TX and learning
> features are present only in a fraction of ENE devices, and the only user I
> did the debugging with doesn't seem to want to help debug that code
> anymore...)
>
> But anyway, in the current state I want these features to be independent.
> The driver will enable learning mode if it has to.

Please avoid the term "learning mode", as it probably means something
different to you than to me.

>
> I'll add the documentation.

>>
>> Do you have to enable the wide-band receiver explicitly before you can
>> enable carrier reports or does enabling carrier reports implicitly switch
>> to the wide-band receiver?
> I would implicitly switch the learning mode on, until the user turns off
> the carrier reports.

You mean that you'll implicitly switch on the wide-band receiver. Ok.

>>
>> What happens if carrier mode is enabled and you explicitly turn off the
>> wide-band receiver?
> Wouldn't it be better to have one ioctl for both after all?

There may be hardware that allows carrier measurement but does not have a  
wide-band receiver. And there may be hardware that does have a wide-band  
receiver but does not allow carrier measurement. irrecord needs to be able  
to distinguish these cases, so we need separate ioctls.

I'd say: carrier reports may switch on the wide-band receiver implicitly.  
In that case the wide-band receiver cannot be switched off explicitly  
until carrier reports are disabled again. It just needs to be documented.
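
A user-space sketch of the resulting usage, based on the constants under
discussion in this series (LIRC_SET_WIDEBAND_RECEIVER and the CAN_* flag
names are from Maxim's proposal, so treat them as tentative):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/lirc.h>    /* or the patched lirc.h from this series */

  int main(void)
  {
      int fd = open("/dev/lirc0", O_RDONLY);
      unsigned int features = 0, on = 1;

      if (fd < 0 || ioctl(fd, LIRC_GET_FEATURES, &features) < 0)
          return 1;

      /* irrecord can now tell the two capabilities apart */
      if (features & LIRC_CAN_MEASURE_CARRIER)
          /* may implicitly switch on the wide-band receiver */
          ioctl(fd, LIRC_SET_MEASURE_CARRIER_MODE, &on);

      return 0;
  }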

>>
>> And while we're at interface stuff:
>> Do we really need LIRC_SETUP_START and LIRC_SETUP_END? It is only used
>> once in lircd during startup.
> I don't think so.
>

Christoph


Re: [PATCH 10/13] IR: extend interfaces to support more device settings LIRC: add new IOCTL that enables learning mode (wide band receiver)

2010-07-30 Thread Christoph Bartelmus
Hi!

Maxim Levitsky "maximlevit...@gmail.com" wrote:

> Still missing features: carrier report & timeout reports.
> Will need to pack these into ir_raw_event


Hm, this patch changes the LIRC interface but I can't see the corresponding
patch to the documentation.

[...]
>   * @tx_ir: transmit IR
>   * @s_idle: optional: enable/disable hardware idle mode, upon which,
> +<<< current
>   *   device doesn't interrupt host untill it sees IR data
> +===

Huh?

> + device doesn't interrupt host untill it sees IR data
> + * @s_learning_mode: enable wide band receiver used for learning
+ patched

s/untill/until/

[...]
>  #define LIRC_CAN_MEASURE_CARRIER          0x02000000
> +#define LIRC_CAN_HAVE_WIDEBAND_RECEIVER   0x04000000

LIRC_CAN_USE_WIDEBAND_RECEIVER

[...]
> @@ -145,7 +146,7 @@
>   * if enabled from the next key press on the driver will send
>   * LIRC_MODE2_FREQUENCY packets
>   */
> -#define LIRC_SET_MEASURE_CARRIER_MODE  _IOW('i', 0x001d, __u32)
> +#define LIRC_SET_MEASURE_CARRIER_MODE _IOW('i', 0x001d, __u32)
>
>  /*
>   * to set a range use
> @@ -162,4 +163,6 @@
>  #define LIRC_SETUP_START   _IO('i', 0x0021)
>  #define LIRC_SETUP_END _IO('i', 0x0022)
>
> +#define LIRC_SET_WIDEBAND_RECEIVER _IOW('i', 0x0023, __u32)

If you really want this new ioctl, then it should be clarified how it  
behaves in relation to LIRC_SET_MEASURE_CARRIER_MODE.

Do you have to enable the wide-band receiver explicitly before you can  
enable carrier reports or does enabling carrier reports implicitly switch  
to the wide-band receiver?

What happens if carrier mode is enabled and you explicitly turn off the  
wide-band receiver?

And while we're at interface stuff:
Do we really need LIRC_SETUP_START and LIRC_SETUP_END? It is only used  
once in lircd during startup.

Christoph


Re: [PATCH 0/9 v2] IR: few fixes, additions and ENE driver

2010-07-29 Thread Christoph Bartelmus
Hi!

Maxim Levitsky "maximlevit...@gmail.com" wrote:

> On Thu, 2010-07-29 at 18:58 +0200, Christoph Bartelmus wrote:
>> Hi Maxim,
>>
>> on 29 Jul 10 at 17:41, Maxim Levitsky wrote:
>> [...]
>>>>> Note that I send the timeout report with a zero value.
>>>>> I don't think that this value is important.
>>>>
>>>> This does not sound good. Of course the value is important to userspace
>>>> and 2 spaces in a row will break decoding.
>>>>
>>> Could you explain exactly how timeout reports work?
>>
>> It all should be documented in the interface description. Jarod probably
>> can point you where it can be found.
>> Timeout reports can only be generated by the hardware because only the
>> hardware can know the exact amount of time passed since the last pulse
>> when any kind of buffering is used by the hardware. You see this esp. with
>> USB devices.
> In my case the hardware doesn't have that capability.
> However, I thought that timeout reports are useful to stop the hardware as
> soon as the timeout is hit.

You are starting a software timer for this? That's not the intention of  
timeout reports. It's just a hint to the decoder, which needs to run its  
own timer anyway. Having to stop the hardware is something very specific  
to your driver.

>>> The Lirc interface isn't set in stone, so how about a reasonable compromise.
>>> After a reasonably long period of inactivity (200 ms for example), a space
>>> is sent, and then the next report starts with a pulse.
>>> So gaps between keypresses will be a maximum of 200 ms, and as a bonus I
>>> could rip out the logic that deals with remembering the time?
>>
>> For sure I will not agree to any constant introduced here. And I also
>> don't see why. Can you explain why you are trying to change the lirc
>> interface here?

> Currently, to comply with strict lirc requirements I have to send one
> big space between keypresses. Of course I can send it only when I get
> the next pulse, which might happen much later.
>
> However, the in-kernel decoders depend on the last space to be sent
> right away.

Ugh. What's the most polite way to express my disgust? ;)

> that is, I need to end a keypress with a space, but currently it ends
> with a pulse.
>
> So my idea was to wait a reasonable time for the next pulse, and if it doesn't
> arrive, send a space mark even though no new pulse is registered.
>
> Of course the size of that space can be configured.

The "reasonable time" is protocol specific and must be handled by the  
decoder IMHO and not by the driver.

Christoph


Re: [PATCH 0/9 v2] IR: few fixes, additions and ENE driver

2010-07-29 Thread Christoph Bartelmus
Hi!

Maxim Levitsky "maximlevit...@gmail.com" wrote:
[...]
> Could you explain exactly how timeout reports work?
[...]
>>> So, a timeout report is just another sample, with a mark attached, that
>>> this is the last sample, right?
>>
>> No, a timeout report is just an additional hint for the decoder that a
>> specific amount of time has passed since the last pulse _now_.
>>
>> [...]
>>> In that case, let's do it this way:
>>>
>>> As soon as the timeout is reached, I just send lirc the timeout report.
>>> Then the next keypress will start with a pulse.
>>
>> When timeout reports are enabled the sequence must be:
>> <pulse> <timeout> <space> <pulse>
>> where <timeout> is optional.
>>
>> lircd will not work when you leave out the space. It must know the exact
>> time between the pulses. Some hardware generates timeout reports that are
>> too short to distinguish between spaces that are so short that the next
>> sequence can be interpreted as a repeat or longer spaces which indicate
>> that this is a new key press.

> Let me give an example to see if I got that right.
>
>
> Suppose we have this sequence of reports from the driver:
>
> 500 (pulse)
> 20 (timeout)
> 1 (space)
> 500 (pulse)
>
>
> Is that correct that time between first and second pulse is
> '10020' ?

No, it's 1. The timeout is optional and just a hint to the decoder  
how much time has passed already since the last pulse. It does not change  
the meaning of the next space.
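
To spell the rule out with concrete, made-up numbers:

  500    (pulse)
  20000  (timeout report: 20000 us have passed since the pulse _now_)
  100000 (space)
  500    (pulse)

The time between the two pulses is the 100000 us space; the timeout report
only tells the decoder that 20000 us of it had already elapsed when the
report was generated, and does not add to the space.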

Christoph


Re: [PATCH 0/9 v2] IR: few fixes, additions and ENE driver

2010-07-29 Thread Christoph Bartelmus
Hi Maxim,

on 29 Jul 10 at 19:26, Maxim Levitsky wrote:
> On Thu, 2010-07-29 at 11:38 -0400, Andy Walls wrote:
>> On Thu, 2010-07-29 at 17:41 +0300, Maxim Levitsky wrote:
>>> On Thu, 2010-07-29 at 09:23 +0200, Christoph Bartelmus wrote:
>>>> Hi Maxim,
>>>>
>>>> on 29 Jul 10 at 02:40, Maxim Levitsky wrote:
>>>> [...]
>>>>> In addition to comments, I changed the helper function that processes
>>>>> samples so it sends the last space as soon as the timeout is reached.
>>>>> This somewhat breaks lirc, because now it gets 2 spaces in a row.
>>>>> However, if it uses timeout reports (which are now fully supported)
>>>>> it will get such a report in the middle.
>>>>>
>>>>> Note that I send the timeout report with a zero value.
>>>>> I don't think that this value is important.
>>>>
>>>> This does not sound good. Of course the value is important to userspace
>>>> and 2 spaces in a row will break decoding.
>>>>
>>>> Christoph
>>>
>>> Could you explain exactly how timeout reports work?
>>>
>>> The Lirc interface isn't set in stone, so how about a reasonable compromise.
>>> After a reasonably long period of inactivity (200 ms for example), a space
>>> is sent, and then the next report starts with a pulse.
>>> So gaps between keypresses will be a maximum of 200 ms, and as a bonus I
>>> could rip out the logic that deals with remembering the time?
>>>
>>> Best regards,
>>> Maxim Levitsky

> So, a timeout report is just another sample, with a mark attached, that
> this is the last sample, right?

No, a timeout report is just an additional hint for the decoder that a  
specific amount of time has passed since the last pulse _now_.

[...]
> In that case, let's do it this way:
>
> As soon as the timeout is reached, I just send lirc the timeout report.
> Then the next keypress will start with a pulse.

When timeout reports are enabled the sequence must be:
  <pulse> <timeout> <space> <pulse>
where <timeout> is optional.

lircd will not work when you leave out the space. It must know the exact  
time between the pulses. Some hardware generates timeout reports that are  
too short to distinguish between spaces that are so short that the next  
sequence can be interpreted as a repeat or longer spaces which indicate  
that this is a new key press.

Christoph


Re: [PATCH 0/9 v2] IR: few fixes, additions and ENE driver

2010-07-29 Thread Christoph Bartelmus
Hi Maxim,

on 29 Jul 10 at 17:41, Maxim Levitsky wrote:
[...]
>>> Note that I send the timeout report with a zero value.
>>> I don't think that this value is important.
>>
>> This does not sound good. Of course the value is important to userspace
>> and 2 spaces in a row will break decoding.
>>
>> Christoph

> Could you explain exactly how timeout reports work?

It all should be documented in the interface description. Jarod probably  
can point you where it can be found.
Timeout reports can only be generated by the hardware because only the  
hardware can know the exact amount of time passed since the last pulse  
when any kind of buffering is used by the hardware. You see this esp. with  
USB devices.

> The Lirc interface isn't set in stone, so how about a reasonable compromise.
> After a reasonably long period of inactivity (200 ms for example), a space
> is sent, and then the next report starts with a pulse.
> So gaps between keypresses will be a maximum of 200 ms, and as a bonus I
> could rip out the logic that deals with remembering the time?

For sure I will not agree to any constant introduced here. And I also  
don't see why. Can you explain why you are trying to change the lirc  
interface here?

Christoph


Re: [PATCH 0/9 v2] IR: few fixes, additions and ENE driver

2010-07-29 Thread Christoph Bartelmus
Hi Andy,

on 29 Jul 10 at 11:38, Andy Walls wrote:
> On Thu, 2010-07-29 at 17:41 +0300, Maxim Levitsky wrote:
>> On Thu, 2010-07-29 at 09:23 +0200, Christoph Bartelmus wrote:
>>> Hi Maxim,
>>>
>>> on 29 Jul 10 at 02:40, Maxim Levitsky wrote:
>>> [...]
>>>> In addition to comments, I changed the helper function that processes
>>>> samples so it sends the last space as soon as the timeout is reached.
>>>> This somewhat breaks lirc, because now it gets 2 spaces in a row.
>>>> However, if it uses timeout reports (which are now fully supported)
>>>> it will get such a report in the middle.
>>>>
>>>> Note that I send the timeout report with a zero value.
>>>> I don't think that this value is important.
>>>
>>> This does not sound good. Of course the value is important to userspace
>>> and 2 spaces in a row will break decoding.
>>>
>>> Christoph
>>
>> Could you explain exactly how timeout reports work?
>>
>> The Lirc interface isn't set in stone, so how about a reasonable compromise.
>> After a reasonably long period of inactivity (200 ms for example), a space
>> is sent, and then the next report starts with a pulse.
>> So gaps between keypresses will be a maximum of 200 ms, and as a bonus I
>> could rip out the logic that deals with remembering the time?
>>
>> Best regards,
>> Maxim Levitsky

> Just for some context, the Conexant hardware generates such reports on
its hardware Rx FIFO:

> From section 3.8.2.3 of

> http://dl.ivtvdriver.org/datasheets/video/cx25840.pdf
>
> "When the demodulated input signal no longer transitions, the RX pulse
> width timer overflows, which indicates the end of data transmission.
> When this occurs, the timer value contains all 1s. This value can be
> stored to the RX FIFO, to indicate the end of the transmission [...].
> Additionally, a status bit is set which can interrupt the
> microprocessor, [...]".
>
> So the value in the hardware RX FIFO is the maximum time measurable
> given the current hardware clock divider settings, plus a flag bit
> indicating overflow.
>
> The CX2388[58] IR implementation currently translates that hardware
> notification into V4L2_SUBDEV_IR_PULSE_RX_SEQ_END:
>
> http://git.linuxtv.org/awalls/v4l-dvb.git?a=blob;f=drivers/media/video/cx23885/cx23888-ir.c;h=51f21636e639330bcf528568c0f08c7a4a674f42;hb=094fc94360cf01960da3311698fedfca566d4712#l678
>
> which is defined here:
>
> http://git.linuxtv.org/awalls/v4l-dvb.git?a=blob;f=include/media/v4l2-subdev.h;h=bacd52568ef9fd17787554aa347f46ca6f23bdb2;hb=094fc94360cf01960da3311698fedfca566d4712#l366
>
> as
>
> #define V4L2_SUBDEV_IR_PULSE_RX_SEQ_END 0xFFFFFFFF
>
>
> I didn't look too hard at it, but IIRC the in kernel decoders would have
> interpreted this value incorrectly (the longest possible mark).
> Instead, I just pass along the longest possible space:
>
> http://git.linuxtv.org/awalls/v4l-dvb.git?a=blob;f=drivers/media/video/cx23885/cx23885-input.c;h=3f924e21b9575f7d67d99d71c8585d41828aabfe;hb=094fc94360cf01960da3311698fedfca566d4712#l49
>
> so it acts as in band signaling if anyone is looking for it, and the in
> kernel decoders happily treat it like a long space.
>
> With a little work, I could pass the actual time it took for the Rx
> timer to timeout as well (Provide the space measurement *and* the in
> band signal), if needed.

The value for LIRC_MODE2_TIMEOUT needs to be the exact value of the actual  
time passed since the last pulse. When you just send the longest possible  
space instead, you'll make repeat detection impossible.
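
In other words, a timeout sample has to carry the measured duration, not a
sentinel. A sketch using the MODE2 encoding (the fallback defines mirror
the lirc.h constants introduced with timeout support):

  #ifndef LIRC_MODE2_TIMEOUT
  #define LIRC_MODE2_TIMEOUT 0x03000000
  #define LIRC_VALUE_MASK    0x00ffffff
  #endif

  /* Compose one MODE2 sample reporting that 'elapsed_us' microseconds
   * have really passed since the last pulse. */
  static unsigned int mode2_timeout_sample(unsigned int elapsed_us)
  {
      return LIRC_MODE2_TIMEOUT | (elapsed_us & LIRC_VALUE_MASK);
  }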

Christoph



Re: [PATCH 5/9] IR: extend interfaces to support more device settings

2010-07-29 Thread Christoph Bartelmus
Hi Maxim,

on 29 Jul 10 at 18:27, Maxim Levitsky wrote:
> On Thu, 2010-07-29 at 09:25 +0200, Christoph Bartelmus wrote:
>> Hi!
>>
>> Maxim Levitsky "maximlevit...@gmail.com" wrote:
>>
>>> Also reuse LIRC_SET_MEASURE_CARRIER_MODE as LIRC_SET_LEARN_MODE
>>> (LIRC_SET_LEARN_MODE will start carrier reports if possible, and
>>> tune receiver to wide band mode)
>>
>> I don't like the rename of the ioctl. The ioctl should enable carrier
>> reports. Anything else is hardware specific. Learn mode gives a somewhat
>> wrong association to me. irrecord has always been using "learn mode"
>> without ever using this ioctl.

> Why?

If an ioctl enables/disables measuring of the carrier, then call it  
LIRC_SET_MEASURE_CARRIER_MODE and not LIRC_SET_LEARN_MODE.

Whether we need a LIRC_ENABLE_WIDE_BAND_RECEIVER ioctl is another  
question.

> Carrier measurement (if supported by hardware) I think should always be
> enabled, because it can help in-kernel decoders.

That does not work in the real-world scenario. All receivers with a high  
range demodulate the signal and you won't get the carrier.

[...]
> Another thing is reporting these results to lirc.
> By default lirc shouldn't get carrier reports, but as soon as irrecord
> starts, it can place the device in a special mode that allows it to capture
> input better, and optionally do carrier reports.

And that's what LIRC_SET_MEASURE_CARRIER_MODE is made for.

> Do you think carrier reports are needed by lircd?

No.

Christoph


Re:

2010-07-29 Thread Christoph Bartelmus
Hi Maxim,

on 29 Jul 10 at 02:40, Maxim Levitsky wrote:
[...]
> In addition to comments, I changed the helper function that processes samples
> so it sends the last space as soon as the timeout is reached.
> This somewhat breaks lirc, because now it gets 2 spaces in a row.
> However, if it uses timeout reports (which are now fully supported)
> it will get such a report in the middle.
>
> Note that I send the timeout report with a zero value.
> I don't think that this value is important.

This does not sound good. Of course the value is important to userspace  
and 2 spaces in a row will break decoding.

Christoph


Re: [PATCH 5/9] IR: extend interfaces to support more device settings

2010-07-29 Thread Christoph Bartelmus
Hi!

Maxim Levitsky "maximlevit...@gmail.com" wrote:

> Also reuse LIRC_SET_MEASURE_CARRIER_MODE as LIRC_SET_LEARN_MODE
> (LIRC_SET_LEARN_MODE will start carrier reports if possible, and
> tune receiver to wide band mode)

I don't like the rename of the ioctl. The ioctl should enable carrier
reports. Anything else is hardware specific. Learn mode gives a somewhat
wrong association to me. irrecord has always been using "learn mode"
without ever using this ioctl.

Christoph


Re: [PATCH 1/3] IR: add core lirc device interface

2010-06-04 Thread Christoph Bartelmus
Hi Mauro,

on 04 Jun 10 at 01:10, Mauro Carvalho Chehab wrote:
> Em 03-06-2010 19:06, Jarod Wilson escreveu:
[...]
>> As for the compat bits... I actually pulled them out of the Fedora kernel
>> and userspace for a while, and there were only a few people who really ran
>> into issues with it, but I think if the new userspace and kernel are rolled
>> out at the same time in a new distro release (i.e., Fedora 14, in our
>> particular case), it should be mostly transparent to users.

> For sure this will happen on all distros that follow upstream: they'll
> update lirc to fulfill the minimal requirement at Documentation/Changes.
>
> The issue will appear only to people that manually compile the kernel and lirc.
> Those users are likely smart enough to upgrade to a newer lirc version if
> they notice a trouble, and to check at the forums.

>> Christoph
>> wasn't a fan of the change, and actually asked me to revert it, so I'm
>> cc'ing him here for further feedback, but I'm inclined to say that if this
>> is the price we pay to get upstream, so be it.

> I understand Christoph view, but I think that having to deal with compat
> stuff forever is a high price to pay, as the impact of this change is
> transitory and shouldn't be hard to deal with.

I'm not against doing this change, but it has to be coordinated between  
drivers and user-space.
Just changing lirc.h is not enough. You also have to change all user-space  
applications that use the affected ioctls to use the correct types.
That's what Jarod did not address last time so I asked him to revert the  
change. And I'd also like to collect all other change requests to the API  
if there are any and do all changes in one go.

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-08 Thread Christoph Bartelmus
Hi Andy,

on 07 Dec 09 at 23:10, Andy Walls wrote:
[...]
> (Christoph can correct me if I get anything wrong.)

Just a few additions.

[...]
>> What is the time standard for the data, where does it come from?

> I think it is usec, IIRC.

Yes, it is.

> I know that the hardware I work with has sub 100 ns resolution,

The highest IR carrier frequency I know of is 500kHz; usec resolution is
enough even for raw modulated IR pulses. But you only look at the signal
after it has been demodulated by the IR chip, so higher resolution would
be overkill.

[...]
>> How do you define the start and stop of sequences?

> For the end of Rx signalling:
>
> Well with the Conexant hardware I can set a maximum pulse (mark or
> space) width, and the hardware will generate an Rx Timeout interrupt to
> signal the end of Rx when a space ends up longer than that max pulse
> width.  The hardware also puts a special marker in the hardware pulse
> width measurement FIFO (in band signalling essentially).
>
> I'm not sure anything like that gets communicated to userspace via
> lirc_dev, and I'm too tired to doublecheck right now.

There is no such thing in the protocol. Some devices cannot provide any  
end of signal marker, so lircd handles this using timers.

If there is some interest, the MODE2 protocol can be extended. We still  
have 7 bits unused...
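
For reference, the layout those 7 bits live in (a sketch of the classic
lirc.h constants):

  /* One MODE2 sample is a __u32:
   *   bits 31..24  type, of which only PULSE_BIT (0x01000000) is used,
   *                leaving 7 of the 8 bits free for extensions
   *   bits 23..0   duration in microseconds (PULSE_MASK 0x00ffffff)
   */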

> If you have determined the protocol you are after, it's easy to know
> what the pulse count should be and what the max pulse width should be (+
> slop for crappy hardware) so finding the end of an Rx isn't hard.  The
> button repeats intervals are *very* large.  I've never seen a remote
> rapid fire codes back to back.

I did. There are some protocols that have a gap of only 6000 us between  
signals. And the settop boxes are very picky about this. If you make it  
too long, they won't accept the command.

[...]
>> Is transmitting synchronous or queued?

> kfifo's IIRC.

No, it's synchronous.

>> How big is the transmit queue?

No queue.

[...]
> My particular gripes about the current LIRC interface:
>
> 1. The one thing that I wish were documented better is the distinction
> between LIRC_MODE_PULSE, LIRC_MODE_RAW, and LIRC_MODE2 modes of
> operation.  I think I've figured it out, but I had to look at a lot of
> LIRC drivers to do so.

So far no driver uses RAW, and lircd does not support it.
PULSE is used on the transmit path, MODE2 on the receive path.
There is no special reasoning for that, it's rather historic.
MODE2 makes sense on the receive path because you can easily distinguish  
between pulse/space.
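
A minimal sketch of the two directions (device path hypothetical; a
transmit buffer is conventionally a flat array of microsecond durations
with an odd number of entries, starting and ending with a pulse):

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/lirc0", O_RDWR);
      if (fd < 0)
          return 1;

      /* PULSE (transmit): pulse, space, pulse ... in us */
      unsigned int tx[] = { 900, 450, 900 };
      write(fd, tx, sizeof(tx));

      /* MODE2 (receive): each __u32 is a duration plus type bits */
      unsigned int sample;
      if (read(fd, &sample, sizeof(sample)) == sizeof(sample))
          printf("%s %u us\n",
                 (sample & 0x01000000) ? "pulse" : "space",  /* PULSE_BIT */
                 sample & 0x00ffffff);                       /* PULSE_MASK */
      return 0;
  }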

> 2. I have hardware where I can set max_pulse_width so I can optimize
> pulse timer resolution and have the hardware time out rapidly on end of
> RX.  I also have hardware where I can set a min_pulse_width to set a
> hardware low-pass/glitch filter.  Currently LIRC doesn't have any way to
> set these, but it would be nice to have.

Should be really easy to add these. The actual values could be derived  
from the config files easily.

> In band signalling of a
> hardware detected "end of Rx" may also make sense then too.

See above.

> 3. As I mentioned before, it would be nice if LIRC could set a batch of
> parameters atomically somehow, instead of with a series of ioctl()s.  I
> can work around this in kernel though.

Is there any particular sequence that you are concerned about?
Setting carrier frequency and then duty cycle is a bit problematic.
Currently it's solved by resetting the duty cycle to 50% each time you  
change the carrier frequency.
But as the LIRC interface is "one user only", I don't see a real problem.

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-08 Thread Christoph Bartelmus
Hi Jon,

on 08 Dec 09 at 08:34, Jon Smirl wrote:
[...]
> The point of those design review questions was to illustrate that the
> existing LIRC system is only partially designed. Subsystems need to be
> fully designed before they get merged.

I'd say that a system that has proven itself in real-world applications
for >10 years does not deserve to be called partially designed.

> For example 36-40K and 56K IR signals are both in use. It is a simple
> matter to design a receiver (or buy two receivers)  that would support
> both these frequencies. But the current LIRC model only supports  a
> single IR receiver. Adjusting it to support two receivers is going to
> break the ABI.

Really? When we added support for multiple transmitters, we somehow
managed to do so without breaking the ABI. Am I missing something?

Your example could even now be solved by using the LIRC_SET_REC_CARRIER  
ioctl. The driver would have to choose the receiver that best fits the  
requested frequency.
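
A sketch of that driver-side choice, with entirely hypothetical hardware
(one 36-40kHz demodulator, represented by its 38kHz center, and one 56kHz
demodulator):

  #include <stdio.h>
  #include <stdlib.h>

  enum receiver { RX_38K, RX_56K };

  /* Called from the LIRC_SET_REC_CARRIER path: pick the demodulator
   * whose center frequency is closest to the requested carrier. */
  static enum receiver pick_receiver(int carrier_hz)
  {
      return abs(carrier_hz - 38000) <= abs(carrier_hz - 56000)
             ? RX_38K : RX_56K;
  }

  int main(void)
  {
      printf("%d\n", pick_receiver(56000));   /* prints 1 (RX_56K) */
      return 0;
  }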

[...]
> We need to think about all of these use cases before designing the
> ABI.  Only after we think we have a good ABI design should code start
> being merged. Of course we may make mistakes and have to fix the ABI,
> but there is nothing to be gained by merging the existing ABI if we
> already know it has problems.

The point is that we did not get up this morning and start to think
about how the LIRC interface should look. That happened 10 years ago.

I'm not saying that the interface is the nicest thing ever invented, but  
it works and is extendable. If you see that something is missing please  
bring it up.

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-08 Thread Christoph Bartelmus
Hi Dmitry,

on 06 Dec 09 at 23:51, Dmitry Torokhov wrote:
[...]
>>> I suppose we could add MSC_SCAN_END event so that we can transmit
>>> "scancodes" of arbitrary length. You'd get several MSC_SCAN followed by
>>> MSC_SCAN_END marker. If you don't get MSC_SCAN_END assume the code is 32
>>> bit.
>>
>> And I set a timeout to know that no MSC_SCAN_END will arrive? This is
>> broken design IMHO.
>>

> EV_SYN signals the end of state transmission.

>> Furthermore lircd needs to know the length of the scan code in bits, not
>> as a multiple of 32.

> I really do not think that LIRCD is the type of application that should
> be using the evdev interface, but rather the other way around.

Well, all I'm asking is that lircd can keep using the LIRC interface for  
getting the scan codes. ;-)

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-06 Thread Christoph Bartelmus
Hi Jon,

on 04 Dec 09 at 19:28, Jon Smirl wrote:
>> BTW, I just came across an XMP remote that seems to generate 3x64 bit
>> scan codes. Anyone here has docs on the XMP protocol?
>
> Assuming a general purpose receiver (not one with fixed hardware
> decoding), is it important for Linux to receive IR signals from all
> possible remotes no matter how old or obscure? Or is it acceptable to
[...]
> Of course transmitting is a completely different problem, but we
> haven't been talking about transmitting. I can see how we would need
> to record any IR protocol in order to retransmit it. But that's in the
> 5% of users world, not the 90% that want MythTV to "just work".  Use
> something like LIRC if you want to transmit.

I don't think anyone here is in the position to be able to tell what is  
90% or 5%. Personally I use LIRC exclusively for transmit to my settop box  
using an old and obscure RECS80 protocol.
No, I won't replace my setup just because it's old and obscure.

Cable companies tend to provide XMP based boxes to subscribers more often  
these days. Simply not supporting these setups is a no-go for me.

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-06 Thread Christoph Bartelmus
Hi Dmitry,

on 04 Dec 09 at 15:15, Dmitry Torokhov wrote:
[...]
>> http://lirc.sourceforge.net/remotes/lg/6711A20015N
>>
>> This is an air-conditioner remote.
>> The entries that you see in this config file are not really separate
>> buttons. Instead the remote just sends the current settings for e.g.
>> temperature encoded in the protocol when you press some up/down key.
>> You really don't want to map all possible temperature settings to KEY_*
>> events. For such cases it would be nice to have access at the raw scan
>> codes from user space to do interpretation of the data.
>> The default would still be to pass the data to the input layer, but it
>> won't hurt to have the possibility to access the raw data somehow.

> Interesting. IMHO, it would be better to add an evdev ioctl to return
> the scancode for such cases, instead of returning the keycode.

>> That means you would have to set up a pseudo keymap, so that you can get
>> the key event which you could then react on with an ioctl. Or are you
>> generating KEY_UNKNOWN for every scancode that is not mapped?
>> What if different scan codes are mapped to the same key event? How do you
>> retrieve the scan code for the key event?
>> I don't think it can work this way.

>>
>>> EV_MSC/MSC_SCAN.
>>
>> How would I get the 64 bit scan codes that the iMON devices generate?
>> How would I know that the scan code is 64 bit?
>> input_event.value is __s32.
>>

> I suppose we could add MSC_SCAN_END event so that we can transmit
> "scancodes" of arbitrary length. You'd get several MSC_SCAN followed by
> MSC_SCAN_END marker. If you don't get MSC_SCAN_END assume the code is 32
> bit.

And I set a timeout to know that no MSC_SCAN_END will arrive? This is  
broken design IMHO.

Furthermore lircd needs to know the length of the scan code in bits, not  
as a multiple of 32.

> FWIW there is MSC_RAW as well.

It took me some time to convince people that this is not the right way to  
handle raw timing data.

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-06 Thread Christoph Bartelmus
Hi Dmitry,

on 05 Dec 09 at 22:55, Dmitry Torokhov wrote:
[...]
> I do not believe you are being realistic. Sometimes we just need to say
> that the device is a POS and is just not worth it. Remember, there is
> still "lirc hole" for the hard core people still using solder to produce
> something out of the spare electronic components that may be made to
> work (never mind that it causes the CPU to constantly poll the device, not
> letting it sleep and wasting electricity as a result - just a hypothetical
> example here).

There still seems to be a persistent misconception that the home-brewed
receivers need polling or cause heavy CPU load. No, they don't. All of them
are IRQ based.
It's the commercial solutions like gpio-based IR that need polling.
For transmitters it's a different story, but you don't transmit 24/7.

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-04 Thread Christoph Bartelmus
Hi Dmitry,

on 04 Dec 09 at 14:07, Dmitry Torokhov wrote:
> On Fri, Dec 04, 2009 at 10:46:00PM +0100, Christoph Bartelmus wrote:
>> Hi Mauro,
>>
>> on 04 Dec 09 at 12:33, Mauro Carvalho Chehab wrote:
>>> Christoph Bartelmus wrote:
>>>>>> Consider passing the decoded data through lirc_dev.
>> [...]
>>>> Consider cases like this:
>>>> http://lirc.sourceforge.net/remotes/lg/6711A20015N
>>>>
>>>> This is an air-conditioner remote.
>>>> The entries that you see in this config file are not really separate
>>>> buttons. Instead the remote just sends the current settings for e.g.
>>>> temperature encoded in the protocol when you press some up/down key. You
>>>> really don't want to map all possible temperature settings to KEY_*
>>>> events. For such cases it would be nice to have access at the raw scan
>>>> codes from user space to do interpretation of the data.
>>>> The default would still be to pass the data to the input layer, but it
>>>> won't hurt to have the possibility to access the raw data somehow.
>>
>>> Interesting. IMHO, it would be better to add an evdev ioctl to return the
>>> scancode for such cases, instead of returning the keycode.
>>
>> That means you would have to set up a pseudo keymap, so that you can get
>> the key event which you could then react on with an ioctl. Or are you
>> generating KEY_UNKNOWN for every scancode that is not mapped?
>> What if different scan codes are mapped to the same key event? How do you
>> retrieve the scan code for the key event?
>> I don't think it can work this way.
>>

> EV_MSC/MSC_SCAN.

How would I get the 64 bit scan codes that the iMON devices generate?
How would I know that the scan code is 64 bit?
input_event.value is __s32.

BTW, I just came across an XMP remote that seems to generate 3x64 bit scan  
codes. Anyone here has docs on the XMP protocol?

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-04 Thread Christoph Bartelmus
Hi Mauro,

on 04 Dec 09 at 12:33, Mauro Carvalho Chehab wrote:
> Christoph Bartelmus wrote:
>>>> Consider passing the decoded data through lirc_dev.
[...]
>> Consider cases like this:
>> http://lirc.sourceforge.net/remotes/lg/6711A20015N
>>
>> This is an air-conditioner remote.
>> The entries that you see in this config file are not really separate
>> buttons. Instead the remote just sends the current settings for e.g.
>> temperature encoded in the protocol when you press some up/down key. You
>> really don't want to map all possible temperature settings to KEY_*
>> events. For such cases it would be nice to have access at the raw scan
>> codes from user space to do interpretation of the data.
>> The default would still be to pass the data to the input layer, but it
>> won't hurt to have the possibility to access the raw data somehow.

> Interesting. IMHO, it would be better to add an evdev ioctl to return the
> scancode for such cases, instead of returning the keycode.

That means you would have to set up a pseudo keymap, so that you can get  
the key event which you could then react on with an ioctl. Or are you  
generating KEY_UNKNOWN for every scancode that is not mapped?
What if different scan codes are mapped to the same key event? How do you  
retrieve the scan code for the key event?
I don't think it can work this way.
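
For comparison, this is what the EV_MSC route looks like from user space
(a sketch; the device path is hypothetical). The scancode arrives as an
MSC_SCAN event in the same packet as the KEY_* event, terminated by EV_SYN:

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <linux/input.h>

  int main(void)
  {
      int fd = open("/dev/input/event0", O_RDONLY);
      struct input_event ev;
      int scancode = -1;

      while (fd >= 0 && read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
          if (ev.type == EV_MSC && ev.code == MSC_SCAN)
              scancode = ev.value;            /* at most 32 bits */
          else if (ev.type == EV_KEY && ev.value == 1)
              printf("key %d, scancode 0x%x\n", ev.code, scancode);
          else if (ev.type == EV_SYN)
              scancode = -1;                  /* packet boundary */
      }
      return 0;
  }

Note that ev.value is a __s32, which is the 64-bit iMON problem in a
nutshell.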

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-03 Thread Christoph Bartelmus
Hi Dmitry,

on 03 Dec 09 at 14:12, Dmitry Torokhov wrote:
[...]
>> Consider passing the decoded data through lirc_dev.
[...]
> I believe it was agreed that lirc-dev should be used mainly for decoding
> protocols that are more conveniently decoded in userspace and the
> results would be looped back into input layer through evdev which will
> be the main interface for consumer applications to use.

Quoting myself:
> Currently I would tend to an approach like this:
> - raw interface to userspace using LIRC

For me this includes both the pulse/space data and also the scan codes  
when hardware does the decoding.
Consider cases like this:
http://lirc.sourceforge.net/remotes/lg/6711A20015N

This is an air-conditioner remote.
The entries that you see in this config file are not really separate  
buttons. Instead the remote just sends the current settings for e.g.  
temperature encoded in the protocol when you press some up/down key. You  
really don't want to map all possible temperature settings to KEY_*  
events. For such cases it would be nice to have access to the raw scan  
codes from user space to do interpretation of the data.
The default would still be to pass the data to the input layer, but it  
won't hurt to have the possibility to access the raw data somehow.
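
Purely as a hypothetical sketch of what userspace could do with the raw
code (the bit layout below is invented; the real LG encoding may differ):

    #include <stdio.h>
    #include <stdint.h>

    /* Invented layout: assume bits 8..11 of the scan code carry the
     * temperature as an offset from 18 degrees Celsius. */
    static unsigned int temp_from_scancode(uint32_t code)
    {
        return 18 + ((code >> 8) & 0x0f);
    }

    int main(void)
    {
        printf("temperature: %u C\n", temp_from_scancode(0x0000a30a));
        return 0;
    }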

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-12-03 Thread Christoph Bartelmus
Hi Mauro,

on 03 Dec 09 at 19:10, Mauro Carvalho Chehab wrote:
[...]
>>> So the lirc_imon I submitted supports all device types, with the
>>> onboard decode devices defaulting to operating as pure input devices,
>>> but an option to pass hex values out via the lirc interface (which is
>>> how they've historically been used -- the pure input stuff I hacked
>>> together just a few weeks ago), to prevent functional setups from
>>> being broken for those who prefer the lirc way.
>>
>> Hmm.  I'd tend to limit the lirc interface to the 'raw samples' case.

>> Historically it has also been used to pass decoded data (i.e. rc5) from
>> devices with onboard decoding, but for that in-kernel mapping + input
>> layer really fits better.

> I agree.

Consider passing the decoded data through lirc_dev.
- there's already a large user base that uses this mode through lirc and
would be forced to switch to the input layer if it disappears.
- that way all IR drivers would consistently use the lirc interface and all
PnP hooks could be implemented there in one place.
- drivers like lirc_imon that have to support both raw and decoded mode
currently have to implement both the lirc and the input interface.
Complexity could be reduced in such cases. But maybe this is necessary
anyway for lirc_imon, which also includes mouse functionality. Jarod?

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-11-30 Thread Christoph Bartelmus
Hi Jon,

on 30 Nov 09 at 16:35, Jon Smirl wrote:
[...]
> It would be interesting to split the lirc daemon. Put the protocol
> decoder stuff in one daemon and the scripting support in the other.
> The scripting daemon would then be optional.  What would be the
> relative sizes of the two daemons?
>
> --
>
> The LIRC daemon always works with timing data, right?

Timing data or hex codes (if decoding is done in hardware).

> When it reads
> the config files generated by irrecord it internally converts those to
> timing data

No.

> and then matches the incoming data against it.

Pattern matching is only done with raw mode config files. The normal case
is that lircd decodes the incoming data using the protocol description
found in the config file.
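
To make that concrete, here is an illustrative, abbreviated protocol
description of the kind lircd decodes against (the values are typical RC-5
numbers, not taken from a specific remote; real files are generated by
irrecord):

    begin remote
      name            EXAMPLE
      bits            13
      flags           RC5
      one             889   889
      zero            889   889
      plead           889
      gap             113792
      toggle_bit_mask 0x800
      begin codes
        KEY_POWER     0x100C
      end codes
    end remote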

> Have you looked at the protocol engine idea? Running the protocol
> engines in parallel until a match is achieved. Then map the
> vendor/device/command triplet.  The protocol engine concept fixes the
> problem of Sony remotes in irrecord.

No, only rewriting irrecord would fix the problem of Sony remotes.  
irrecord tries to guess the protocol parameters without any prior  
knowledge about any protocols.
irrecord could also be rewritten to use the protocol engine concept  
without changing anything in the decoder itself. In fact, this is already
partly available. You can give irrecord a template config file and it
will skip the protocol guessing step.

This would just have to be extended so that the template config file could
contain several protocol descriptions to match against.
I haven't implemented this yet because I don't care much: Sony remotes
also work flawlessly in raw mode. It's only a problem from an aesthetic
viewpoint.

> Various Sony remote buttons
> transmit  in different protocols. irrecord assumes that a remote is
> only using a single protocol. Since it can't figure out a protocol it
> always records these remotes as raw.

With manual intervention you can convert these raw config files afterwards  
with "irrecord -a".

[...]
> Button on remote programmed to be Mot DVR --> protocol engine -->
> Mot/dev/command --> MythTV which is looking for Mot/dev/command
> No config files needed.

You just move complexity to the application. MythTV would have to know
what a Motorola command set looks like.

Currently I would tend to an approach like this:
- raw interface to userspace using LIRC
- fixed set of in-kernel decoders that can handle bundled remotes

That would allow zero configuration for simple use cases and full  
flexibility for more advanced use cases.
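
For reference, a minimal sketch of what the raw LIRC interface looks like
from userspace: each read() returns 32-bit samples where bit 24 flags
pulse vs. space and the low 24 bits carry the duration in microseconds
(the device node here is an assumption):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>

    #define PULSE_BIT  0x01000000
    #define PULSE_MASK 0x00FFFFFF

    int main(void)
    {
        uint32_t sample;
        int fd = open("/dev/lirc0", O_RDONLY);  /* assumed device node */

        if (fd < 0)
            return 1;
        while (read(fd, &sample, sizeof(sample)) == sizeof(sample))
            printf("%s %u\n", (sample & PULSE_BIT) ? "pulse" : "space",
                   sample & PULSE_MASK);
        close(fd);
        return 0;
    }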

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-11-29 Thread Christoph Bartelmus
Hi,

on 29 Nov 09 at 14:16, Jon Smirl wrote:
> On Sun, Nov 29, 2009 at 2:04 PM, Alan Cox  wrote:
>>> Jon is asking for an architecture discussion, y'know, with use cases.
[...]
> So we're just back to the status quo of last year which is to do
> nothing except some minor clean up.
>
> We'll be back here again next year repeating this until IR gets
> redesigned into something fairly invisible like keyboard and mouse
> drivers.

Last year everyone complained that LIRC does not support evdev - so I  
added support for evdev.

This year everyone complains that LIRC is not plug'n'play - we'll fix that  
'til next year.

There's progress. ;-)

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-11-29 Thread Christoph Bartelmus
Hi Krzysztof,

on 28 Nov 09 at 18:21, Krzysztof Halasa wrote:
[...]
>> This remote uses RC-5. But some of the developers must have thought that
>> it may be a smart idea to use 14 bits instead of the standard 13 bits for
>> this remote. In LIRC you won't care, because this is configurable and
>> irrecord will figure it out automatically for you. In the proposed kernel
>> decoders I have seen until now, you will have to treat this case specially
>> in the decoder because you expect 13 bits for RC-5, not 14.

> Well, the 14-bit RC5 is de-facto standard for some time now. One of the
> start bits, inverted, now functions as the MSB of the command code.
> 13-bit receiver implementations (at least these aimed at "foreign"
> remotes) are obsolete.

Ah, sorry. I didn't mean the extension of the command code by inverting  
one of the start bits.

The Streamzap really uses one more bit.
In the LIRC world the RC5 start bit, which is fixed to "1", is not counted
as an individual bit. So translated to your notation, the Streamzap uses 15
bits, not 14 like the extended RC-5 or 13 like the original RC-5...

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-11-29 Thread Christoph Bartelmus
Hi Stefan,

on 28 Nov 09 at 21:29, Stefan Richter wrote:
> Jon Smirl wrote:
>> On Sat, Nov 28, 2009 at 2:45 PM, Stefan Richter
>>  wrote:
>>> Jon Smirl wrote:
>>>> Also, how do you create the devices for each remote? You would need to
>>>> create these devices before being able to do EVIOCSKEYCODE to them.
>>> The input subsystem creates devices on behalf of input drivers.  (Kernel
>>> drivers, that is.  Userspace drivers are per se not affected.)
>>
>> We have one IR receiver device and multiple remotes. How does the
>> input system know how many devices to create corresponding to how many
>> remotes you have?

> If several remotes are to be used on the same receiver, then they
> necessarily need to generate different scancodes, don't they?  Otherwise
> the input driver wouldn't be able to route their events to the
> respective subdevice.

Consider this case:
Two remotes use different protocols. The scancodes after decoding happen  
to overlap.
Just using the scancodes you cannot distinguish between the remotes.  
You'll need to add the protocol information to be able to solve this, which
complicates the setup.

In LIRC this is solved by having protocol parameters and scancode mapping  
in one place.
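
A small sketch of the extra key this implies (the names are illustrative,
not an existing kernel API):

    #include <stdio.h>

    /* Distinguishing the remotes means keying the lookup on
     * (protocol, scancode) rather than on the scancode alone. */
    struct ir_code {
        enum { PROTO_RC5, PROTO_NEC } protocol;
        unsigned int scancode;
    };

    static int same_button(struct ir_code a, struct ir_code b)
    {
        return a.protocol == b.protocol && a.scancode == b.scancode;
    }

    int main(void)
    {
        struct ir_code rc5 = { PROTO_RC5, 0x1e };
        struct ir_code nec = { PROTO_NEC, 0x1e };  /* same scancode */

        printf("same button: %d\n", same_button(rc5, nec));  /* prints 0 */
        return 0;
    }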

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-11-29 Thread Christoph Bartelmus
Hi Mauro,

on 28 Nov 09 at 09:21, Mauro Carvalho Chehab wrote:
> Hi Christoph,
>
> Christoph Bartelmus wrote:

>>> Maybe we decide to take the existing LIRC system as is and not
>>> integrate it into the input subsystem. But I think there is a window
>>> here to update the LIRC design to use the latest kernel features.
>>
>> If it ain't broke, don't fix it.
[...]
> So, even not being broken, the subsystem internal media API's changed
> a lot during the last years, and there are still several new changes
> on our TODO list.
>
> So, I'd say that if we can do it better, then let's do it.

I'm not against improving things.
If there are feature requests that cannot be handled with an interface, it
has to be extended or redesigned. But currently the LIRC interface
supports all the features that have come up over many years.
I just don't want to change a working interface just because it could also
be implemented in a different way, with no other visible advantage than
using more recent kernel features.

[...]
>> For devices that do the decoding in hardware, the only thing that I don't
>> like about the current kernel implementation is the fact that there are
>> mapping tables in the kernel source. I'm not aware of any tools that let
>> you change them without writing some keymaps manually.
[...]
> Still, I prefer first to migrate all drivers to use the full scancode and
> re-generate the keymaps before such step.

Good to see that this is in the works.

[...]
>> With the approach that you
>> suggested for the in-kernel decoder, this device simply will not work for
>> anything but RC-5. The devil is in all the details.

> I haven't seen such limitations on his proposal. We currently have in-kernel
> decoders for NEC, pulse-distance, RC4 protocols, and some variants. If
> non-RC5 decoders are missing, we need for sure to add them.

That was not my point. If you point a NEC remote at the Igor USB device,  
you won't be able to use a NEC decoder because the device will swallow  
half of the bits. LIRC won't care unless the resulting scancodes are  
identical.
Granted, this is an esoteric argument, because this device is utter  
garbage.

[...]
>> If we decide to do the
>> decoding in-kernel, how long do you think this solution will need to
>> become really stable and mainline? Currently I don't even see any
>> consensus on the interface yet. But maybe you will prove me wrong and it's
>> just that easy to get it all working.

> The timeframe to go to mainline will basically depend on taking a decision
> about the API and on people having time to work on it.
>
> Providing that we agree on what we'll do, I don't see why not
> adding it on staging for 2.6.33 and targeting to have
> everything done for 2.6.34 or 2.6.35.

The problem that I see here is that even with very talented people working
on this and pooling all their resources, we won't be able to cover all the
corner cases with all the different receivers and remote control protocols
out there. It will still require lots of the fine-tuning that was done in
LIRC over the years.

>> I also understand that people want to avoid dependency on external
>> userspace tools. All I can tell you is that the lirc tools already do
>> support everything you need for IR control. And as it includes a lot of
>> drivers that are implemented in userspace already, LIRC will just continue
>> to do its work even when there is an alternative in-kernel.

> The point is that for simple usage, like an user plugging his new USB stick
> he just bought, he should be able to use the shipped IR without needing to
> configure anything or manually calling any daemon. This currently works
> with the existing drivers and it is a feature that needs to be kept.

Admittedly, LIRC is way behind when it comes to plug'n'play.

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-11-29 Thread Christoph Bartelmus
Hi Jon,

on 27 Nov 09 at 12:49, Jon Smirl wrote:
[...]
> Christoph, take what you know from all of the years of working on LIRC
> and design the perfect in-kernel system. This is the big chance to
> redesign IR support and get rid of any past mistakes. Incorporate any
> useful chunks of code and knowledge from the existing LIRC into the
> new design. Drop legacy APIs, get rid of daemons, etc. You can do this
> redesign in parallel with existing LIRC. Everyone can continue using
> the existing code while the new scheme is being built. Think of it as
> LIRC 2.0. You can lead this design effort, you're the most experience
> developer in the IR area.

This is a very difficult thing for me to do. I must admit that I'm very  
biased.
Because lircd is the only userspace application that uses the LIRC kernel  
interface, we never had any problems changing the interface when needed.
I can't say there's much legacy stuff inside. I'm quite happy with the  
interface.
The other thing is that I can't really move the decoder from userspace to  
kernel because there are way too many userspace drivers that do require a  
userspace decoder. LIRC also runs on FreeBSD, MacOS and even Cygwin.
So letting the userspace drivers take advantage of a potential Linux
in-kernel decoder is not an option for me either.
I'm having my 'LIRC maintainer' hat on mostly during this discussion and I  
do understand that from Linux kernel perspective things look different.

> Take advantage of this window to make a
> design that is fully integrated with Linux - put IR on equal footing
> with the keyboard and mouse as it should be.

That's a question that I have not answered for myself conclusively.
Is a remote control really on exactly the same level as a keyboard or  
mouse?

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-11-28 Thread Christoph Bartelmus
Hi Krzysztof and Maxim,

on 28 Nov 09 at 16:44, Krzysztof Halasa wrote:
> Maxim Levitsky  writes:

>> Generic decoder that lirc has is actually much better and more tolerant
>> that protocol specific decoders that you propose,

> Actually, it is not the case. Why do you think it's better (let alone
> "much better")? Have you at least seen my RC5 decoder?

Nobody here doubts that you can implement a working RC-5 decoder. It's  
really easy. I'll give you an example of why Maxim thinks that the generic  
LIRC approach has advantages:

Look at the Streamzap remote (I think Jarod submitted the lirc_streamzap  
driver in his patchset):
http://lirc.sourceforge.net/remotes/streamzap/PC_Remote

This remote uses RC-5. But some of the developers must have thought that  
it may be a smart idea to use 14 bits instead of the standard 13 bits for  
this remote. In LIRC you won't care, because this is configurable and  
irrecord will figure it out automatically for you. In the proposed kernel  
decoders I have seen until now, you will have to treat this case specially  
in the decoder because you expect 13 bits for RC-5, not 14.
Well, it can be done. But you'll have to add another IR protocol define  
for RC-5_14, which will become very ugly with many non-standard protocol  
variations.
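
Purely for illustration, the kind of hard-coded length check that trips
over the Streamzap remote in a fixed-length decoder (invented code, not
taken from any actual patch):

    #define RC5_NBITS 13

    /* A fixed-length validator rejects the 14-bit Streamzap variant. */
    static int rc5_validate(unsigned int nbits)
    {
        return nbits == RC5_NBITS ? 0 : -1;  /* Streamzap sends 14 */
    }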

@Maxim: I think Mauro is right. We need to find an approach that makes  
everybody happy. We should take the time now to discuss all the  
possibilities and choose the best solution. LIRC has lived outside the
kernel for so long that we can wait another couple of weeks/months until
we agree on something which will hopefully be a stable API for many years
to come.

Christoph


Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

2009-11-27 Thread Christoph Bartelmus
Hi Jon,

on 27 Nov 09 at 10:57, Jon Smirl wrote:
[...]
>>>> But I'm still a bit hesitant about the in-kernel decoding. Maybe it's
>>>> just because I'm not familiar at all with the input layer toolset.
>> [...]
>>> I hope it helps for you to better understand how this works.
>>
>> So the plan is to have two ways of using IR in the future which are
>> incompatible with each other, the feature-set of one being a subset of the
>> other?

> Take advantage of the fact that we don't have a twenty year old legacy
> API already in the kernel. Design an IR API that uses current kernel
> systems. Christoph, ignore the code I wrote and make a design proposal
> that addresses these goals...
>
> 1) Unified input in Linux using evdev. IR is on equal footing with
> mouse and keyboard.

LIRC fully supports this by using uinput.
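
As a minimal sketch of that path, here is a daemon-side fragment injecting
a decoded button as an ordinary key event through /dev/uinput (error
handling trimmed; the device name is made up):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/uinput.h>

    static void emit(int fd, int type, int code, int value)
    {
        struct input_event ev;

        memset(&ev, 0, sizeof(ev));
        ev.type = type;
        ev.code = code;
        ev.value = value;
        write(fd, &ev, sizeof(ev));
    }

    int main(void)
    {
        struct uinput_user_dev dev;
        int fd = open("/dev/uinput", O_WRONLY);

        if (fd < 0)
            return 1;
        ioctl(fd, UI_SET_EVBIT, EV_KEY);
        ioctl(fd, UI_SET_KEYBIT, KEY_VOLUMEUP);

        memset(&dev, 0, sizeof(dev));
        strncpy(dev.name, "ir-uinput-sketch", UINPUT_MAX_NAME_SIZE - 1);
        write(fd, &dev, sizeof(dev));
        ioctl(fd, UI_DEV_CREATE);

        emit(fd, EV_KEY, KEY_VOLUMEUP, 1);  /* press */
        emit(fd, EV_SYN, SYN_REPORT, 0);
        emit(fd, EV_KEY, KEY_VOLUMEUP, 0);  /* release */
        emit(fd, EV_SYN, SYN_REPORT, 0);

        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
    }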

> 2) plug and play for basic systems - you only need an external app for
> scripting

LIRC is lacking in plug and play support. But it wouldn't be very  
difficult to add support that works for all basic systems.
As I'm favouring a solution outside of the kernel, of course I can't offer  
you a solution which works without userspace tools.

> 3) No special tools - use mkdir, echo, cat, shell scripts to build
> maps

A user-friendly GUI tool to configure the mapping of the remote buttons is
essential for a good user experience. I hope no one here expects users to
learn the command line or bash to configure their remotes.

> 4) Use of modern Linux features like sysfs, configfs and udev.

LIRC uses sysfs where appropriate. I have no problem using modern
interfaces where it makes sense. But I won't change working and
well-tested interfaces just because it's possible to implement the same
thing a different way. The interface is efficient and small. I don't see
how it could gain much from the mentioned features.
Tell me what exactly you don't like about the LIRC interface and we can  
work on it.

> 5) Direct multi-app support - no daemon

lircd is multi-app. I want to be in userspace, so I need a daemon.

> 6) Hide timing data from user as much as possible.

Nobody is manually writing lircd.conf files. Of course you don't want the  
user to know anything about the technical details unless they really want
to get their hands dirty.

> What are other goals for this subsystem?
>
> Maybe we decide to take the existing LIRC system as is and not
> integrate it into the input subsystem. But I think there is a window
> here to update the LIRC design to use the latest kernel features.

If it ain't broke, don't fix it.

I'm also not against using the input layer where it makes sense.

For devices that do the decoding in hardware, the only thing that I don't  
like about the current kernel implementation is the fact that there are  
mapping tables in the kernel source. I'm not aware of any tools that let  
you change them without writing some keymaps manually.

I'm also not against in-kernel decoding in general. We already agreed last  
year that we can include an interface in lirc_dev that feeds the signal  
data to an in-kernel decoder if no one from userspace reads it. That's
close to a one-line change in lirc_dev. You won't have to change a single
device driver for this. I think there was also common understanding that
there will be cases where in-kernel decoding will not be possible for  
esoteric protocols and that there needs to be an interface to deliver the  
raw data to userspace.

My point is just that it took LIRC a very long time to fully support the
most common protocols, with all the toggle bits, toggle masks, repeat
codes, sequences, headers, differing gap values, etc. Or take a look at
crappy hardware like Igor Cesko's USB IR Receiver. This device cripples
any incoming signal except RC-5 because it has a limited buffer size. LIRC
happily accepts the data because it does not make any assumptions about
the protocol or bit length. With the approach that you suggested for the
in-kernel decoder, this device simply will not work for anything but RC-5.
The devil is in all the details. If we decide to do the decoding
in-kernel, how long do you think this solution will need to become really
stable and mainline? Currently I don't even see any consensus on the
interface yet. But maybe you will prove me wrong and it's just that easy
to get it all working.
I also understand that people want to avoid dependency on external  
userspace tools. All I can tell you is that the lirc tools already do  
support everything you need for IR control. And as it includes a lot of  
drivers that are implemented in userspace already, LIRC will just continue  
to do its work even when there is an alternative in-kernel.
If LIRC is being rejected I don't have a real problem with this either,  
but we finally need a decision because for me this is definitely the last  
attempt to get this into the kernel.

Christoph

Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-26 Thread Christoph Bartelmus
Hi Mauro,

on 26 Nov 09 at 14:25, Mauro Carvalho Chehab wrote:
> Christoph Bartelmus wrote:
[...]
>> But I'm still a bit hesitant about the in-kernel decoding. Maybe it's just
>> because I'm not familiar at all with the input layer toolset.
[...]
> I hope it helps for you to better understand how this works.

So the plan is to have two ways of using IR in the future which are  
incompatible with each other, the feature-set of one being a subset of the  
other?

When designing the key mapping in the kernel you should be aware that  
there are remotes out there that send a sequence of scan codes for some  
buttons, e.g.
http://lirc.sourceforge.net/remotes/pioneer/CU-VSX159

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-26 Thread Christoph Bartelmus
Hi Jon,

on 27 Nov 09 at 00:06, Jon Smirl wrote:
[...]
> code for the fun of it, I have no commercial interest in IR. I was
> annoyed with how LIRC handled Sony remotes on my home system.

Can you elaborate on this?
I'm not aware of any issue with Sony remotes.

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-26 Thread Christoph Bartelmus
Hi Mauro,

on 26 Nov 09 at 18:59, Mauro Carvalho Chehab wrote:
> Christoph Bartelmus wrote:
[...]
>>> lircd supports input layer interface. Yet, patch 3/3 exports both devices
>>> that support only pulse/space raw mode and devices that generate scan
>>> codes via the raw mode interface. It does it by generating artificial
>>> pulse codes.
>>
>> Nonsense! There's no generation of artificial pulse codes in the drivers.
>> The LIRC interface includes ways to pass decoded IR codes of arbitrary
>> length to userspace.

> I might have got wrong then a comment in the middle of the
> imon_incoming_packet() of the SoundGraph iMON IR patch:

Indeed, you got it wrong.
As I already explained before, this device samples the signal at a  
constant rate and delivers the current level in a bit-array. This data is  
then condensed to pulse/space data.
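
A minimal sketch of that condensing step, assuming an invented per-sample
period (the real device's sampling rate may differ):

    #include <stdio.h>
    #include <stdint.h>

    #define SAMPLE_US 85  /* assumed per-sample period in microseconds */

    /* Collapse runs of identical level samples into pulse/space
     * durations. */
    static void condense(const uint8_t *bits, int nbits)
    {
        int i, run = 1;

        for (i = 1; i <= nbits; i++) {
            if (i < nbits && bits[i] == bits[i - 1]) {
                run++;
                continue;
            }
            printf("%s %d\n", bits[i - 1] ? "pulse" : "space",
                   run * SAMPLE_US);
            run = 1;
        }
    }

    int main(void)
    {
        const uint8_t demo[] = { 1, 1, 1, 0, 0, 1, 1, 0, 0, 0 };

        condense(demo, (int)sizeof(demo));
        return 0;
    }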

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-26 Thread Christoph Bartelmus
Hi Mauro,

on 26 Nov 09 at 10:36, Mauro Carvalho Chehab wrote:
[...]
> lircd supports input layer interface. Yet, patch 3/3 exports both devices
> that support only pulse/space raw mode and devices that generate scan
> codes via the raw mode interface. It does it by generating artificial
> pulse codes.

Nonsense! There's no generation of artificial pulse codes in the drivers.
The LIRC interface includes ways to pass decoded IR codes of arbitrary  
length to userspace.

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-26 Thread Christoph Bartelmus
Hi,

on 25 Nov 09 at 12:44, Jarod Wilson wrote:
[...]
> Ah, but the approach I'd take to converting to in-kernel decoding[*] would
> be this:
[...]
> [*] assuming, of course, that it was actually agreed upon that in-kernel
> decoding was the right way, the only way, all others will be shot on sight.

I'm happy to see that the discussion is getting along.
But I'm still a bit hesitant about the in-kernel decoding. Maybe it's just  
because I'm not familiar at all with the input layer toolset.

1. For sure in-kernel decoding will require some assistance from userspace  
to load the mapping from IR codes to keys. So, if there needs to be a tool  
in userspace that does some kind of autodetection, why not have a tool
that does the autodetection and autoconfigures lircd for the current
device? That would save lots of code duplication in the kernel. What's the
actual benefit of in-kernel decoding?

2. What would be the format of the key map? lircd.conf files already exist  
for a lot of remote controls. Will we have a second incompatible format to  
map the keys in-kernel? Where are the tools that create the key maps for  
new remotes?

Maybe someone can shed some light on this.

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-26 Thread Christoph Bartelmus
Hi Gerd,

on 26 Nov 09 at 00:22, Gerd Hoffmann wrote:
[...]
>> To sum it up: I don't think this information will be useful at all for
>> lircd or anyone else.
[...]
> I know that lircd does matching instead of decoding, which allows to
> handle unknown encodings.  That's why I think there will always be cases
> which only lircd will be able to handle (using raw samples).
>
> That doesn't make attempts to actually decode the IR samples a useless
> exercise though ;)

Well, in my opinion it is kind of useless. I don't see any use case or any  
demand for passing this kind of information to userspace, at least in the  
LIRC context.
If there's no demand, why bother?

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-25 Thread Christoph Bartelmus
Hi Gerd,

on 25 Nov 09 at 22:58, Gerd Hoffmann wrote:
[...]
> (1) ir code (say rc5) -> keycode conversion loses information.
>
> I think this can easily be addressed by adding a IR event type to the
> input layer, which could look like this:
>
>input_event->type  = EV_IR
>input_event->code  = IR_RC5
>    input_event->value = <the decoded IR code>
>
> In case the 32bit value is too small we might want to send two events
> instead, with ->code being set to IR_<protocol>_1 and IR_<protocol>_2
>
> Advantages:
>* Applications (including lircd) can get access to the unmodified
>  rc5/rc6/... codes.

Unfortunately with most hardware decoders the code that you get is only  
remotely related to the actual code sent. Most RC-5 decoders strip off  
start bits. Toggle-bits are thrown away. NEC decoders usually don't pass  
through the address part. Some even generate some pseudo-random code  
(Irman). There is no common standard for which bit is sent first, LSB or MSB.  
Checksums are thrown away.
To sum it up: I don't think this information will be useful at all for  
lircd or anyone else. Actually lircd does not even know anything about  
actual protocols. We only distinguish between certain protocol types, like  
Manchester encoded, space encoded, pulse encoded, etc. Everything else,
like the actual timing, is fully configurable.

[...]
> If we keep the lirc interface for raw samples anyway, then we can keep
> it for sending too, problem solved ;)  How does sending hardware work
> btw?  Do they all accept just raw samples?  Or does some hardware also
> accept ir-codes?

Usually raw samples in some form. I've never seen any device that would  
accept just ir-codes. UIRT2 devices have some more advanced modes but also  
accept raw samples.

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-25 Thread Christoph Bartelmus
Hi,

on 25 Nov 09 at 17:53, Krzysztof Halasa wrote:
> Jarod Wilson  writes:
[...]
>> nimble. If we can come up with a shiny new way that raw IR can be
>> passed out through an input device, I'm pretty sure lirc userspace can
>> be adapted to handle that.

As Trent already pointed out, adding support for raw IR through an input  
device would require a new interface too. You just put the label "input  
device" on it. This does not make much sense for me.

> Lirc can already handle input layer. Since both ways require userspace
> changes,

I'm not sure which two ways you are talking about. With the patches posted  
by Jarod, nothing has to be changed in userspace.
Everything works, no code needs to be written and tested, everybody is  
happy.

We had exactly the same discussion around one year ago. I've seen no new
arguments in the current discussion and nobody has come up with this shiny
new way of integrating LIRC into the input layer since last year. Maybe  
it's about time to just accept that the LIRC interface is the way to go.

Can we finally get the patch integrated, please?

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-23 Thread Christoph Bartelmus
Hi Jarod,

on 23 Nov 09 at 14:17, Jarod Wilson wrote:
>> Krzysztof Halasa wrote:
[...]
>> If you see patch 3/3, of the lirc submission series, you'll notice a driver
>> that has hardware decoding, but, due to lirc interface, the driver
>> generates pseudo pulse/space code for it to work via lirc interface.

> Historically, this is true.

No, it's not.
I think you misunderstood the code. The comment may be a bit misleading,
too.
Early iMON devices did not decode in hardware, and the part of the driver
that Krzysztof is referring to translates a bit-stream of the sampled
input data into pulse/space durations.

Christoph


Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure

2009-11-23 Thread Christoph Bartelmus
Hi Krzysztof,

on 23 Nov 09 at 15:14, Krzysztof Halasa wrote:
[...]
> I think we shouldn't at this time worry about IR transmitters.

Sorry, but I have to disagree strongly.
Any interface without transmitter support would be absolutely unacceptable  
for many LIRC users, including myself.

Christoph