Re: Can I expect in-kernel decoding to work out of box?

2010-07-29 Thread Jon Smirl
On Wed, Jul 28, 2010 at 10:36 PM, Andy Walls awa...@md.metrocast.net wrote:
 As an example of simple hardware glitch filter, here's an excerpt
 from the public CX25480/1/2/3 datasheet on the IR low-pass (glitch)
 filter that's in the hardware:

 the counter reloads using the value programmed to this register each
 time a qualified edge is detected [...]. Once the reload occurs, the
 counter begins decrementing. If the next programmed edge occurs before
 the counter reaches 0, the pulse measurement value is discarded, the
 filter modulus value is reloaded, and the next pulse measurement begins.
 Thus, any pulse measurement that ends before the counter reaches 0 is
 ignored.

You could make a small library that drivers could link in. That way we
won't get it implemented ten different ways. Devices that do the
filtering in firmware won't have to use the code.

There are lots of ways to design it. A simple one would be to sit on
each message until the next one arrives. Then make a decision to pass
the previous message up or declare the current edge a glitch and wait
for the next one.  It probably needs a timeout so that you don't sit
on long pulses forever waiting on the next one.
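
A minimal sketch of what such a shared helper could look like, assuming a
hypothetical ir_glitch_feed()/ir_glitch_flush() API (the names, structure and
threshold below are illustrative only, not an existing kernel interface):

#include <stdbool.h>

struct ir_glitch_filter {
	unsigned int min_us;	/* samples shorter than this are glitches */
	unsigned int held_us;	/* the measurement we are sitting on */
	bool have_held;
	bool merge_next;	/* next sample continues an interrupted one */
	void (*emit)(unsigned int duration_us); /* pass clean samples upstream */
};

/* Feed one raw mark/space measurement (in microseconds) from the hardware. */
static void ir_glitch_feed(struct ir_glitch_filter *f, unsigned int sample_us)
{
	if (!f->have_held) {
		f->held_us = sample_us;
		f->have_held = true;
		return;
	}
	if (f->merge_next) {
		/* Continuation of the pulse/space the glitch interrupted. */
		f->held_us += sample_us;
		f->merge_next = false;
		return;
	}
	if (sample_us < f->min_us) {
		/* Declare this edge a glitch: fold it, and the sample that
		 * follows it, into the measurement we are sitting on. */
		f->held_us += sample_us;
		f->merge_next = true;
		return;
	}
	/* The held measurement survived one full successor: pass it up. */
	f->emit(f->held_us);
	f->held_us = sample_us;
}

/* Called from a timeout so a long trailing pulse/space is not held forever. */
static void ir_glitch_flush(struct ir_glitch_filter *f)
{
	if (f->have_held) {
		f->emit(f->held_us);
		f->have_held = false;
		f->merge_next = false;
	}
}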

-- 
Jon Smirl
jonsm...@gmail.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Jon Smirl
On Wed, Jul 28, 2010 at 2:30 AM, Maxim Levitsky maximlevit...@gmail.com wrote:
 On Tue, 2010-07-27 at 22:33 -0400, Jarod Wilson wrote:
 On Tue, Jul 27, 2010 at 9:29 PM, Jon Smirl jonsm...@gmail.com wrote:
  On Tue, Jul 27, 2010 at 7:32 PM, Maxim Levitsky maximlevit...@gmail.com 
  wrote:
  On Wed, 2010-07-28 at 01:33 +0300, Maxim Levitsky wrote:
  Hi,
 
  I ported my ene driver to in-kernel decoding.
  It isn't yet ready to be released, but in a few days it will be.
 
  Now, knowing about wonders of in-kernel decoding, I try to use it, but
  it just doesn't work.
 
  Mind you that lircd works with this remote.
  (I attach my lircd.conf)
 
  Here is the output of mode2 for a single keypress:
 
     8850     4350      525     1575      525     1575
      525      450      525      450      525      450
      525      450      525     1575      525      450
      525     1575      525      450      525     1575
      525      450      525      450      525     1575
      525      450      525      450      525    23625
 
  That decodes as:
  1100 0010 1010 0100
 
  In the NEC protocol the second word is supposed to be the inverse of
  the first word and it isn't. The timing is too short for NEC protocol
  too.
 No it's not, it's just extended NEC.

http://www.sbprojects.com/knowledge/ir/nec.htm
Says the last two bytes should be the complement of each other.

So for extended NEC it would need to be:
1100 0010 1010 0101 instead of 1100 0010 1010 0100
The last bit is wrong.

From the debug output it is decoding as NEC, but then it fails a
consistency check. Maybe we need to add a new protocol that lets NEC
commands through even if they fail the error checks. It may also be
that the NEC machine rejected it because the timing was so far off
that it concluded that it couldn't be a NEC message. The log didn't
include the exact reason it got rejected. Add some printks at the end
of the NEC machine to determine the exact reason for rejection.
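
For reference, the consistency check boils down to the complement relationship
in the NEC frame (address, ~address, command, ~command). A sketch of what the
end of the state machine could check, with the suggested printks; the field
names and layout are assumptions, not the actual in-kernel decoder:

#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>

static int nec_check_frame(u8 addr, u8 not_addr, u8 cmd, u8 not_cmd)
{
	if ((cmd ^ not_cmd) != 0xff) {
		printk(KERN_DEBUG "nec: command bytes not complementary: %02x %02x\n",
		       cmd, not_cmd);
		return -EINVAL;
	}
	if ((addr ^ not_addr) != 0xff)
		/* Not a failure: extended NEC uses both bytes as a 16-bit address. */
		printk(KERN_DEBUG "nec: extended frame, 16-bit address %02x%02x\n",
		       addr, not_addr);
	return 0;
}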

The current state machines enforce protocol compliance so there are
probably a lot of older remotes that won't decode right. We can use
some help in adjusting the state machines to let out of spec codes
through.

The timing of those pulses is exactly right for JVC. Maybe there is an
extended 4 byte version of the JVC protocol. JVC doesn't have the
error checks like NEC. The question here is, why didn't the JVC
machine get started?

User space lirc is much older. Bugs like this have been worked out of
it. It will take some time to get the kernel implementation up to the
same level.



 This lirc generic config matches that output quite well:
 NEC-short-pulse.conf:

 begin remote

  name  NEC
  bits           16
  flags SPACE_ENC|CONST_LENGTH
  eps            30
  aeps          100

  header        9000 4500
  one           563  1687
  zero          563   562
  ptrail        563
  pre_data_bits 16
 # just a guess
  gap          108000

  repeat        9000 2250

  frequency    38000
  duty_cycle   33

      begin codes
      end codes

 end remote



 
  Valid NEC...
  1100 0011 1010 0101
 
  Maybe JVC protocol but it is longer than normal.
 
  The JVC decoder was unable to get started decoding it.  I don't think
  the JVC decoder has been tested much. Take a look at it and see why it
  couldn't get out of state 0.

 Personally, I haven't really tried much of anything but RC-6(A) and
 RC-5 while working on mceusb, so they're the only ones I can really
 vouch for myself at the moment. It seems that I don't have many
 remotes that aren't an RC-x variant, outside of universals, which I
 have yet to get around to programming for various other modes to test
 any of the protocol decoders. I assume that David Hardeman already did
 that much before submitting each of the ir protocol decoders with his
 name one them (which were, if I'm not mistaken, based at least
 partially on Jon's earlier work), but its entirely possible there are
 slight variants of each that aren't handled properly just yet. That
 right there is one of the major reasons I saw for writing the lirc
 bridge driver plugin in the first place -- the lirc userspace decoder
 has been around for a LOT longer, and thus is likely to know how to
 handle more widely varying IR signals.

 In fact it's dead easy to test a lot of remotes by using a universal
 remote. These remotes are designed for tech-literate persons for a
 reason.

 On my remote, all I have to do is press TV + predefined number + OK to
 make the remote mimic a random remote.
 Until now, kernel decoding couldn't pick up anything but one mode.


 Here is a table I created long ago on my remote showing all kinds of
 protocols there:

 Heck, hardware isn't very accurate, I know, but the streamzap receiver,
 according to what I have heard, is even worse...

 Best regards,
 Maxim Levitsky


 08 - NEC short pulse / SANYO (38 khz), [15 - NEC]
     9440     4640      620      550      620      550      620      550      
 620      550      620      550
      620      550      620   

Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
On 28-07-2010 07:40, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 2:30 AM, Maxim Levitsky maximlevit...@gmail.com 
 wrote:
 On Tue, 2010-07-27 at 22:33 -0400, Jarod Wilson wrote:
 On Tue, Jul 27, 2010 at 9:29 PM, Jon Smirl jonsm...@gmail.com wrote:

 No its not, its just extended NEC.
 
 http://www.sbprojects.com/knowledge/ir/nec.htm
 Says the last two bytes should be the complement of each other.
 
 So for extended NEC it would need to be:
 1100 0010 1010 0101 instead of 1100 0010 1010 0100
 The last bit is wrong.
 
 From the debug output it is decoding as NEC, but then it fails a
 consistency check. Maybe we need to add a new protocol that lets NEC
 commands through even if they fail the error checks.

Assuming that Maxim's IR receiver is not causing a bad decode of the
NEC code, it seems simpler to add a sysfs parameter to relax the NEC
detection. We should also add some way, at the userspace table, to mark
those RC's that use a NEC-like code.

There's another alternative: currently, the NEC decoder produces a 16-bit
code for NEC and a 24-bit code for NEC-extended. The decoder could return a
32-bit code when none of the checksums match the NEC or NEC-extended standard.

Such a 32-bit code won't match a keycode on a 16-bit or 24-bit table, so
there's no risk of generating a wrong keycode if the failed consistency check
is due to a reception error.
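
A sketch of that fallback, assuming the usual NEC frame layout (address,
~address, command, ~command); the function name and exact bit packing are
illustrative:

#include <linux/types.h>

static u32 nec_to_scancode(u8 addr, u8 not_addr, u8 cmd, u8 not_cmd)
{
	if ((cmd ^ not_cmd) != 0xff)
		/* Neither NEC nor NEC-extended: pass all 32 bits through. */
		return ((u32)addr << 24) | ((u32)not_addr << 16) |
		       ((u32)cmd << 8) | not_cmd;

	if ((addr ^ not_addr) != 0xff)
		/* NEC-extended: 16-bit address plus 8-bit command = 24 bits. */
		return ((u32)addr << 16) | ((u32)not_addr << 8) | cmd;

	/* Plain NEC: 8-bit address plus 8-bit command = 16 bits. */
	return ((u32)addr << 8) | cmd;
}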

Btw, we still need to port rc-core to use the new table ioctls, as clearing
all keycodes on a 32-bit table would take forever with the current input
event ioctls.

 It may also be
 that the NEC machine rejected it because the timing was so far off
 that it concluded that it couldn't be a NEC messages. The log didn't
 include the exact reason it got rejected. Add some printks at the end
 of the NEC machine to determine the exact reason for rejection.

It would be better to rule out the possibility of a timing issue before
changing the decoder to accept NEC-like codes without consistency checks.

 The current state machines enforce protocol compliance so there are
 probably a lot of older remotes that won't decode right. We can use
 some help in adjusting the state machines to let out of spec codes
 through.

Yes, but we should take some care to avoid having one protocol decoder
misinterpret a different protocol. So, I think the decoders could have
some sysfs nodes to tweak them to accept those older remotes.

We'll need a consistent way to add some logic to the remote keymaps used by
ir-keycode, in order to allow it to tweak the decoder when the keycode table
for such a remote is loaded into the driver.

 User space lirc is much older. Bugs like this have been worked out of
 it. It will take some time to get the kernel implementation up to the
 same level.

True.

Cheers,
Mauro



Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Jon Smirl
Let's be really sure it is NEC and not JVC.

 8850 4350  525 1575  525 1575
  525  450  525  450  525  450
  525  450  525 1575  525  450
  525 1575  525  450  525 1575
  525  450  525  450  525 1575
  525  450  525  450  525 23625


NEC timings are 9000 4500 560 1680 560 560 etc

JVC timings are 8400 4200 525 1575 525 525

It is a closer match to the JVC timing.  But neither protocol uses a
different mark/space timing -- 450 vs 525

Also look at the repeats. This is repeating at about 25ms. NEC repeat
spacing is 110ms. JVC is supposed to be at 50-60ms. NEC does not
repeat the entire command and JVC does. The repeats are closer to
following the JVC model.

I'd say this is a JVC command. So the question is, why didn't JVC
decoder get out of state zero?
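
To make the state-0 question concrete, here is a toy comparison of the
measured 8850/4350 us header against both nominal headers; the 300 us
tolerance is only an assumption for the example, not what the in-kernel
decoders actually use:

#include <stdio.h>
#include <stdlib.h>

static int within(unsigned int meas, unsigned int nominal, unsigned int tol_us)
{
	return abs((int)meas - (int)nominal) <= (int)tol_us;
}

int main(void)
{
	unsigned int mark = 8850, space = 4350, tol = 300;

	/* With a tight tolerance the header passes as NEC but not as JVC,
	 * so the JVC decoder never leaves state 0. */
	printf("NEC header (9000/4500): %s\n",
	       within(mark, 9000, tol) && within(space, 4500, tol) ?
	       "match" : "no match");
	printf("JVC header (8400/4200): %s\n",
	       within(mark, 8400, tol) && within(space, 4200, tol) ?
	       "match" : "no match");
	return 0;
}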

-- 
Jon Smirl
jonsm...@gmail.com


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Maxim Levitsky
On Wed, 2010-07-28 at 10:13 -0300, Mauro Carvalho Chehab wrote: 
 Em 28-07-2010 07:40, Jon Smirl escreveu:
  On Wed, Jul 28, 2010 at 2:30 AM, Maxim Levitsky maximlevit...@gmail.com 
  wrote:
  On Tue, 2010-07-27 at 22:33 -0400, Jarod Wilson wrote:
  On Tue, Jul 27, 2010 at 9:29 PM, Jon Smirl jonsm...@gmail.com wrote:
 
  No its not, its just extended NEC.
  
  http://www.sbprojects.com/knowledge/ir/nec.htm
  Says the last two bytes should be the complement of each other.
  
  So for extended NEC it would need to be:
  1100 0010 1010 0101 instead of 1100 0010 1010 0100
  The last bit is wrong.
  
  From the debug output it is decoding as NEC, but then it fails a
  consistency check. Maybe we need to add a new protocol that lets NEC
  commands through even if they fail the error checks.
 
 Assuming that Maxim's IR receiver is not causing some bad decode at the
 NEC code, it seems simpler to add a parameter at sysfs to relax the NEC
 detection. We should add some way, at the userspace table for those RC's
 that uses a NEC-like code.
 
 There's another alternative: currently, the NEC decoder produces a 16 bits
 code for NEC and a 24 bits for NEC-extended code. The decoder may return a
 32 bits code when none of the checksum's match the NEC or NEC-extended 
 standard.
 
 Such 32 bits code won't match a keycode on a 16-bits or 24-bits table, so
 there's no risk of generating a wrong keycode, if the wrong consistent check
 is due to a reception error.
 
 Btw, we still need to port rc core to use the new tables ioctl's, as cleaning
 all keycodes on a 32 bits table would take forever with the current input
 events ioctls.
 
  It may also be
  that the NEC machine rejected it because the timing was so far off
  that it concluded that it couldn't be a NEC messages. The log didn't
  include the exact reason it got rejected. Add some printks at the end
  of the NEC machine to determine the exact reason for rejection.
 
 The better is to discard the possibility of a timing issue before changing
 the decoder to accept NEC-like codes without consistency checks.
 
  The current state machines enforce protocol compliance so there are
  probably a lot of older remotes that won't decode right. We can use
  some help in adjusting the state machines to let out of spec codes
  through.
 
 Yes, but we should take some care to avoid having another protocol decoder to
 interpret badly a different protocol. So, I think that the decoders may have
 some sysfs nodes to tweak the decoders to accept those older remotes.
 
 We'll need a consistent way to add some logic at the remotes keycodes used by
 ir-keycode, in order to allow it to tweak the decoder when a keycode table for
 such remote is loaded into the driver.
 
  User space lirc is much older. Bugs like this have been worked out of
  it. It will take some time to get the kernel implementation up to the
  same level.
 
 True.


I more or less got to the bottom of this.


It turns out that the ENE receiver has a non-linear measurement error.
That is, the longer the sample is, the larger the error it contains.
Subtracting around 4% from the samples makes the output look much more
standard-compliant.
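
A trivial sketch of that correction, applied to each sample before it reaches
the decoders (the 4% figure is specific to this ENE receiver, of course):

static unsigned int ene_correct_sample(unsigned int raw_us)
{
	/* Shave roughly 4% off the measured duration: raw * 96 / 100. */
	return raw_us * 96 / 100;
}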

You are right that my remote uses the JVC protocol (at least I am sure now
it isn't NEC, because the repeat looks different).

My remote now actually partially works with the JVC decoder; it decodes
every other keypress.

Still, no repeat is supported.

However, no receivers (or transmitters) are perfect.
That's why I prefer lirc: it makes no assumptions about the protocol,
so it can be 'trained' to work with any remote, and under a very large
range of error tolerances.

Best regards,
Maxim Levitsky



Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Andy Walls
On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:
 Let's be really sure it is NEC and not JVC.
 
  8850 4350  525 1575  525 1575
   525  450  525  450  525  450
   525  450  525 1575  525  450
   525 1575  525  450  525 1575
   525  450  525  450  525 1575
   525  450  525  450  525 23625
 
 
 NEC timings are 9000 4500 560 1680 560 560 etc
 
 JVC timings are 8400 4200 525 1575 525 525
 
 It is a closer match to the JVC timing.  But neither protocol uses a
 different mark/space timing -- 450 vs 525

I assume you mean different mark/space timing for the symbol for which
they are the same length (in NEC that's the '0' symbol IIRC).
  

I've noticed different mark/space timings for the '0' symbol from NEC
remotes and with some RC-5 remotes.  I usually attribute it to cheap
remote designs, weak batteries, capacitive effects, receiver pulse
measurement technique, etc.

Here's an example of NEC output from a DTV STB remote as measured by the
CX23888 IR receiver on an HVR-1850:

8257296 ns  mark
4206185 ns  space
leader
 482926 ns  mark
 545296 ns  space
0
 481296 ns  mark
1572259 ns  space
1
 481148 ns  mark
 546333 ns  space
0
 479963 ns  mark
 551815 ns  space
0
 454333 ns  mark
1615519 ns  space
1
 435074 ns  mark
 591370 ns  space
[...]

I don't know the source of the error.  I would have to check the same
remote against my MCE USB receiver to try and determine any receiver
induced measurement errors.

But, in Maxim's case, the difference isn't bad: 450/525 ~= 86%.  I would
hope a 15% difference would still be recognizable.


 Also look at the repeats. This is repeating at about 25ms. NEC repeat
 spacing is 110ms. JVC is supposed to be at 50-60ms. NEC does not
 repeat the entire command and JVC does. The repeats are closer to
 following the JVC model.
 
 I'd say this is a JVC command. So the question is, why didn't JVC
 decoder get out of state zero?

Is JVC enabled by default?  I recall analyzing that it could generate
false positives on NEC codes.

Regards,
Andy



Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Jon Smirl
On Wed, Jul 28, 2010 at 10:24 AM, Maxim Levitsky
maximlevit...@gmail.com wrote:
 On Wed, 2010-07-28 at 10:13 -0300, Mauro Carvalho Chehab wrote:
 Em 28-07-2010 07:40, Jon Smirl escreveu:
  On Wed, Jul 28, 2010 at 2:30 AM, Maxim Levitsky maximlevit...@gmail.com 
  wrote:
  On Tue, 2010-07-27 at 22:33 -0400, Jarod Wilson wrote:
  On Tue, Jul 27, 2010 at 9:29 PM, Jon Smirl jonsm...@gmail.com wrote:

  No its not, its just extended NEC.
 
  http://www.sbprojects.com/knowledge/ir/nec.htm
  Says the last two bytes should be the complement of each other.
 
  So for extended NEC it would need to be:
  1100 0010 1010 0101 instead of 1100 0010 1010 0100
  The last bit is wrong.
 
  From the debug output it is decoding as NEC, but then it fails a
  consistency check. Maybe we need to add a new protocol that lets NEC
  commands through even if they fail the error checks.

 Assuming that Maxim's IR receiver is not causing some bad decode at the
 NEC code, it seems simpler to add a parameter at sysfs to relax the NEC
 detection. We should add some way, at the userspace table for those RC's
 that uses a NEC-like code.

 There's another alternative: currently, the NEC decoder produces a 16 bits
 code for NEC and a 24 bits for NEC-extended code. The decoder may return a
 32 bits code when none of the checksum's match the NEC or NEC-extended 
 standard.

 Such 32 bits code won't match a keycode on a 16-bits or 24-bits table, so
 there's no risk of generating a wrong keycode, if the wrong consistent check
 is due to a reception error.

 Btw, we still need to port rc core to use the new tables ioctl's, as cleaning
 all keycodes on a 32 bits table would take forever with the current input
 events ioctls.

  It may also be
  that the NEC machine rejected it because the timing was so far off
  that it concluded that it couldn't be a NEC messages. The log didn't
  include the exact reason it got rejected. Add some printks at the end
  of the NEC machine to determine the exact reason for rejection.

 The better is to discard the possibility of a timing issue before changing
 the decoder to accept NEC-like codes without consistency checks.

  The current state machines enforce protocol compliance so there are
  probably a lot of older remotes that won't decode right. We can use
  some help in adjusting the state machines to let out of spec codes
  through.

 Yes, but we should take some care to avoid having another protocol decoder to
 interpret badly a different protocol. So, I think that the decoders may have
 some sysfs nodes to tweak the decoders to accept those older remotes.

 We'll need a consistent way to add some logic at the remotes keycodes used by
 ir-keycode, in order to allow it to tweak the decoder when a keycode table 
 for
 such remote is loaded into the driver.

  User space lirc is much older. Bugs like this have been worked out of
  it. It will take some time to get the kernel implementation up to the
  same level.

 True.


 I more or less got to the bottom of this.


 It turns out that ENE reciever has a non linear measurement error.
 That is the longer sample is, the larger error it contains.
 Substracting around 4% from the samples makes the output look much more
 standard compliant.

Most of the protocols are arranged using power-of-two timings.

For example: 562.5, 1125, 2250, 4500, 9000 -- NEC
525, 1050, 2100, 4200, 8400 -- JVC

The decoders are designed to be much more sensitive to the power-of-two
relationship than to the exact timing. Your non-linear error messed
up that relationship.
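
A toy illustration of matching on that relationship rather than on absolute
times; the helper name and the half-unit tolerance are made up for the example:

/* Classify a measured duration as 1, 2, 4, 8 or 16 base units,
 * allowing +/- half a unit around each nominal length. */
static int classify_units(unsigned int duration_us, unsigned int unit_us)
{
	unsigned int units;

	for (units = 1; units <= 16; units <<= 1) {
		unsigned int nominal = units * unit_us;

		if (duration_us >= nominal - unit_us / 2 &&
		    duration_us <  nominal + unit_us / 2)
			return units;
	}
	return -1;	/* not close to any valid multiple */
}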


 You are right that my remote has  JVC protocol. (at least I am sure now
 it hasn't NEC, because repeat looks differently).

 My remote now actually partially works with JVC decoder, it decodes
 every other keypress.

 Still, no repeat is supported.

It probably isn't implemented yet. Jarod has been focusing more on
getting the basic decoders to work.

 However, all recievers (and transmitters) aren't perfect.
 Thats why I prefer lirc, because it makes no assumptions about protocol,
 so it can be 'trained' to work with any remote, and under very large
 range of error tolerances.

It's possible to build a Linux IR decoder engine that can be loaded
with the old LIRC config files.  But before doing this we should work
on getting all of the errors out of the standard decoders.


 Best regards,
 Maxim Levitsky





-- 
Jon Smirl
jonsm...@gmail.com


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Jarod Wilson
On Wed, Jul 28, 2010 at 10:41:27AM -0400, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 10:24 AM, Maxim Levitsky
...
  You are right that my remote has  JVC protocol. (at least I am sure now
  it hasn't NEC, because repeat looks differently).
 
  My remote now actually partially works with JVC decoder, it decodes
  every other keypress.
 
  Still, no repeat is supported.
 
 It probably isn't implemented yet. Jarod has been focusing more on
 getting the basic decoders to work.

More specifically, getting the basic decoders to work with very specific
hardware -- i.e., the mceusb transceivers, and primarily focused only on
RC-6(A) decode w/the mceusb bundled remotes. That, and getting the lirc
bridge driver working for both rx and tx.

Basically, my plan of attack has been to get enough bits in place that we
have a reference implementation, if you will, of a driver that supports
all in-kernel decoders and the lirc interface, complete with the ability
to do tx[*], and from there, then we can really dig into the in-kernel
decoders and/or work on porting additional drivers to ir-core. I'm more
focused on porting additional drivers to ir-core at the moment than I am
on testing all of the protocol decoders right now.

[*] we still don't have an ir-core native tx method, but tx on the
mceusb works quite well using the lirc bridge plugin

-- 
Jarod Wilson
ja...@redhat.com



Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
On 28-07-2010 11:53, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 10:38 AM, Andy Walls awa...@md.metrocast.net wrote:
 On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:

 Is JVC enabled by default?  I recall analyzing that it could generate
 false positives on NEC codes.
 
 Hopefully the engines should differentiate the two. If the signal is
 really messed up it may trigger a response from both engines. That
 shouldn't be fatal at the higher layers, the wrong protocol would just
 be ignored.

By default, both decoders are enabled, but if you're using the ir-keycode
userspace program at udev, it will disable all protocols but the ones associated
with the RC keytable loaded for that specific device.

Even if both JVC and NEC decoders generate scancodes, it is very unlikely that
the scancode generated by the wrong decoder would match a valid entry in
the RC keycode table.

 I recommend that all decoders initially follow the strict protocol
 rules. That will let us find bugs like this one in the ENE driver.

Agreed.

 After we get everything possible working under the strict rules we can
 loosen then up to allow out of spec devices. We might even end up with
 an IR-quirk driver that supports broken remotes.

I think the better approach is to add some parameters, via sysfs, to relax the
rules of the current decoders, if needed.

Cheers,
Mauro


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
On 28-07-2010 11:41, Jon Smirl wrote:

 It's possible to build a Linux IR decoder engine that can be loaded
 with the old LIRC config files.

I think it is a good idea to have a decoder that works with such files anyway.

There are some good reasons for that, as it would allow in-kernel support for
protocols that may have patent restrictions in the few countries that allow
patents on software.

We'll need to discuss the API requirements for such a decoder, in order to load
the RC decoding rules into it.

Cheers,
Mauro.


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Andy Walls
On Wed, 2010-07-28 at 12:42 -0300, Mauro Carvalho Chehab wrote:
 Em 28-07-2010 11:53, Jon Smirl escreveu:
  On Wed, Jul 28, 2010 at 10:38 AM, Andy Walls awa...@md.metrocast.net 
  wrote:
  On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:

  I recommend that all decoders initially follow the strict protocol
  rules. That will let us find bugs like this one in the ENE driver.
 
 Agreed.

Well... 

I'd possibly make an exception for the protocols that have long-mark
leaders.  The actual long mark measurement can be far off from the
protocol's specification and needs a larger tolerance (IMO).

Only allowing 0.5 to 1.0 of a protocol time unit tolerance, for a
protocol element that is 8 to 16 protocol time units long, doesn't make
too much sense to me.  If the remote has the basic protocol time unit
off from our expectation, the error will likely be amplified in a long
protocol element and end up very much off our expectation.
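
To put numbers on it: with NEC's nominal 562.5 us unit, the leader mark is 16
units, i.e. 9000 us. A remote whose timing runs a mere 7% long produces a
9630 us leader, which is already 630 us off and fails a 1.0-unit (562.5 us)
tolerance, even though each individual bit mark is only about 39 us off and
passes easily.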


 I think that the better is to add some parameters, via sysfs, to relax the
 rules at the current decoders, if needed.

Is that worth the effort?  It seems like only going half-way to an
ultimate end state.

crazy idea
If you go through the effort of implementing fine grained controls
(tweaking tolerances for this pulse type here or there), why not just
implement a configurable decoding engine that takes as input:

symbol definitions
(pulse and space length specifications and tolerances)
pulse train states
allowed state transitions
gap length
decoded output data length

and instantiates a decoder that follows a user-space provided
specification?

The user can write his own decoding engine specification in a text file,
feed it into the kernel, and the kernel can implement it for him.
/crazy idea
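
Roughly, the user-space-supplied specification could be something like the
following structures (every name here is hypothetical; nothing like this
exists in ir-core today):

struct ir_symbol_spec {
	unsigned int pulse_us, pulse_tol_us;	/* mark length and tolerance  */
	unsigned int space_us, space_tol_us;	/* space length and tolerance */
};

struct ir_state_transition {
	unsigned int from_state;
	unsigned int symbol;		/* index into symbols[] */
	unsigned int to_state;
	int emit_bit;			/* 0, 1, or -1 for "no bit" */
};

struct ir_decoder_spec {
	struct ir_symbol_spec *symbols;
	unsigned int num_symbols;
	struct ir_state_transition *transitions;
	unsigned int num_transitions;
	unsigned int gap_us;		/* inter-message gap length */
	unsigned int output_bits;	/* decoded output data length */
};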

OK, maybe that is a little too much time and effort. ;)

Regards,
Andy


 Cheers,
 Mauro




Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Jon Smirl
On Wed, Jul 28, 2010 at 11:56 AM, Mauro Carvalho Chehab
mche...@redhat.com wrote:
 Em 28-07-2010 11:41, Jon Smirl escreveu:

 It's possible to build a Linux IR decoder engine that can be loaded
 with the old LIRC config files.

 I think it is a good idea to have a decoder that works with such files anyway.

The recorder should use the Linux IR system to record the data. It
would be confusing to mix the systems. Users need to be really sure that
the standard protocol decoders don't understand their protocol before
resorting to this. Anyone in this situation should post their
recorded data so we can check for driver implementation errors.

An example: if you use irrecord on Sony remotes lirc always records
them in raw mode. The true problem here is that irrecord doesn't
understand that Sony remotes mix different flavors of the Sony
protocol on a single remote. This leads you to think that the Sony
protocol engine is broken when it really isn't. It's the irrecord tool
that is broken.  The kernel IR system will decode these remotes
correctly without resorting to raw mode.

 There are some good reasons for that, as it would allow in-kernel support for
 protocols that may have some patent restrictions on a few countries that allow
 patents on software.

Are there any IR protocols less than 20 (or 17) years old? If they are
older than that, the patents have expired. I expect IR use to decline
in the future; it will be replaced with RF4CE radio remotes.


 We'll need to discuss the API requirements for such decoder, in order to load
 the RC decoding code into it.

 Cheers,
 Mauro.




-- 
Jon Smirl
jonsm...@gmail.com


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Andy Walls
On Wed, 2010-07-28 at 13:04 -0400, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 11:56 AM, Mauro Carvalho Chehab
 mche...@redhat.com wrote:
  Em 28-07-2010 11:41, Jon Smirl escreveu:

 
 Are there any IR protocols less than 20 (or 17) years old? If they are
 older than that the patents have expired. I expect IR use to decline
 in the future, it will be replaced with RF4CE radio remotes.

UEI's XMP protocol for one, IIRC.

UEI are the folks that sell/make OneForALL branded remotes.

You can read about their patents' remaining lifetimes in this March 2010
SEC filing:

http://www.faqs.org/sec-filings/100315/UNIVERSAL-ELECTRONICS-INC_10-K/

1 to 18 years - that includes the ones they just bought from Zilog.
That is not to say that all those patents cover protocols.


Regards,
Andy



Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Jon Smirl
On Wed, Jul 28, 2010 at 1:02 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Wed, 2010-07-28 at 12:42 -0300, Mauro Carvalho Chehab wrote:
 Em 28-07-2010 11:53, Jon Smirl escreveu:
  On Wed, Jul 28, 2010 at 10:38 AM, Andy Walls awa...@md.metrocast.net 
  wrote:
  On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:

  I recommend that all decoders initially follow the strict protocol
  rules. That will let us find bugs like this one in the ENE driver.

 Agreed.

 Well...

 I'd possibly make an exception for the protocols that have long-mark
 leaders.  The actual long mark measurement can be far off from the
 protocol's specification and needs a larger tolerance (IMO).

 Only allowing 0.5 to 1.0 of a protocol time unit tolerance, for a
 protocol element that is 8 to 16 protocol time units long, doesn't make
 too much sense to me.  If the remote has the basic protocol time unit
 off from our expectation, the error will likely be amplified in a long
 protocol elements and very much off our expectation.

Do you have a better way to differentiate JVC and NEC protocols? They
are pretty similar except for the timings. What happened in this case
was that the first signals matched the NEC protocol. Then we shifted
to bits that matched JVC protocol.

The NEC bits are 9000/8400 = 7% longer. If we allow more than a 3.5%
error in the initial bit you can't separate the protocols.

In general the decoders are pretty lax and the closest to the correct
one will decode the stream. The 50% rule only comes into play between
two very similar protocols.

One solution would be to implement NEC/JVC in the same engine. Then
apply the NEC consistency checks. If the consistency checks pass,
present the event on the NEC interface. And then always present the
event on the JVC interface.
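
In other words, something along these lines (the helper names are placeholders
for whatever the shared engine would call internally, not existing functions):

#include <linux/types.h>

/* Placeholder hooks into the shared engine. */
extern void ir_report_jvc(u32 bits, unsigned int count);
extern void ir_report_nec(u32 bits, unsigned int count);
extern bool nec_checks_pass(u32 bits, unsigned int count);

static void nec_jvc_deliver(u32 bits, unsigned int count)
{
	/* Always present the decoded bits on the JVC interface. */
	ir_report_jvc(bits, count);

	/* Present them on the NEC interface only when the NEC
	 * complement checks hold. */
	if (nec_checks_pass(bits, count))
		ir_report_nec(bits, count);
}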

 I think that the better is to add some parameters, via sysfs, to relax the
 rules at the current decoders, if needed.

 Is that worth the effort?  It seems like only going half-way to an
 ultimate end state.

 crazy idea
 If you go through the effort of implementing fine grained controls
 (tweaking tolerances for this pulse type here or there), why not just
 implement a configurable decoding engine that takes as input:

        symbol definitions
                (pulse and space length specifications and tolerances)
        pulse train states
        allowed state transitions
        gap length
        decoded output data length

 and instantiates a decoder that follows a user-space provided
 specification?

 The user can write his own decoding engine specification in a text file,
 feed it into the kernel, and the kernel can implement it for him.
 /crazy idea

 OK, maybe that is a little too much time and effort. ;)

 Regards,
 Andy


 Cheers,
 Mauro






-- 
Jon Smirl
jonsm...@gmail.com


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Jon Smirl
On Wed, Jul 28, 2010 at 1:21 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Wed, 2010-07-28 at 13:04 -0400, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 11:56 AM, Mauro Carvalho Chehab
 mche...@redhat.com wrote:
  Em 28-07-2010 11:41, Jon Smirl escreveu:


 Are there any IR protocols less than 20 (or 17) years old? If they are
 older than that the patents have expired. I expect IR use to decline
 in the future, it will be replaced with RF4CE radio remotes.

 UEI's XMP protocol for one, IIRC.

The beauty of LIRC is that you can use any remote for input.  If one
remote's protocols are patented, just use another remote.

Only in the case where we have to xmit the protocol is the patent
conflict unavoidable. In that case we could resort to sending a raw
pulse timing string that comes from user space.
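
For what it's worth, the userspace side of that would just be a write of the
pulse/space durations to the lirc character device (e.g. send_raw("/dev/lirc0")).
A hedged sketch, with a made-up example frame; durations are in microseconds,
alternating pulse/space, starting and ending with a pulse:

#include <fcntl.h>
#include <unistd.h>

static int send_raw(const char *dev)
{
	/* Hypothetical frame: header plus a few bits. */
	unsigned int buf[] = { 9000, 4500, 560, 560, 560, 1690, 560 };
	int fd = open(dev, O_WRONLY);
	int ret = 0;

	if (fd < 0)
		return -1;
	if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
		ret = -1;
	close(fd);
	return ret;
}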


 UEI are the folks that sell/make OneForALL branded remotes.

 You can read about their patents' remaining lifetimes in this March 2010
 SEC filing:

 http://www.faqs.org/sec-filings/100315/UNIVERSAL-ELECTRONICS-INC_10-K/

 1 to 18 years - that includes the ones they just bought from Zilog.
 That is not to say that all those patents cover protocols.


 Regards,
 Andy





-- 
Jon Smirl
jonsm...@gmail.com


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
On 28-07-2010 14:04, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 11:56 AM, Mauro Carvalho Chehab
 mche...@redhat.com wrote:
 Em 28-07-2010 11:41, Jon Smirl escreveu:

 It's possible to build a Linux IR decoder engine that can be loaded
 with the old LIRC config files.

 I think it is a good idea to have a decoder that works with such files 
 anyway.
 
 The recorder should use the Linux IR system to record the data. It
 would confusing to mix the systems. Users need to be really sure that
 the standard protocol decoders don't understand their protocol before
 resorting to this. Any one in this situation should post their
 recorded data so we can check for driver implementation errors.
 
 An example: if you use irrecord on Sony remotes lirc always records
 them in raw mode. The true problem here is that irrecord doesn't
 understand that Sony remotes mix different flavors of the Sony
 protocol on a single remote. This leads you to think that the Sony
 protocol engine is broken when it really isn't. It's the irrecord tool
 that is broken.  The kernel IR system will decode these remotes
 correctly without resorting to raw mode.

A decoder like that should be a last resort, used only in the
cases where there's no other option.

 There are some good reasons for that, as it would allow in-kernel support for
 protocols that may have some patent restrictions on a few countries that 
 allow
 patents on software.
 
 Are there any IR protocols less than 20 (or 17) years old?

Yes. This protocol is brand new:
https://www.smkusa.com/usa/technologies/qp/

And several new devices are starting to accept it.

 If they are
 older than that the patents have expired. I expect IR use to decline
 in the future, it will be replaced with RF4CE radio remotes.

I expect so, but it will take some time until this transition happens.

Cheers,
Mauro.


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Jarod Wilson
On Wed, Jul 28, 2010 at 03:08:13PM -0300, Mauro Carvalho Chehab wrote:
 Em 28-07-2010 14:04, Jon Smirl escreveu:
  On Wed, Jul 28, 2010 at 11:56 AM, Mauro Carvalho Chehab
  mche...@redhat.com wrote:
  Em 28-07-2010 11:41, Jon Smirl escreveu:
 
  It's possible to build a Linux IR decoder engine that can be loaded
  with the old LIRC config files.
 
  I think it is a good idea to have a decoder that works with such files 
  anyway.
  
  The recorder should use the Linux IR system to record the data. It
  would confusing to mix the systems. Users need to be really sure that
  the standard protocol decoders don't understand their protocol before
  resorting to this. Any one in this situation should post their
  recorded data so we can check for driver implementation errors.
  
  An example: if you use irrecord on Sony remotes lirc always records
  them in raw mode. The true problem here is that irrecord doesn't
  understand that Sony remotes mix different flavors of the Sony
  protocol on a single remote. This leads you to think that the Sony
  protocol engine is broken when it really isn't. It's the irrecord tool
  that is broken.  The kernel IR system will decode these remotes
  correctly without resorting to raw mode.
 
 A decoder like that should be a last-resort decoder, only in the
 cases where there's no other option.
 
  There are some good reasons for that, as it would allow in-kernel support 
  for
  protocols that may have some patent restrictions on a few countries that 
  allow
  patents on software.
  
  Are there any IR protocols less than 20 (or 17) years old?
 
 Yes. This protocol is brand new:
   https://www.smkusa.com/usa/technologies/qp/
 
 And several new devices are starting to accept it.

The US patent appears to have been filed in 1995 and granted in 1997, so
brand new is relative. ;)

http://www.freepatentsonline.com/5640160.html

We do have a few more years of being encumbered by it here in the US
though. :(

-- 
Jarod Wilson
ja...@redhat.com



Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Andy Walls
On Wed, 2010-07-28 at 13:35 -0400, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 1:02 PM, Andy Walls awa...@md.metrocast.net wrote:
  On Wed, 2010-07-28 at 12:42 -0300, Mauro Carvalho Chehab wrote:
  Em 28-07-2010 11:53, Jon Smirl escreveu:
   On Wed, Jul 28, 2010 at 10:38 AM, Andy Walls awa...@md.metrocast.net 
   wrote:
   On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:
 
   I recommend that all decoders initially follow the strict protocol
   rules. That will let us find bugs like this one in the ENE driver.
 
  Agreed.
 
  Well...
 
  I'd possibly make an exception for the protocols that have long-mark
  leaders.  The actual long mark measurement can be far off from the
  protocol's specification and needs a larger tolerance (IMO).
 
  Only allowing 0.5 to 1.0 of a protocol time unit tolerance, for a
  protocol element that is 8 to 16 protocol time units long, doesn't make
  too much sense to me.  If the remote has the basic protocol time unit
  off from our expectation, the error will likely be amplified in a long
  protocol elements and very much off our expectation.
 
 Do you have a better way to differentiate JVC and NEC protocols? They
 are pretty similar except for the timings.

Yes: Invoke the 80/20 rule and don't try.  Enable NEC and disable JVC by
default.  Let the users know so as to properly manage user expectations.
(Maxim's original question was about expectation.)

When the user knows NEC isn't working, or he suspects JVC may work, he
can bind that protocol to the particular IR receiver.


Trying to solve the discrimination problem with blindly parallel
decoding all the possible protocols is a big waste of effort IMO:

a. Many remotes are sloppy and out of spec, and get worse with weak
batteries.

b. The IR receiver driver knows what remotes possibly came bundled with
the hardware.  (For the case of the MCE USB, it's almost always an RC-6
6A remote.)

c. The user can tell the kernel about his remote unambiguously.

There's no burning need to wear a blindfold, AFAICT, so let's not.

Why bother to solve a hard problem (discrimination of protocols from out
of spec remotes), when it raises the error rate of solving the simple
one (properly decoding a single protocol)?

Doing many things poorly is worse than doing one thing well.
Non-adaptive protocol discovery (i.e. blind parallel decoding) should
not be the default if it leads to problems or inflated expectations for
the user.


  What happened in this case
 was that the first signals matched the NEC protocol. Then we shifted
 to bits that matched JVC protocol.
 
 The NEC bits are 9000/8400 = 7% longer. If we allow more than a 3.5%
 error in the initial bit you can't separate the protocols.
 
 In general the decoders are pretty lax and the closest to the correct
 one with decode the stream. The 50% rule only comes into play between
 two very similar protocols.
 
 One solution would be to implement NEC/JVC in the same engine. Then
 apply the NEC consistency checks. If the consistency check pass
 present the event on the NEC interface. And then always present the
 event on the JVC interface.

It's just too simple to have the user:

a. Try NEC
b. Try JVC
c. Make a judgment and stick with the one he perceives works.


To have reliable discrimination in the general case between two
protocols, given the variables out of our control (i.e. the remote
control implementation) would require some sort of data collection and
adaptive algorithm to go on inside the kernel.  I don't think you can
get reliable discrimination in one key press.  Maybe looking at the
key press and the repeats together would up the probability of correct
discrimination (that's one criterion you examined to make a
determination in your earlier email).

Regards,
Andy




Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
On 28-07-2010 14:02, Andy Walls wrote:
 On Wed, 2010-07-28 at 12:42 -0300, Mauro Carvalho Chehab wrote:
 Em 28-07-2010 11:53, Jon Smirl escreveu:
 On Wed, Jul 28, 2010 at 10:38 AM, Andy Walls awa...@md.metrocast.net 
 wrote:
 On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:
 
 I recommend that all decoders initially follow the strict protocol
 rules. That will let us find bugs like this one in the ENE driver.

 Agreed.
 
 Well... 
 
 I'd possibly make an exception for the protocols that have long-mark
 leaders.  The actual long mark measurement can be far off from the
 protocol's specification and needs a larger tolerance (IMO).
 
 Only allowing 0.5 to 1.0 of a protocol time unit tolerance, for a
 protocol element that is 8 to 16 protocol time units long, doesn't make
 too much sense to me.  If the remote has the basic protocol time unit
 off from our expectation, the error will likely be amplified in a long
 protocol elements and very much off our expectation.

We may adjust it as we notice problems, but relaxing the rules may cause
bad effects, so it is better to be more strict.

 I think that the better is to add some parameters, via sysfs, to relax the
 rules at the current decoders, if needed.
 
 Is that worth the effort?  It seems like only going half-way to an
 ultimate end state.

Well, let's first see if this is needed. Then we can decide case by case.

 crazy idea
 If you go through the effort of implementing fine grained controls
 (tweaking tolerances for this pulse type here or there), why not just
 implement a configurable decoding engine that takes as input:
 
   symbol definitions
   (pulse and space length specifications and tolerances)
   pulse train states
   allowed state transitions
   gap length
   decoded output data length
 
 and instantiates a decoder that follows a user-space provided
 specification?
 
 The user can write his own decoding engine specification in a text file,
 feed it into the kernel, and the kernel can implement it for him.
 /crazy idea

It is not a crazy idea, and perhaps this is the only way to work with certain
protocols, like Quatro Pulse (see my previous email).

But I think that we should still have proper decoders for the common
protocols where there's no legal restriction on implementing a decoder.
A generic decoder will be less efficient than a dedicated one.

 OK, maybe that is a little too much time and effort. ;)

Good point. Well, we'll need some volunteer to write such a driver ;)

Cheers,
Mauro


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
On 28-07-2010 14:38, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 1:21 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Wed, 2010-07-28 at 13:04 -0400, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 11:56 AM, Mauro Carvalho Chehab
 mche...@redhat.com wrote:
 Em 28-07-2010 11:41, Jon Smirl escreveu:


 Are there any IR protocols less than 20 (or 17) years old? If they are
 older than that the patents have expired. I expect IR use to decline
 in the future, it will be replaced with RF4CE radio remotes.

 UEI's XMP protocol for one, IIRC.
 
 The beauty of LIRC is that you can use any remote for input.  If one
 remote's protocols are patented, just use another remote.
 
 Only in the case where we have to xmit the protocol is the patent
 conflict unavoidable. In that case we could resort to sending a raw
 pulse timing string that comes from user space.

Well, software patents are valid in only a very few countries. People who live
in a software-patent-free country can keep using those protocols if they
can just upload a set of rules for a generic driver. On the other hand,
a hardcoded decoder for a patented protocol cannot be inside the kernel, as
this would restrict kernel distribution in the countries that do allow
software patents.

Cheers,
Mauro.



Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
Em 28-07-2010 15:05, Jarod Wilson escreveu:
 On Wed, Jul 28, 2010 at 03:08:13PM -0300, Mauro Carvalho Chehab wrote:
 Em 28-07-2010 14:04, Jon Smirl escreveu:
 On Wed, Jul 28, 2010 at 11:56 AM, Mauro Carvalho Chehab
 mche...@redhat.com wrote:
 Em 28-07-2010 11:41, Jon Smirl escreveu:

 It's possible to build a Linux IR decoder engine that can be loaded
 with the old LIRC config files.

 I think it is a good idea to have a decoder that works with such files 
 anyway.

 The recorder should use the Linux IR system to record the data. It
 would confusing to mix the systems. Users need to be really sure that
 the standard protocol decoders don't understand their protocol before
 resorting to this. Any one in this situation should post their
 recorded data so we can check for driver implementation errors.

 An example: if you use irrecord on Sony remotes lirc always records
 them in raw mode. The true problem here is that irrecord doesn't
 understand that Sony remotes mix different flavors of the Sony
 protocol on a single remote. This leads you to think that the Sony
 protocol engine is broken when it really isn't. It's the irrecord tool
 that is broken.  The kernel IR system will decode these remotes
 correctly without resorting to raw mode.

 A decoder like that should be a last-resort decoder, only in the
 cases where there's no other option.

 There are some good reasons for that, as it would allow in-kernel support 
 for
 protocols that may have some patent restrictions on a few countries that 
 allow
 patents on software.

 Are there any IR protocols less than 20 (or 17) years old?

 Yes. This protocol is brand new:
  https://www.smkusa.com/usa/technologies/qp/

 And several new devices are starting to accept it.
 
 The US patent appears to have been filed in 1995 and granted in 1997, so
 brand new is relative. ;)

Yes, I saw the patent timestamps too ;) Yet, AFAIK, they're starting to use
this protocol on newer IR devices, so we'll probably see some new devices
using it.
 
 http://www.freepatentsonline.com/5640160.html
 
 We do have a few more years of being encumbered by it here in the US
 though. :(
 

:(

Cheers,
Mauro.


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
Andy,

On 28-07-2010 15:18, Andy Walls wrote:
 On Wed, 2010-07-28 at 13:35 -0400, Jon Smirl wrote:
 On Wed, Jul 28, 2010 at 1:02 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Wed, 2010-07-28 at 12:42 -0300, Mauro Carvalho Chehab wrote:
 Em 28-07-2010 11:53, Jon Smirl escreveu:
 On Wed, Jul 28, 2010 at 10:38 AM, Andy Walls awa...@md.metrocast.net 
 wrote:
 On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:

 I recommend that all decoders initially follow the strict protocol
 rules. That will let us find bugs like this one in the ENE driver.

 Agreed.

 Well...

 I'd possibly make an exception for the protocols that have long-mark
 leaders.  The actual long mark measurement can be far off from the
 protocol's specification and needs a larger tolerance (IMO).

 Only allowing 0.5 to 1.0 of a protocol time unit tolerance, for a
 protocol element that is 8 to 16 protocol time units long, doesn't make
 too much sense to me.  If the remote has the basic protocol time unit
 off from our expectation, the error will likely be amplified in a long
 protocol elements and very much off our expectation.

 Do you have a better way to differentiate JVC and NEC protocols? They
 are pretty similar except for the timings.
 
 Yes: Invoke the 80/20 rule and don't try.

In the room where my computers are located, I have two wide fluorescent lamps,
each 20W. If I don't hide the IR sensors below my desk, those lamps
are enough to generate random flickers at the sensors. With the more relaxed
driver we used to have at saa7134, that ended up producing random scancodes
or, even worse, random repeat codes. So, lots of false-positive events. It is
way worse to have false-positive events than false-negative ones.

So, I don't think it is a good idea to use a relaxed mode by default.


 Enable NEC and disable JVC by
 default.  Let the users know so as to properly manage user expectations.
 (Maxim's original question was about expectation.)

We should discuss RC subsystem evolution a little during LPC/2010,
but, from my point of view, we should soon deprecate the in-kernel keymap tables
in some new kernel version, using instead the ir-keycode application to
dynamically load the keycode tables via UDEV. Of course, after some time,
we may end up removing all those tables from the kernel.

So, assuming that we follow this path, what we'll have for a newer device is:

For most devices, the keymap configuration table (rc_maps.cfg) will associate
all known devices with their corresponding keytable (we still need to generate
a default rc_maps.cfg that corresponds to the current in-kernel mapping, but
this is trivial).

As ir-keytable disables all protocols but the one(s) needed by a given device,
in practice, if the scancode table specifies a NEC keymap table, JVC will be
disabled. If the table is for JVC, NEC will be disabled.

So, this already happens in a practical scenario, as all decoders will be
enabled only before loading a keymap (or if the user explicitly enables the
other decoders).

So, the device will be in some sort of training mode, i.e. it will try every
possible decoder, and will be generating the scancodes for some userspace
application that will be learning the keycodes and creating a keymap table.

IMO, we should have a way to tell the RC and/or the decoding subsystem to work
in a relaxed mode only when the user (or the userspace app) detects that
there's something wrong with that device.

 When the user knows NEC isn't working, or he suspects JVC may work, he
 can bind that protocol to the particular IR receiver.
 
 Trying to solve the discrimination problem with blindly parallel
 decoding all the possible protocols is a big waste of effort IMO:
 
 a. Many remotes are sloppy and out of spec, and get worse with weak
 batteries.
 
 b. The IR receiver driver knows what remotes possibly came bundled with
 the hardware.  (For the case of the MCE USB, it's almost always an RC-6
 6A remote.)
 
 c. The user can tell the kernel about his remote unambiguously.
 
 There's no burning need to wear a blindfold, AFAICT, so let's not.
 
 Why bother to solve a hard problem (discrimination of protocols from out
 of spec remotes), when it raises the error rate of solving the simple
 one (properly decoding a single protocol)?
 
 Doing many things poorly is worse than doing one thing well.
 Non-adaptive protocol discovery (i.e. blind parallel decoding) should
 not be the default if it leads to problems or inflated expectations for
 the user.
 
 
  What happened in this case
 was that the first signals matched the NEC protocol. Then we shifted
 to bits that matched JVC protocol.

 The NEC bits are 9000/8400 = 7% longer. If we allow more than a 3.5%
 error in the initial bit you can't separate the protocols.

 In general the decoders are pretty lax and the closest to the correct
 one with decode the stream. The 50% rule only comes into play between
 two very similar protocols.

 One solution would be to 

Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Maxim Levitsky
On Wed, 2010-07-28 at 17:13 -0300, Mauro Carvalho Chehab wrote: 
 Andy,
 
 Em 28-07-2010 15:18, Andy Walls escreveu:
  On Wed, 2010-07-28 at 13:35 -0400, Jon Smirl wrote:
  On Wed, Jul 28, 2010 at 1:02 PM, Andy Walls awa...@md.metrocast.net 
  wrote:
  On Wed, 2010-07-28 at 12:42 -0300, Mauro Carvalho Chehab wrote:
  Em 28-07-2010 11:53, Jon Smirl escreveu:
  On Wed, Jul 28, 2010 at 10:38 AM, Andy Walls awa...@md.metrocast.net 
  wrote:
  On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:
 
  I recommend that all decoders initially follow the strict protocol
  rules. That will let us find bugs like this one in the ENE driver.
 
  Agreed.
 
  Well...
 
  I'd possibly make an exception for the protocols that have long-mark
  leaders.  The actual long mark measurement can be far off from the
  protocol's specification and needs a larger tolerance (IMO).
 
  Only allowing 0.5 to 1.0 of a protocol time unit tolerance, for a
  protocol element that is 8 to 16 protocol time units long, doesn't make
  too much sense to me.  If the remote has the basic protocol time unit
  off from our expectation, the error will likely be amplified in long
  protocol elements and be very much off from our expectation.
 
  Do you have a better way to differentiate JVC and NEC protocols? They
  are pretty similar except for the timings.
  
  Yes: Invoke the 80/20 rule and don't try.
 
 In the room where my computers are located, I have two wide fluorescent lamps
 of 20W each. If I don't hide the IR sensors below my desk, those lamps are
 enough to generate random flickers at the sensors. With the more relaxed
 driver we used to have in saa7134, it ended up producing random scancodes,
 or, even worse, random repeat codes. So, lots of false-positive events. It is
 way worse to have false-positive events than false-negative ones.
 
 So, I don't think it is a good idea to use a relaxed mode by default.
 
 
  Enable NEC and disable JVC by
  default.  Let the users know so as to properly manage user expectations.
  (Maxim's original question was about expectation.)
 
 We should discuss a little bit about RC subsystem evolution during LPC/2010,
 but, from my point of view, we should soon deprecate the in-kernel keymap
 tables in some new kernel version, using instead the ir-keycode application
 to dynamically load the keycode tables via udev. Of course, after some time,
 we may end up removing all those tables from the kernel.
/me is very happy about it.
The reason isn't even about size or some principle.
These keymaps just increase compilation time too much...

 
 So, assuming that we follow this patch, what we'll have for a newer device is:
 
 For most devices, the keymap configuration table (rc_maps.cfg) will associate
 all known devices with their corresponding keytable (we still need to generate
 a default rc_maps.cfg that corresponds to the current in-kernel mapping, but
 this is trivial).
 
 As ir-keytable disables all protocols but the one(s) needed by a given device,
 in practice, if the scancode table specifies a NEC keymap, JVC will be
 disabled. If the table is for JVC, NEC will be disabled.
 
 So, this already happens in a practical scenario: all decoders will be enabled
 only before a keymap is loaded (or if the user explicitly enables the other
 decoders).
 
 So, the device will be in some sort of training mode, i.e. it will try every
 possible decoder and will be generating scancodes for some userspace
 application that will be learning the keycodes and creating a keymap table.
 
 IMO, we should have a way to tell the RC and/or the decoding subsystem to work
 in a relaxed mode only when the user (or the userspace app) detects that
 there's something wrong with that device.
 
  When the user knows NEC isn't working, or he suspects JVC may work, he
  can bind that protocol to the particular IR receiver.
  
  Trying to solve the discrimination problem with blindly parallel
  decoding all the possible protocols is a big waste of effort IMO:
  
  a. Many remotes are sloppy and out of spec, and get worse with weak
  batteries.
  
  b. The IR receiver driver knows what remotes possibly came bundled with
  the hardware.  (For the case of the MCE USB, it's almost always an RC-6
  6A remote.)
  
  c. The user can tell the kernel about his remote unambiguously.
  
  There's no burning need to wear a blindfold, AFAICT, so let's not.
  
  Why bother to solve a hard problem (discrimination of protocols from out
  of spec remotes), when it raises the error rate of solving the simple
  one (properly decoding a single protocol)?
  
  Doing many things poorly is worse than doing one thing well.
  Non-adaptive protocol discovery (i.e. blind parallel decoding) should
  not be the default if it leads to problems or inflated expectations for
  the user.
  
  
   What happened in this case
  was that the first signals matched the NEC protocol. Then we shifted
  to bits that matched JVC protocol.
 
  The NEC 

Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Mauro Carvalho Chehab
Em 28-07-2010 18:01, Maxim Levitsky escreveu:
 On Wed, 2010-07-28 at 17:24 +0300, Maxim Levitsky wrote: 
 On Wed, 2010-07-28 at 10:13 -0300, Mauro Carvalho Chehab wrote: 
 Em 28-07-2010 07:40, Jon Smirl escreveu:
 On Wed, Jul 28, 2010 at 2:30 AM, Maxim Levitsky maximlevit...@gmail.com 
 wrote:
 On Tue, 2010-07-27 at 22:33 -0400, Jarod Wilson wrote:
 On Tue, Jul 27, 2010 at 9:29 PM, Jon Smirl jonsm...@gmail.com wrote:

 No its not, its just extended NEC.

 http://www.sbprojects.com/knowledge/ir/nec.htm
 Says the last two bytes should be the complement of each other.

 So for extended NEC it would need to be:
 1100 0010 1010 0101 instead of 1100 0010 1010 0100
 The last bit is wrong.

 From the debug output it is decoding as NEC, but then it fails a
 consistency check. Maybe we need to add a new protocol that lets NEC
 commands through even if they fail the error checks.

 Assuming that Maxim's IR receiver is not causing a bad decode of the
 NEC code, it seems simpler to add a sysfs parameter to relax the NEC
 detection. We should add some way, at the userspace keytable, to do that
 for those RCs that use a NEC-like code.

 There's another alternative: currently, the NEC decoder produces a 16-bit
 code for NEC and a 24-bit code for NEC-extended. The decoder could return a
 32-bit code when none of the checksums match the NEC or NEC-extended
 standard.

 Such a 32-bit code won't match a keycode on a 16-bit or 24-bit table, so
 there's no risk of generating a wrong keycode if the failed consistency
 check is due to a reception error.
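Something along these lines (just a sketch of the idea, assuming the four raw
NEC bytes have already been extracted from the bitstream; nec_compose_scancode()
is a hypothetical name, not the current decoder code):

#include <linux/types.h>

/*
 * Sketch only: compose a scancode from the four NEC data bytes.
 * - both complements valid          -> 16-bit original NEC code
 * - only the command complement ok  -> 24-bit extended NEC code
 * - neither valid                   -> fall back to the full 32 bits,
 *   which cannot collide with entries in a 16- or 24-bit keymap.
 */
static u32 nec_compose_scancode(u8 address, u8 not_address,
                                u8 command, u8 not_command)
{
        if ((address ^ not_address) == 0xff && (command ^ not_command) == 0xff)
                return (address << 8) | command;                /* original NEC */

        if ((command ^ not_command) == 0xff)                    /* extended NEC */
                return (address << 16) | (not_address << 8) | command;

        return ((u32)address << 24) | (not_address << 16) |     /* raw 32 bits  */
               (command << 8) | not_command;
}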

 Btw, we still need to port rc core to use the new table ioctls, as clearing
 all keycodes on a 32-bit table would take forever with the current input
 event ioctls.

 It may also be
 that the NEC machine rejected it because the timing was so far off
 that it concluded that it couldn't be a NEC messages. The log didn't
 include the exact reason it got rejected. Add some printks at the end
 of the NEC machine to determine the exact reason for rejection.

 It is better to rule out a timing issue first, before changing
 the decoder to accept NEC-like codes without consistency checks.

 The current state machines enforce protocol compliance so there are
 probably a lot of older remotes that won't decode right. We can use
 some help in adjusting the state machines to let out of spec codes
 through.

 Yes, but we should take some care to avoid having one protocol decoder
 misinterpret a different protocol. So, I think the decoders may need
 some sysfs nodes to tweak them to accept those older remotes.

 We'll need a consistent way to add some logic to the remote keymaps used by
 ir-keycode, in order to allow it to tweak the decoder when the keycode table
 for such a remote is loaded into the driver.

 User space lirc is much older. Bugs like this have been worked out of
 it. It will take some time to get the kernel implementation up to the
 same level.

 True.


 I more or less got to the bottom of this.


 It turns out that the ENE receiver has a non-linear measurement error.
 That is, the longer the sample is, the larger the error it contains.
 Subtracting around 4% from the samples makes the output look much more
 standard compliant.
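 Something as simple as this already helps (a sketch; ene_correct_sample() is
 an illustrative name, and the 4% factor is purely empirical for this
 receiver, not from any datasheet):

/* Sketch: shrink every measured sample by ~4% before handing it to the
 * decoders, to compensate the receiver's length-proportional
 * over-measurement. */
static unsigned int ene_correct_sample(unsigned int duration_us)
{
        return duration_us - duration_us * 4 / 100;
}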

 You are right that my remote uses the JVC protocol (at least I am sure now
 it isn't NEC, because the repeat looks different).

 My remote now actually partially works with the JVC decoder; it decodes
 every other keypress.

 Still, no repeat is supported.

 However, no receiver (or transmitter) is perfect.
 That's why I prefer lirc: it makes no assumptions about the protocol,
 so it can be 'trained' to work with any remote, under a very large
 range of error tolerances.

 Best regards,
 Maxim Levitsky

 
 I think I found the reason behind some of incorrect behavior.
 
 I see that in-kernel decoding is unhappy about the way I process gaps.
 
 I do exactly the same I did in lirc driver.
 
 At the end of keypress, the driver receives series of spaces from
 hardware.
 I accumulate 'em until patience^Wtimeout runs out.
 Then I put hardware in 'idle' mode, and remember current time.
 
 As soon as I get new pulse, I send a sum of accumulated same and time
 difference to user.
 
 Therefore every keypress ends with a pulse, and starts with space.
 But in-kernel decoding isn't happy about it, it seems.. at least NEC
 decoder...
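 In code, the flow described above is roughly this (a sketch with
 illustrative names; report_space()/report_pulse() stand in for whatever the
 driver actually uses to push samples to the decoders):

#include <linux/ktime.h>
#include <linux/math64.h>
#include <linux/types.h>

/* placeholders for the driver's sample-reporting functions */
extern void report_space(unsigned int duration_us);
extern void report_pulse(unsigned int duration_us);

struct ene_gap_state {
        u64          idle_since_ns;          /* when the trailing space began */
        unsigned int accumulated_space_us;   /* spaces summed before idling   */
        bool         idle;
};

/* timeout expired: stop reporting and remember when we went idle */
static void gap_on_timeout(struct ene_gap_state *s)
{
        s->idle = true;
        s->idle_since_ns = ktime_to_ns(ktime_get());
}

/* new pulse arrived: report the whole gap as one space, then the pulse */
static void gap_on_pulse(struct ene_gap_state *s, unsigned int pulse_us)
{
        if (s->idle) {
                u64 gap_ns = ktime_to_ns(ktime_get()) - s->idle_since_ns;

                report_space(s->accumulated_space_us + div_u64(gap_ns, 1000));
                s->accumulated_space_us = 0;
                s->idle = false;
        }
        report_pulse(pulse_us);
}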
 
 How do you think we should solve that?
 Fix the in-kernel decoders, maybe?

Just send whatever you receive from the hardware to the decoders. Both LIRC
and the decoders already have code to handle the timeouts.

Cheers,
Mauro

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Can I expect in-kernel decoding to work out of box?

2010-07-28 Thread Andy Walls
On Wed, 2010-07-28 at 17:13 -0300, Mauro Carvalho Chehab wrote:
 Andy,
 
 Em 28-07-2010 15:18, Andy Walls escreveu:
  On Wed, 2010-07-28 at 13:35 -0400, Jon Smirl wrote:
  On Wed, Jul 28, 2010 at 1:02 PM, Andy Walls awa...@md.metrocast.net 
  wrote:
  On Wed, 2010-07-28 at 12:42 -0300, Mauro Carvalho Chehab wrote:
  Em 28-07-2010 11:53, Jon Smirl escreveu:
  On Wed, Jul 28, 2010 at 10:38 AM, Andy Walls awa...@md.metrocast.net 
  wrote:
  On Wed, 2010-07-28 at 09:46 -0400, Jon Smirl wrote:
 
  I recommend that all decoders initially follow the strict protocol
  rules. That will let us find bugs like this one in the ENE driver.
 
  Agreed.
 
  Well...
 
  I'd possibly make an exception for the protocols that have long-mark
  leaders.  The actual long mark measurement can be far off from the
  protocol's specification and needs a larger tolerance (IMO).
 
  Only allowing 0.5 to 1.0 of a protocol time unit tolerance, for a
  protocol element that is 8 to 16 protocol time units long, doesn't make
  too much sense to me.  If the remote has the basic protocol time unit
  off from our expectation, the error will likely be amplified in long
  protocol elements and be very much off from our expectation.
 
  Do you have a better way to differentiate JVC and NEC protocols? They
  are pretty similar except for the timings.
  
  Yes: Invoke the 80/20 rule and don't try.
 
 In the room where my computers are located, I have two wide fluorescent lamps
 of 20W each. If I don't hide the IR sensors below my desk, those lamps are
 enough to generate random flickers at the sensors. With the more relaxed
 driver we used to have in saa7134, it ended up producing random scancodes,
 or, even worse, random repeat codes. So, lots of false-positive events. It is
 way worse to have false-positive events than false-negative ones.

So those sorts of false positives are bad, but a glitch filter handles
those.  (Easily done in software - borrow from the LIRC userspace if
need be.)  Set the glitch filter to discard pulses that are shorter than
some fraction of the expected protocol time unit.

In the cx23885-input.c file I chose to set the hardware glitch filter at
75% for RC-6 and 62.5% for NEC (I forget my reasons for those numbers,
aside from them being 3/4 and 5/8 respectively).
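A software equivalent is only a couple of lines (a sketch; the 5/8 threshold
just mirrors the 62.5% NEC figure above, and glitch_filter_pass() is an
illustrative name, not the cx23885-input.c code):

#include <linux/types.h>

/* Sketch: accept a mark/space only if it is at least 5/8 of the protocol
 * time unit, so random flickers from fluorescent lamps never reach the
 * decoders.  The threshold is illustrative and per-protocol. */
static bool glitch_filter_pass(unsigned int duration_us,
                               unsigned int protocol_unit_us)
{
        return duration_us >= protocol_unit_us * 5 / 8;
}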


 So, I don't think it is a good idea to use a relaxed mode by default.

So I disagree.  We should set the default to make the most common use
case as error free as possible, reducing false detections and missed
detections, so that it just works.

I see two conflicting goals, which force optimizations one direction or
another:

1. Optimize for good protocol discrimination
(at the expense of the ability to decode from remotes/receivers that don't
meet the protocol specs).

2. Optimize for good decoding within each protocol
(at the expense of discriminating between the protocols).

My assertion is that goal #1 is not important in the most common use case,
and that the ability to get an acceptable success rate in the general case
is questionable.  There is so much information available to constrain what
IR protocols will be present on a receiver that it hardly seems worth the
effort for the normal user with 1 TV capture device and the remote that
came with it.

I'll also assert that goal #2 is easier to attain and more useful to the
general case.  Cheap remotes and poor ambient light conditions are
common occurrences.  Glitch filters are simpler if you can just throw out
glitches, restarting the measurement, knowing that the tolerances will
still pull you in.  One can also start to think about adaptive decoders
that adjust to the protocol time unit the remote appears to be using.
(In NEC, the normal mark time indicates the remote's idea of the
protocol time unit.)
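A minimal sketch of that adaptive idea (illustrative only; the names and the
averaging factor are assumptions, not existing decoder code):

#include <linux/types.h>

/* NEC's nominal time unit is ~562 us; the 9 ms leader mark is 16 units. */
struct nec_adapt {
        unsigned int unit_us;   /* running estimate, start at 562 */
};

/* fold each observed short mark into an exponential moving average */
static void nec_adapt_update(struct nec_adapt *a, unsigned int mark_us)
{
        a->unit_us = (a->unit_us * 7 + mark_us) / 8;
}

/* expected duration of an element that is 'units' time units long */
static unsigned int nec_expected_us(const struct nec_adapt *a,
                                    unsigned int units)
{
        return a->unit_us * units;
}

Tolerance checks would then compare against nec_expected_us() instead of
fixed nominal durations, so a remote running a few percent fast or slow is
still decoded.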


What am I going to do about it all in the end?  Probably not much. :)
(I seem to have more time to gripe than do much else nowadays. :P )



  Enable NEC and disable JVC by
  default.  Let the users know so as to properly manage user expectations.
  (Maxim's original question was about expectation.)
 
 We should discuss a little bit about RC subsystem evolution during LPC/2010,

Yes.  I'll be there.


 but, from my point of view, we should soon deprecate the in-kernel keymap
 tables in some new kernel version, using instead the ir-keycode application
 to dynamically load the keycode tables via udev. Of course, after some time,
 we may end up removing all those tables from the kernel.
 
 So, assuming that we follow this patch, what we'll have for a newer device is:
 
 For most devices, the keymap configuration table (rc_maps.cfg) will associate
 all known devices with their corresponding keytable (we still need to generate
 a default rc_maps.cfg that corresponds to the current in-kernel mapping, but
 this is trivial).
 
 As ir-keytable disables all protocols but the one(s) needed by a given device,
 in practice, if the scancode table specifies a NEC keymap table, JVC will be 
 disabled.
 If the table is for JVC, NEC will 

Re: Can I expect in-kernel decoding to work out of box?

2010-07-27 Thread Jarod Wilson
On Tue, Jul 27, 2010 at 9:29 PM, Jon Smirl jonsm...@gmail.com wrote:
 On Tue, Jul 27, 2010 at 7:32 PM, Maxim Levitsky maximlevit...@gmail.com 
 wrote:
 On Wed, 2010-07-28 at 01:33 +0300, Maxim Levitsky wrote:
 Hi,

 I ported my ene driver to in-kernel decoding.
 It isn't yet ready to be released, but in few days it will be.

 Now, knowing about wonders of in-kernel decoding, I try to use it, but
 it just doesn't work.

 Mind you that lircd works with this remote.
 (I attach my lircd.conf)

 Here is the output of mode2 for a single keypress:

    8850     4350      525     1575      525     1575
     525      450      525      450      525      450
     525      450      525     1575      525      450
     525     1575      525      450      525     1575
     525      450      525      450      525     1575
     525      450      525      450      525    23625

 That decodes as:
 1100 0010 1010 0100

 In the NEC protocol the second word is supposed to be the inverse of
 the first word and it isn't. The timing is too short for NEC protocol
 too.

 Valid NEC...
 1100 0011 1010 0101

 Maybe JVC protocol but it is longer than normal.

 The JVC decoder was unable to get started decoding it.  I don't think
 the JVC decoder has been tested much. Take a look at it and see why it
 couldn't get out of state 0.

Personally, I haven't really tried much of anything but RC-6(A) and
RC-5 while working on mceusb, so they're the only ones I can really
vouch for myself at the moment. It seems that I don't have many
remotes that aren't an RC-x variant, outside of universals, which I
have yet to get around to programming for various other modes to test
any of the protocol decoders. I assume that David Hardeman already did
that much before submitting each of the ir protocol decoders with his
name on them (which were, if I'm not mistaken, based at least
partially on Jon's earlier work), but it's entirely possible there are
slight variants of each that aren't handled properly just yet. That
right there is one of the major reasons I saw for writing the lirc
bridge driver plugin in the first place -- the lirc userspace decoder
has been around for a LOT longer, and thus is likely to know how to
handle more widely varying IR signals.

-- 
Jarod Wilson
ja...@wilsonet.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html