Jarod Wilson <ja...@wilsonet.com> writes:

> Ah, but the approach I'd take to converting to in-kernel decoding[*]
> would be this:
>
> 1) bring drivers in in their current state
>    - users keep using lirc as they always have
>
> 2) add in-kernel decoding infra that feeds input layer

Well. I think the above is fine enough.
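
To make (2) a bit more concrete, this is roughly the shape I have in
mind; the names below are invented for illustration, not an existing
API. Once a decoder has turned the raw pulse stream into a scancode,
the driver only has to look it up in a keymap and report the result
through the input layer:

#include <linux/input.h>
#include <linux/kernel.h>

struct demo_keymap_entry {
	u32 scancode;	/* decoded protocol scancode */
	u16 keycode;	/* KEY_* code reported via input */
};

/* made-up table; a real driver would carry one per supported remote */
static const struct demo_keymap_entry demo_keymap[] = {
	{ 0x1e3d, KEY_POWER },
	{ 0x1e10, KEY_VOLUMEUP },
	{ 0x1e11, KEY_VOLUMEDOWN },
};

/* idev is assumed to be registered with EV_KEY set and the keycodes
 * above present in its keybit map */
static void demo_report_scancode(struct input_dev *idev, u32 scancode)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(demo_keymap); i++) {
		if (demo_keymap[i].scancode != scancode)
			continue;
		input_report_key(idev, demo_keymap[i].keycode, 1);
		input_sync(idev);
		input_report_key(idev, demo_keymap[i].keycode, 0);
		input_sync(idev);
		return;
	}
	/* unknown scancode: silently dropped here, which is exactly the
	 * case a lirc-style raw interface still covers */
}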

> 3) add option to use in-kernel decoding to existing lirc drivers
>    - users can keep using lirc as they always have
>    - users can optionally try out in-kernel decoding via a modparam
>
> 4) switch the default mode from lirc decode to kernel decode for each lirc 
> driver
>    - modparam can be used to continue using lirc interface instead
>
> 5) assuming users aren't coming at us with pitchforks because things don't
> actually work reliably with in-kernel decoding, deprecate the lirc interface
> in the driver
>
> 6) remove lirc interface from driver; it's now a pure input device
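
As I read (3) and (4), per driver that would amount to a switch along
these lines (again invented names; the default would start as the lirc
path at step 3 and flip to kernel decode at step 4):

#include <linux/module.h>

struct demo_dev;				/* receiver state, elided */
static void demo_decode_and_report(struct demo_dev *dev);	/* input-layer path */
static void demo_queue_raw_to_lirc(struct demo_dev *dev);	/* lirc path */

static bool kernel_decode;	/* step 3: off by default; step 4 flips it */
module_param(kernel_decode, bool, 0444);
MODULE_PARM_DESC(kernel_decode,
		 "Decode IR in the kernel and report through the input layer");

/* called whenever the hardware has delivered a complete burst of IR data */
static void demo_handle_rx(struct demo_dev *dev)
{
	if (kernel_decode)
		demo_decode_and_report(dev);
	else
		demo_queue_raw_to_lirc(dev);
}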

But 3-6 are IMHO not useful. We don't need lirc _or_ input. We need
both at the same time: input for the general, simple case and for
consistency with receivers that decode in firmware/hardware; lirc for
special cases such as mapping the keys, protocols not supported by the
kernel, and so on (also for in-tree media drivers where applicable).
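
In other words, a receiver driver would hand every raw sample to both
consumers unconditionally instead of choosing one of them; a sketch,
again with made-up names:

#include <linux/types.h>

struct demo_dev;				/* receiver state, elided */
static void demo_lirc_queue_sample(struct demo_dev *dev,
				   unsigned int duration_us, bool pulse);
static void demo_kernel_decode_sample(struct demo_dev *dev,
				      unsigned int duration_us, bool pulse);

/* called from the receiver's IRQ/URB handler for every mark/space */
static void demo_handle_sample(struct demo_dev *dev,
			       unsigned int duration_us, bool pulse)
{
	/* raw samples always go out on the lirc device, so lircd keeps
	 * working for odd protocols and user-defined key mappings */
	demo_lirc_queue_sample(dev, duration_us, pulse);

	/* and always through the in-kernel decoders, which report the
	 * common-case keys via the input layer */
	demo_kernel_decode_sample(dev, duration_us, pulse);
}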

> [*] assuming, of course, that it was actually agreed upon that
> in-kernel decoding was the right way, the only way, all others will be
> shot on sight. ;)

I think in-kernel decoding should be the general, primary means, but
not the only one.
-- 
Krzysztof Halasa