Hello,

Could you please clarify how the "Unit Exponent" global item value should be
encoded in report descriptors, according to Device Class Definition for HID
1.11?

My understanding is that it is encoded like the other signed integer
items, i.e. as a two's complement 1-, 2-, or 4-byte integer.

However, the "HID Descriptor Tool" available on the HID page [1] encodes it as
a two's complement nibble, limited to the [-8, 7] range accordingly.

Which encoding is meant by the specification?

I'm CC'ing this message to the "linux-input" mailing list, where I'm trying to
start a discussion about how the "Unit Exponent" value should be interpreted
for the purposes of resolution calculation for HID devices in the Linux
kernel. Could you please keep the CC in your answer, so others stay informed?

Thank you very much.

Sincerely,
Nikolai Kondrashov
--
To unsubscribe from this list: send the line "unsubscribe linux-input" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
