On Thu, 16 Jun 2011, Daniel Kurtz wrote:
On Thu, Jun 16, 2011 at 12:45 AM, Derek Foreman <[email protected]> wrote:
On 06/15/11 05:12, Daniel Kurtz wrote:
On Wed, Jun 15, 2011 at 1:06 AM, Daniel Stone <[email protected]>
wrote:
From: Derek Foreman <[email protected]>
Use a smarter motion estimator that attempts to draw a best-fit line
through the history where possible, including taking acceleration into
account.
Signed-off-by: Derek Foreman <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
Reviewed-by: Simon Thum <[email protected]>
---
src/synaptics.c | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++---
1 files changed, 52 insertions(+), 4 deletions(-)
v2: Split a few hunks out into separate patches.
NOTE: I realise 'outlier' is unused; I've already fixed this in my local
tree.
diff --git a/src/synaptics.c b/src/synaptics.c
index 23751f4..cfd1384 100644
--- a/src/synaptics.c
+++ b/src/synaptics.c
@@ -1803,6 +1803,54 @@ get_edge_speed(SynapticsPrivate *priv, const struct SynapticsHwState *hw,
}
}
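For readers without the full diff handy, the idea the commit message describes is roughly this: fit position against time by least squares over the last few history samples and extrapolate that line over the most recent sample interval. The sketch below is only an illustration of that idea, not the patch's code (the patch additionally accounts for acceleration, which this omits), and the type and helper names are made up:

/* Sketch only, not the patch's implementation.  Fits position = a + b*t
 * by least squares over the last n history samples and extrapolates the
 * fitted line over the most recent sample interval. */
struct hist_point { int x, y; unsigned int millis; };  /* [0] = newest */

static double
fit_slope(const struct hist_point *h, int n, int use_y)
{
    double st = 0, sv = 0, stt = 0, stv = 0, denom;
    int i;

    for (i = 0; i < n; i++) {
        /* use times relative to the oldest sample to keep the sums small */
        double t = (double)(h[i].millis - h[n - 1].millis);
        double v = use_y ? h[i].y : h[i].x;
        st += t; sv += v; stt += t * t; stv += t * v;
    }

    denom = n * stt - st * st;
    if (denom == 0.0)
        return 0.0;

    /* slope of the fitted line == average velocity, in units per ms */
    return (n * stv - st * sv) / denom;
}

static void
estimate_delta_sketch(const struct hist_point *h, int n,
                      unsigned int now_millis, double *dx, double *dy)
{
    double dt = (double)(now_millis - h[0].millis);

    *dx = fit_slope(h, n, 0) * dt;
    *dy = fit_slope(h, n, 1) * dt;
}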
First of all, in the Synaptics case, I believe the touchpad firmware
is generating samples at a very consistent 80 Hz rate.
The reporting rate is only variable due to the way the firmware does
multitouch reporting with its "Advanced Gesture Mode".
Therefore, in the single-touch case, this math mostly just adds in
jitter from PS/2 bus contention and kernel processing latencies.
In that respect, the previous fixed-time approach may have actually
been more accurate.
There may be simpler ways to compensate for AGM-induced variable
timing... but that would be a different patch set.
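To make that concrete: with the firmware sampling at a steady 80 Hz, a velocity computed from measured arrival times folds any transport jitter into the estimate, whereas assuming the nominal interval does not (but goes wrong whenever the firmware really does change its rate, as in AGM). A toy comparison only; the 80 Hz figure is from the paragraph above and everything else is illustrative:

/* Toy comparison, not driver code.  dx is the position change between
 * two consecutive reports; arrival_dt_ms is the measured time between
 * their arrival in the driver. */
#define NOMINAL_DT_MS 12.5   /* 80 Hz sampling, per the discussion above */

static double velocity_measured(int dx, double arrival_dt_ms)
{
    /* PS/2 contention and kernel latency land in the denominator and
     * therefore directly in the velocity estimate. */
    return dx / arrival_dt_ms;
}

static double velocity_nominal(int dx)
{
    /* Hides arrival jitter, but is wrong whenever the firmware really
     * does report at a different rate. */
    return dx / NOMINAL_DT_MS;
}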
Do you have reason to believe that PS/2 bus contention and kernel processing
latencies will add up to more than a few microseconds? I'm not convinced
there will be enough timing jitter to cause perceptible errant motion.
Yes, on some hardware the latencies can add up to several milliseconds,
especially when there is keyboard contention on PS/2 (like shift/ctrl + drag).
Ouch! :(
I've now run some tests on 3 laptops I have on hand. One of them shows no
such latency. One of them has about 3ms of jitter constantly, and the
last shows significant jitter while a key is held - in fact, judging by
the stream of input events, it's quite likely this laptop is actually
dropping trackpad input while I hold down the shift key, as I see a lot
of 20+ ms report intervals, but no short intervals.
While I can see these numbers in a test app, I'm unable to detect a change
in trackpad behaviour when I hold down keys. The new motion interpolation
code is probably why cursor motion on the lossy trackpad doesn't look
choppy when its report rate randomly dips below the screen refresh rate
while a key is held.
I think the new code is either beneficial or benign on the few trackpads I
have access to.
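For reference, the test app mentioned above doesn't need to be anything more than a loop that prints the time between SYN_REPORTs on an evdev node; a rough sketch, using the classic struct input_event layout and an example device path:

/* Rough sketch of an inter-report interval logger (device path is an
 * example; point it at whichever event node the touchpad owns):
 *   cc -o intervals intervals.c && ./intervals /dev/input/event5 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(int argc, char **argv)
{
    struct input_event ev;
    double prev = 0.0;
    int fd = open(argc > 1 ? argv[1] : "/dev/input/event0", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        if (ev.type == EV_SYN && ev.code == SYN_REPORT) {
            double now = ev.time.tv_sec * 1000.0 + ev.time.tv_usec / 1000.0;
            if (prev != 0.0)
                printf("interval: %.1f ms\n", now - prev);
            prev = now;
        }
    }
    return 0;
}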
This driver also handles non-Synaptics devices, like the USB-based ones in
some Apple laptops. I think they report at a different rate (8 ms comes to
mind).
The kernel input layer "defuzz" code also filters events out completely from
time to time, and while I'm currently proposing a slight change to those
semantics on the linux-input list, they're the parameters we have to work
with.
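For anyone who hasn't looked at it, the defuzz filter being referred to lives in drivers/input/input.c (input_defuzz_abs_event). Roughly, and paraphrased from memory rather than quoted, so the exact thresholds may differ between kernel versions, it works like this:

/* Rough paraphrase of the kernel's defuzz logic, not a verbatim copy. */
static int defuzz_sketch(int value, int old_val, int fuzz)
{
    if (fuzz) {
        /* change smaller than fuzz/2: swallow it, report the old value */
        if (value > old_val - fuzz / 2 && value < old_val + fuzz / 2)
            return old_val;

        /* change smaller than fuzz: heavily weighted average */
        if (value > old_val - fuzz && value < old_val + fuzz)
            return (old_val * 3 + value) / 4;

        /* change smaller than 2*fuzz: plain average */
        if (value > old_val - fuzz * 2 && value < old_val + fuzz * 2)
            return (old_val + value) / 2;
    }

    return value;
}

The first branch is where events can vanish entirely: if the filtered value never changes, there is nothing new to report.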
In this case, does userspace not even get the EV_SYN?
Yeah - for MT type B drivers, if no state changes (after defuzz), no
EV_SYN is sent.
As for the implementation below, I'm a little confused, since it
appears to have two different behaviors depending on
count_packet_finger:
(1) If == 1: Return actual delta for last sample time, using current
position.
(2) If > 1: Return estimated delta for last sample time, ignoring
current position.
Why do you use the current position in the == 1 case, but ignore it in
the > 1 case?
That's because during our initial testing you raised the concern that
"really short" interactions with the pad could be lost waiting for the
motion history to fill up. We tried sending the first delta up without
passing it through the regression routine, and it subjectively felt very
responsive, even on a Synaptics AGM device in its 40 Hz reporting mode.
In the > 1 case we only ever output predicted motion. This is so that,
once a subsequent patch adds an "error filter", we can check whether the
current hardware input deviates radically from the regression line and
avoid generating motion based on it.
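Spelled out, the shape is something like the sketch below; this is not the patch itself and every name in it is illustrative:

/* Sketch of the two code paths described above; not the patch itself. */
static void
pick_delta(int count_packet_finger,
           int hw_x, int hw_y,               /* current hardware position */
           int prev_x, int prev_y,           /* most recent history entry */
           double predicted_dx, double predicted_dy, /* from the fitted line */
           double *dx, double *dy)
{
    if (count_packet_finger <= 1) {
        /* First report of the touch: no history to fit yet, so pass
         * the raw delta straight through and keep very short
         * interactions responsive. */
        *dx = hw_x - prev_x;
        *dy = hw_y - prev_y;
    } else {
        /* Enough history: report only the prediction.  The current
         * hardware position is deliberately ignored here so a later
         * "error filter" patch can compare it against the prediction
         * and drop it when it deviates wildly. */
        *dx = predicted_dx;
        *dy = predicted_dy;
    }
}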
This sounds reasonable. Please add a comment to this effect. It is quite
a subtle implementation detail.
Will do.
I believe the original estimate_delta did, in fact, use the current
position (hw->x,y). However, I seem to recall that this was a source
of some error, since I believe this position is pre-scaling (whereas
the positions stored in the HISTORY buffer are post-scaling), and the
delta estimate was taken before it was determined that this new
position was really part of the same gesture.
+/*
+ * Fit a line through the three most recent points in the motion
+ * history and return relative co-ordinates.
Maybe:
(1) compute the average velocity over the past N positions (for N up
to 3) [including the current hw position?]
(2) given this average velocity, return the expected position delta
over the most recent sample time:
delta = (hw->millis - HIST(0).millis) * velocity_estimate;
That's another potentially valid approach. It loses the ability to filter
based on a maximum deviation from the predicted line, and I'm not sure it
will track an accelerating finger as accurately as the linear-regression
approach.
It's hard to guess how that would feel without anyone having written it
though. Have you written and tested an estimator based on that code?
Sorry for the confusion, I'm suggesting another way of explaining what
this function actually does, not an alternative implementation. The
computed "linear regression slope of position vs. time" is the average
velocity.
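For reference, the closed-form slope makes the equivalence explicit:

    slope = sum((t[i] - t_mean) * (x[i] - x_mean)) / sum((t[i] - t_mean)^2)

which has units of position per millisecond, i.e. a velocity averaged over
the fitted window; multiplying it by a time interval gives the expected
position delta over that interval.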
I'll try to beef up the comments a bit here too.
Thanks,
Derek