https://bugzilla.wikimedia.org/show_bug.cgi?id=52287

--- Comment #5 from Matthew Flaschen <[email protected]> ---
(In reply to nuria from comment #4)
> Commenting in more detail here what we have been talking about on IRC:
> 
> Introducing client side delays is not optimal because it produces a slower
> UX that users might otherwise have but, not only that, it is
> double-worrisome(?) cause from a performance data standpoint those delays
> are impossible to detect.

This does not introduce a fixed or minimum client-side delay.  On the contrary,
it sets a maximum possible delay due to logging.  With the current EventLogging
code, the choices are:

1. Block the normal link navigation, wait for the logging to finish, then go to
the target URL when the logging is done.  This works fine in a performant
scenario, but if the server hangs, or takes a long time to return, the user
will never make it to the next page (it's unclear what happens if the browser
itself detects the request as timed out internally).

2. Don't block normal link navigation (logging might not work if link
navigation happens before the logging network request).

3. Use a compromise solution like this one: try to log normally, and if logging
takes longer than expected (500 ms in the code sketched above), do the
navigation anyway.
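The compromise in option 3 can be sketched with plain promises.  This is only an illustration, not the actual patch: `logEvent` and `navigateTo` are hypothetical stand-ins for the real EventLogging call and the link navigation, and the real code uses a jQuery deferred rather than Promise.race.

```javascript
// Sketch of option 3: race the logging request against a fail-safe timer.
// Whichever settles first wins: normally logging finishes well under the
// limit and navigation happens immediately afterwards; if logging hangs,
// the timer fires and navigation proceeds anyway.
function logThenNavigate(logEvent, navigateTo, maxDelayMs = 500) {
  const failSafe = new Promise((resolve) => setTimeout(resolve, maxDelayMs));
  // Swallow logging errors: a failed log should not block navigation.
  return Promise.race([logEvent().catch(() => {}), failSafe]).then(navigateTo);
}
```

Note that `maxDelayMs` is only an upper bound: in the common case the logging promise resolves first and the timer never matters.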

Looking at the patch (which I think is based on prior code at
https://git.wikimedia.org/blob/mediawiki%2Fextensions%2FGettingStarted.git/b66689ef823c759067d4f16c8a894758b727db60/resources%2Fext.gettingstarted.logging.js#L64),
note that logEvent returns a promise, which is then chained to the dfd
deferred.

So in a normal scenario, EventLogging will finish logging (as fast as it
currently does), then resolve, which chains to dfd.resolve (triggering the
always callback) and triggers URL navigation immediately after the logging.

In a bad scenario (the setTimeout fires, meaning the user still hasn't changed
pages), the code effectively says: "EventLogging still hasn't finished logging
the event after 500 ms.  Forget it, I'll go to the target URL anyway and not
wait for logging to be confirmed complete."

500 ms is a maximum delay/fail-safe if EventLogging is not responding normally,
not a minimum delay.

> The preferable option I can see would be storing those events locally and
> polling as needed to report them. This works well for logging internal page
> clickthrough.

By "internal page" do you mean "without leaving the site" or "without leaving
the page"?

I agree this is preferable, but browser support is an important issue.  Also,
if they leave the site, it won't work, but so far we've been concerned with
same-site links.

> Even for browsers that do not have localStorage (IE8) events
> can be logged into a in-memory hashmap. 

An in-memory hashmap will only work if they don't leave the page, right?  The
current logging mechanism already works fine if there is not a full page load
due to a link; an enhancement is not necessary for same-page actions.
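If one did go the store-locally-and-poll route, it could look something like the sketch below.  All names here are hypothetical; it assumes localStorage where available and falls back to an in-memory map, with the caveat discussed above that the in-memory fallback is lost on a full page load.

```javascript
// Sketch of a locally queued event log (hypothetical API).  Events are
// appended to a queue and flushed in batches by periodic polling.  With
// localStorage the queue survives a full page load; the in-memory
// fallback (e.g. IE8 without usable storage) does not.
function createEventQueue(storage) {
  const KEY = 'eventQueue';
  const memory = new Map(); // in-memory fallback store

  function read() {
    const raw = storage ? storage.getItem(KEY) : memory.get(KEY);
    return raw ? JSON.parse(raw) : [];
  }
  function write(events) {
    const raw = JSON.stringify(events);
    if (storage) {
      storage.setItem(KEY, raw);
    } else {
      memory.set(KEY, raw);
    }
  }
  return {
    // Record an event locally without any network request.
    push(event) {
      const events = read();
      events.push(event);
      write(events);
    },
    // Called periodically (the "polling") to report queued events.
    flush(send) {
      const events = read();
      if (events.length) {
        send(events);
        write([]);
      }
    }
  };
}
```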
