Forwarded at the request of Trey Jones
When we first started looking at zero results rate (ZRR), it was an easy
metric to calculate, and it was surprisingly high. We still look at ZRR
<https://searchdata.wmflabs.org/metrics/#failure_rate> because it is so
easy to measure, and anything that improves it is probably a net positive
(note the big dip when the new completion suggester was deployed!!), but we
have more complex metrics that we prefer. There's a user engagement
metric based on clickthroughs, which combines clicks, dwell time, and
other user activity. We also use historical click data in a metric that
improves when we move clicked-on results higher in the results list,
which we use with the Relevance Forge.
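As a rough illustration of the historical-click idea (a hypothetical sketch, not the Relevance Forge's actual scoring code), a metric that improves when clicked-on results move up the list can be as simple as a position-discounted sum over historically clicked results:

```python
# Hypothetical sketch of a position-discounted click metric: each
# historically clicked result contributes more when it is ranked higher,
# so moving clicked-on results up the list raises the score.
def click_position_score(results, clicked):
    """results: ranked list of page titles; clicked: set of titles
    users historically clicked on for this query."""
    score = 0.0
    for rank, title in enumerate(results, start=1):
        if title in clicked:
            score += 1.0 / rank  # reciprocal-rank style discount
    return score

# Moving the clicked page from rank 3 to rank 1 improves the score.
before = click_position_score(["A", "B", "Jurassic World"], {"Jurassic World"})
after = click_position_score(["Jurassic World", "A", "B"], {"Jurassic World"})
```

A re-ranking change can then be evaluated offline by checking whether it raises this score averaged over a large sample of past queries.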
And I didn't mean to give the impression that *most* zero-results queries
are gibberish, though many, many are. And that was something we didn't
really know a year ago. There are also non-gibberish queries that correctly
get zero results, like most DOI searches and many media player queries.
We also see a lot of non-notable (not-yet-notable?) public figures (local
bands, online artists, YouTube musicians), and sometimes just random names.
The discussion in response to Dan's original comment in Phab mentions some
approaches to reduce the risk of automatically releasing private info, but
I still take an absolute stand against unreviewed release. If I can get a
few hundred people to click on a link like this, I can get any message I
want on that list. (Curious? Did you click?) The
message could be less anonymous and much more obnoxious, obviously.
50-character limits won't stop emails and phone numbers from making the
list (which invites spam and cranks). Those can be filtered, but not
reliably.
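For instance (a minimal sketch, not an actual filter we run), the most obvious emails and phone numbers can be caught with regular expressions, but plenty of less regular formats slip through, which is why filtering alone isn't enough:

```python
import re

# Minimal sketch of a pattern filter for query release. These two
# patterns catch only the most obvious emails and phone numbers,
# which is exactly why such data "can be filtered, but not reliably".
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+?\d[\s().-]?){7,15}")  # 7-15 digits with separators

def looks_sensitive(query):
    """Return True if the query matches an obvious email or phone pattern."""
    return bool(EMAIL_RE.search(query) or PHONE_RE.search(query))
```

Anything written out in words ("call me at five five five..."), partial numbers, or unusual formats would sail right past a filter like this.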
I've only looked at these top lists by day in the past, but on that time
scale the top queries usually have counts under 1000 (and that includes IP
duplicates), so the list of queries with 100 IPs might also be very small.
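The data slog itself is conceptually simple (a hypothetical sketch; the record layout and names here are made up, not our actual log schema): group zero-results queries by normalized text, count distinct IPs rather than raw searches, and keep only queries above the threshold:

```python
from collections import defaultdict

# Hypothetical sketch of the "100 distinct IPs" filter discussed in this
# thread; log records and field names are invented for illustration.
def frequent_queries(log_records, min_ips=100):
    """log_records: iterable of (query, ip) pairs for zero-results
    searches. Returns {query: distinct_ip_count} for queries seen from
    at least min_ips distinct IPs, so repeated searches from one IP
    count only once."""
    ips_per_query = defaultdict(set)
    for query, ip in log_records:
        ips_per_query[query.strip().lower()].add(ip)
    return {q: len(ips) for q, ips in ips_per_query.items()
            if len(ips) >= min_ips}
```

If the daily top counts really are mostly under 1000 including duplicates, a 100-distinct-IP cutoff over a longer window is the thing to try, which is the data slogging mentioned above.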
As I said, I'm happy to do the data slogging to try this in a better
fashion if this task is prioritized, and I'd be happy to be wrong about the
quality of the results, but I'm still not hopeful.
Trey Jones
Software Engineer, Discovery
On Fri, Jul 15, 2016 at 10:19 AM, James Heilman <jmh...@gmail.com> wrote:
> The "jurrasic world" example is a good one as it was "fixed" by User:Foxj
> adding a redirect.
> Agree we would need to be careful. The chance of many different IPs all
> searching for "DF198671E" is low, but I agree it's not zero, and we would
> need to have people review the results before they are displayed.
> I guess the question is how much work would it take to look at this sort
> of data for more examples like "jurrasic world"?
> On Fri, Jul 15, 2016 at 10:05 AM, Dan Garry <dga...@wikimedia.org> wrote:
>> On 15 July 2016 at 08:44, James Heilman <jmh...@gmail.com> wrote:
>> > Thanks for the in-depth discussion. So if the terms people are using
>> > that result in "zero search results" are typically gibberish, why do we
>> > care that 30% of our searches result in "zero search results"? A big
>> > deal was made about this a while ago.
>> Good question! I originally used to say that it was my aspiration that
>> users should never get zero results when searching Wikipedia. As a result
>> of Trey's analysis, I don't say that any more. ;-) There are many
>> legitimate cases where users should get zero results. However, there are
>> still tons of examples of where giving users zero results is incorrect;
>> "jurrasic world" was a prominent example of that.
>> It's still not quite right to say that *all* the terms that people use to
>> get zero results are gibberish. There is an extremely long tail
>> <https://en.wikipedia.org/wiki/Long_tail> of zero results queries that
>> aren't gibberish, it's just that the top 100 are dominated by gibberish.
>> This would mean we'd have to release many, many more than the top 100,
>> which significantly increases the risk of releasing personal information.
>> > If one was just to look at those search terms that more than 100 IPs
>> > searched for, would that not remove the concerns about anonymity? One
>> > could also limit the length of the searches displayed to 50 characters.
>> > And provide the first 100 with an initial human review to make sure we
>> > aren't missing anything.
>> The problem with this is that there are still no guarantees. What if you
>> saw the search query "DF198671E"? You might not think anything of it, but
>> some would recognise it as an example of a national insurance number
>> <https://en.wikipedia.org/wiki/National_Insurance_number>, the British
>> equivalent of a social security number*. There's always going to be the
>> potential that we accidentally release something sensitive when we release
>> arbitrary user input, even if it's manually examined by humans.
>> So, in summary:
>> - The top 100 zero results queries are dominated by gibberish.
>> - There's a long tail of zero results queries, meaning we'd have to
>> release many more than the top 100.
>> - Manually examining the top zero results queries is not a foolproof
>> way of eliminating personal data since it's arbitrary user input.
>> I'm happy to answer any questions. :-)
>> * Don't panic, this example national insurance number is actually
>> invalid. ;-)
>> Dan Garry
>> Lead Product Manager, Discovery
>> Wikimedia Foundation
>> Wikimedia-l mailing list, guidelines at:
>> New messages to: Wikimediaemail@example.com
>> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> James Heilman
> MD, CCFP-EM, Wikipedian
> The Wikipedia Open Textbook of Medicine