[issue37966] is_normalized is much slower than the standard's algorithm

2019-08-27 Thread Greg Price


Change by Greg Price:


--
keywords: +patch
pull_requests: +15231
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/15558


[issue37966] is_normalized is much slower than the standard's algorithm

2019-08-27 Thread Greg Price


New submission from Greg Price:

In 3.8 we added a new function `unicodedata.is_normalized`.  The result is 
equivalent to `str == unicodedata.normalize(form, str)`, but the implementation 
uses a version of the "quick check" algorithm from UAX #15 as an optimization 
to try to avoid having to copy the whole string.  This was added in issue 
#32285, commit 2810dd7be.
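
For reference, here is the contract as a pure-Python sketch.  This is not the 
C implementation, just the equivalence the optimization has to preserve:

import unicodedata

def is_normalized_reference(form, s):
    # What unicodedata.is_normalized(form, s) must agree with, minus
    # the full copy of the string that normalize() makes.
    return s == unicodedata.normalize(form, s)

# U+F900 has a canonical decomposition (to U+8C48), so it is not
# NFD-normalized:
assert is_normalized_reference("NFD", "\uf900") is False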

However, it turns out the code doesn't actually implement the same algorithm as 
UAX #15, and as a result we often miss the optimization and end up having to 
compute the whole normalized string after all.

Here's a quick demo on my desktop.  We pass a long string made entirely of a 
character for which the quick-check algorithm always says `NO`, meaning the 
string is definitely not normalized:

$ build.base/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' -- \
'unicodedata.is_normalized("NFD", s)'
50 loops, best of 5: 4.39 msec per loop

$ build.base/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' -- \
's == unicodedata.normalize("NFD", s)'
50 loops, best of 5: 4.41 msec per loop

That's the same 4.4 ms (500,000 two-byte characters, hence a 1 MB string) with 
or without the attempted optimization.

Here it is after a patch that makes the algorithm run as in the standard:

$ build.dev/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' -- \
'unicodedata.is_normalized("NFD", s)'
5000000 loops, best of 5: 58.2 nsec per loop

Nearly 5 orders of magnitude faster (4.39 ms / 58.2 ns is a factor of about 
75,000) -- the difference between O(N) and O(1).

The root cause of the issue is that our `is_normalized` static helper, which 
the new function relies on, was never written as a full implementation of the 
quick-check algorithm.  The full algorithm can return YES, MAYBE, or NO; but 
originally this helper's only caller was the implementation of 
`unicodedata.normalize`, which only cares about YES vs. MAYBE-or-NO.  So the 
helper often returns MAYBE when the standard algorithm would say NO.

(More precisely, perhaps: it's fine that this helper was never a full 
implementation... but it didn't say that anywhere, even while referring to the 
standard algorithm, and as a result set us up for future confusion.)

That's exactly what's happening in the example above: the standard quick-check 
algorithm would say NO, but our helper says MAYBE.  Which for 
`unicodedata.is_normalized` means it has to go compute the whole normalized 
string.
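
To make the gap concrete, here is a runnable Python sketch of the quick-check 
loop from UAX #15, section 9, specialized to NFD.  The real helper is C code 
inside the unicodedata module; deriving the Quick_Check property from 
`unicodedata.decomposition` is this sketch's own shortcut, valid for NFD 
because NFD_Quick_Check is NO exactly for characters with a canonical 
decomposition and has no MAYBE values.

from unicodedata import combining, decomposition

def nfd_quick_check_property(ch):
    # NO for characters with a canonical decomposition (a decomposition
    # mapping without a <...> compatibility tag), YES otherwise.
    d = decomposition(ch)
    return "NO" if d and not d.startswith("<") else "YES"

def quick_check_nfd(s):
    # The quick-check loop from UAX #15, section 9.
    last_ccc = 0
    result = "YES"
    for ch in s:
        ccc = combining(ch)      # canonical combining class
        if 0 < ccc < last_ccc:
            return "NO"          # combining marks out of canonical order
        check = nfd_quick_check_property(ch)
        if check == "NO":
            # The standard algorithm answers NO here and stops.  The old
            # helper lumped NO in with MAYBE, losing this early exit.
            return "NO"
        if check == "MAYBE":
            result = "MAYBE"     # unreachable for NFD; kept to show the
                                 # general shape of the algorithm
        last_ccc = ccc
    return result

print(quick_check_nfd("\uf900" * 500000))  # NO, after a single iteration

For the string in the benchmark above, the very first character's Quick_Check 
property is NO, so this loop exits after one iteration; the old helper 
reported MAYBE instead, and `unicodedata.is_normalized` fell back to 
normalizing the whole 1 MB string.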

--
messages: 350651
nosy: Greg Price
priority: normal
severity: normal
status: open
title: is_normalized is much slower than the standard's algorithm
versions: Python 3.8
