yannvgn added the comment:
Hey Matthew,
we decided to go with this, which is simpler and more straightforward:
    def _uniq(items):
        return list(dict.fromkeys(items))
(see https://github.com/python/cpython/pull/15030)
--
___
Python tracker
<ht
yannvgn added the comment:
> Indeed, it was not expected that the character set contains hundreds of
> thousands items. What is its size in your real code?
> Could you please show benchmarking results for different implementations and
> different sizes?
I can't precisely
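The kind of benchmark being asked for could be sketched as below; `uniq_quadratic` is a hypothetical stand-in for a list-membership-based implementation (not necessarily the exact pre-patch code), compared against the `dict.fromkeys` one-liner across input sizes:

```python
import timeit

def uniq_quadratic(items):
    # Hypothetical stand-in for a list-membership approach:
    # `item not in result` scans a list, so the loop is O(n^2).
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

def uniq_dict(items):
    # O(n): dict.fromkeys deduplicates while preserving order.
    return list(dict.fromkeys(items))

for size in (100, 500, 2000):
    data = list(range(size)) * 2  # every element duplicated once
    t_quad = timeit.timeit(lambda: uniq_quadratic(data), number=3)
    t_dict = timeit.timeit(lambda: uniq_dict(data), number=3)
    print(f"n={len(data)}: list-membership={t_quad:.4f}s dict={t_dict:.4f}s")
```

Both functions return the same result; only the scaling differs.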
Change by yannvgn :
--
keywords: +patch
pull_requests: +14788
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15030
New submission from yannvgn :
In complex cases, parsing regular expressions takes much, much longer on Python >= 3.7.
Example (ipython):
In [1]: import re
In [2]: char_list = ''.join([chr(i) for i in range(0x)])
In [3]: long_char_list = char_list * 10
In [4]: pattern =