Tim Peters added the comment:
LGTM! Ship it :-)
--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue28201>
___
Tim Peters added the comment:
Good catch! I agree - and I wrote this code to begin with, so my opinion
should count ;-)
--
nosy: +tim.peters
Tim Peters added the comment:
Let me clarify something about the extended algorithm: the starting guess is
no longer the most significant source of error. It's the `mul(D(x), pow(D2,
e))` part. `D(x)` is exact, but `pow(D2, e)` may not be exactly representable
with 26 decimal digits
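For example (my illustration, not from the original comment): 2**90 needs 28 decimal digits, so a 26-digit context must round it:

```python
import decimal

c26 = decimal.Context(prec=26)
c_exact = decimal.Context(prec=50)   # 50 digits is plenty: 2**90 is exact here
D2 = decimal.Decimal(2)

# with only 26 digits of precision, pow(D2, 90) is rounded, not exact
assert c26.power(D2, 90) != c_exact.power(D2, 90)
assert c_exact.power(D2, 90) == 2 ** 90
```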
Tim Peters added the comment:
Lucas, I largely agree, but it is documented that the various combinatorial
generators emit items in a particular lexicographic order. So that is
documented, and programs definitely rely on it.
That's why, in an earlier comment, Terry suggested that perhaps
Tim Peters added the comment:
I see nothing wrong with combinatorial generators materializing their inputs
before generation. Perhaps it should be documented clearly. It's certainly
not limited to `product()`. For example,
>>> for i in itertools.combinations(itertools.c
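The example above was truncated; a runnable illustration of the same point (my example, not necessarily the original one) showing that `combinations()` consumes its whole input before yielding anything:

```python
import itertools

it = iter([1, 2, 3])
pairs = list(itertools.combinations(it, 2))
# the input iterator was fully consumed before the first pair was produced
assert next(it, "exhausted") == "exhausted"
assert pairs == [(1, 2), (1, 3), (2, 3)]
```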
Tim Peters added the comment:
Let me give complete code for the last idea, also forcing the scaling
multiplication to use the correct context:
import decimal
c = decimal.DefaultContext.copy()
c.prec = 25
c.Emax = decimal.MAX_EMAX
c.Emin = decimal.MIN_EMIN
def erootn(x
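The definition was cut off above. Here is a runnable sketch of the idea being described - one extended-precision Newton correction on top of the float starting guess - where the body and signature are my reconstruction, not necessarily the thread's exact code:

```python
import decimal

c = decimal.DefaultContext.copy()
c.prec = 25
c.Emax = decimal.MAX_EMAX
c.Emin = decimal.MIN_EMIN

def erootn(x, n):
    # float guess, then one Newton correction carried out in Decimal
    # under context c (a reconstruction of the truncated definition)
    g = x ** (1.0 / n)
    D = decimal.Decimal          # D(float) is an exact conversion
    Dg = D(g)
    # Newton step for f(g) = g**n - x, in "guess - small correction" form
    num = c.subtract(c.power(Dg, n), D(x))
    den = c.multiply(D(n), c.power(Dg, n - 1))
    return float(c.subtract(Dg, c.divide(num, den)))
```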
Tim Peters added the comment:
Oops! The `D2**e` in that code should be `pow(D2, e)`, to make it use the
correct decimal context.
Tim Peters added the comment:
Mark, thanks for the counterexample! I think I can fairly accuse you of
thinking ;-)
I expect the same approach would be zippy for scaling x by 2**e, provided that
the scaled value doesn't exceed the dynamic range of the decimal context. Like
so:
def erootn(x
Tim Peters added the comment:
Mark, the code I showed in roots.py is somewhat more accurate and
significantly faster than the code you just posted. It's not complicated at
all: it just uses Decimal to do a single Newton correction with extended
precision.
Since it doesn't use
Tim Peters added the comment:
The only sane way to do things "like this" is to allow types to define their
own special methods (like `__isnan__()`), in which case the math module defers
to such methods when they exist. For example, this is how
`math.ceil(Fraction)` works, by
Tim Peters added the comment:
I'm at best -0 on the idea: very easy to get the effect without it, and hard
to imagine it's needed frequently. `sorted()` is also very easy to mimic, but
is used often by all sorts of code. For example, to display output in a `for
key in sorted(dict):` loop
Changes by Tim Peters <t...@python.org>:
--
resolution: -> not a bug
stage: -> resolved
status: open -> closed
versions: +Python 3.2 -Python 3.4
Tim Peters added the comment:
BTW, add this other way of writing a native-precision Newton step to see that
it's much worse (numerically) than writing it in the "guess + small_correction"
form used in roots.py. Mathematically they're identical, but numerically they
behave differen
Tim Peters added the comment:
Attached file "roots.py" you can run to get a guess as to how bad pow(x, 1/n)
typically is on your box.
Note that it's usually "pretty darned good" the larger `n` is. There's a
reason for that. For example, when n=1000, all x satisfying 1
Tim Peters added the comment:
Let's spell one of these out, to better understand why sticking to native
precision is inadequate. Here's one way to write the Newton step in "guess +
relatively_small_correction" form:
def plain(x, n):
g = x**(1.0/n)
return g - (g
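The return line was truncated above; a hedged completion of that "guess + relatively_small_correction" form (my reconstruction of the cut-off expression):

```python
def plain(x, n):
    # one Newton step in native float precision, written in
    # "guess - small correction" form (reconstruction of the
    # truncated return line, not necessarily the original)
    g = x ** (1.0 / n)
    return g - (g - x / g ** (n - 1)) / n
```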
Tim Peters added the comment:
As I said, the last code I posted is "fast enough" - I can't imagine a real
application can't live with being able to do "only" tens of thousands of roots
per second. A geometric mean is typically an output summary statistic, not a
tr
Tim Peters added the comment:
Steven, you certainly _can_ ;-) check first whether `r**n == x`, but can you
prove `r` is the best possible result when it's true? Offhand, I can't. I
question it because it rarely seems to _be_ true (in well less than 1% of the
random-ish test cases I tried
Tim Peters added the comment:
That's clever, Serhiy! Where did it come from? It's not Newton's method, but
it also appears to enjoy quadratic convergence.
As to speed, why are you asking? You should be able to time it, yes? On my
box, it's about 6 times slower than the last code I posted
Tim Peters added the comment:
Victor, happy to add comments, but only if there's sufficient interest in
actually using this. In the context of this issue report, it's really only
important that Mark understands it, and he already does ;-)
For example, it starts with float `**` because that's
Tim Peters added the comment:
Adding one more version of the last code, faster by cutting the number of extra
digits used, and by playing "the usual" low-level CPython speed tricks.
I don't claim it's always correctly rounded - although I haven't found a
specific case where it is
Tim Peters added the comment:
Serhiy, I don't know what you're thinking there, and the code doesn't make much
sense to me. For example, consider n=2. Then m == n, so you accept the
initial `g = x**(1.0/n)` guess. But, as I said, there are cases where that
doesn't give the best result
Tim Peters added the comment:
I don't care about correct rounding here, but it is, e.g., a bit embarrassing
that
>>> 64**(1/3)
3.9999999999999996
Which you may or may not see on your box, depending on your platform pow(), but
which you "should" see: 1/3 is not a t
Tim Peters added the comment:
Note that `Pool` grew `starmap()` and `starmap_async()` methods in Python 3.3
to (mostly) address this.
The signature difference from the old builtin `map()` remains regrettable.
Note that the `Pool` version differs from the `concurrent.futures` version of
`map
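A minimal sketch of the `starmap()` signature (my example; it uses `multiprocessing.dummy`, the thread-backed `Pool` with the same API, so it runs without forking):

```python
import multiprocessing.dummy as mp  # thread-backed Pool, identical API

def add(a, b):
    return a + b

with mp.Pool(2) as pool:
    # starmap unpacks each argument tuple into positional arguments,
    # like the old two-iterable builtin map(); plain .map() would
    # instead pass each tuple as a single argument
    sums = pool.starmap(add, [(1, 2), (3, 4)])
```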
Tim Peters added the comment:
Looks to me like this is what the docs are talking about when they say:
"""
As mentioned above, if a child process has put items on a queue (and it has not
used JoinableQueue.cancel_join_thread), then that process will not terminate
until all buff
Tim Peters added the comment:
Noting that `floor_nroot` can be sped a lot by giving it a better starting
guess. In the context of `nroot`, the latter _could_ pass `int(x**(1/n))` as
an excellent starting guess. In the absence of any help, this version figures
that out on its own
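A hedged sketch of such a version (not the thread's actual `floor_nroot`; the unconditional first step relies on the AM-GM bound discussed in this thread, which guarantees the iterate is at least the floor of the true root, after which the iteration only descends):

```python
def floor_nroot(x, n):
    # integer floor of the n'th root of x (x >= 1, n >= 1); a sketch,
    # assuming x is small enough to convert to float for the guess.
    # The float guess is excellent but may sit just below the true
    # root, so take one unconditional Newton step first: by AM-GM the
    # result is >= floor(x ** (1/n)).
    g = max(1, int(x ** (1.0 / n)))
    g = ((n - 1) * g + x // g ** (n - 1)) // n
    while True:
        g2 = ((n - 1) * g + x // g ** (n - 1)) // n
        if g2 >= g:
            return g
        g = g2
```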
Tim Peters added the comment:
Thanks, Mark! I had worked out the `floor_nroot` algorithm many years ago, but
missed the connection to the AM-GM inequality. As a result, instead of being
easy, proving correctness was a pain that stretched over pages. Delighted to
see how obvious it _can_
Tim Peters added the comment:
A meta-note: one iteration of Newton's method generally, roughly speaking,
doubles the number of "good bits" in the initial approximation.
For floating n'th root, it would take an astonishingly bad libm pow() that didn't
get more than half the leading bit
Changes by Tim Peters <t...@python.org>:
--
resolution: -> rejected
stage: -> resolved
Tim Peters added the comment:
Note that "iterable" covers a world of things that may not support indexing
(let alone slicing). For example, it may be a generator, or a file open for
reading.
--
nosy: +tim.peters
Tim Peters added the comment:
Serhiy's objection is a little subtler than that. The Python expression
`math.log(math.e)` in fact yields exactly 1.0, so IF it were the case that x**y
were implemented as
math.exp(math.log(x) * y)
THEN math.e**500 would be computed as math.exp(math.log(math.e
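The first part of that chain is easy to verify (the rest of the comment was truncated; this check is mine):

```python
import math

# log(e) is exactly 1.0, so under the hypothetical exp(log(x) * y)
# implementation, math.e ** 500 would reduce to math.exp(500) with
# an exactly-representable argument
assert math.log(math.e) == 1.0
```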
Tim Peters added the comment:
For those insisting that tau is somehow unnatural, just consider that the
volume of a sphere with radius r is 2*tau/3*r**3 - the formula using pi instead
is just plain impossible to remember ;-)
Tim Peters added the comment:
Hmm. I'd test that tau is exactly equal to 2*pi. All Python platforms (past,
present, and plausible future ones) have binary C doubles, so the only
difference between pi and 2*pi _should_ be in the exponent (multiplication by 2
is exact). Else we screwed up
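The test is one line (my phrasing of it; it should hold on any platform with binary C doubles, since multiplying by 2 only changes the exponent):

```python
import math

# tau should be exactly twice pi: doubling a binary float is exact
assert math.tau == 2 * math.pi
```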
Changes by Tim Peters <t...@python.org>:
--
stage: -> resolved
status: open -> closed
Tim Peters added the comment:
Well, some backslash escapes are processed in the "replacement" argument to
`.sub()`. If your replacement text contains a substring of the form
\g not immediately followed by <, that will raise the exception you're seeing. The parser
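A small demonstration of both cases (my example, not from the original comment):

```python
import re

# \g<1> in the replacement refers to group 1
assert re.sub(r"(\d+)", r"<\g<1>>", "a12b") == "a<12>b"

# \g not immediately followed by < raises re.error
try:
    re.sub(r"(\d+)", r"\g", "a12b")
except re.error:
    pass
else:
    raise AssertionError("expected re.error")
```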
Tim Peters added the comment:
If you don't show us the regular expression, it's going to be darned hard to
guess what it is ;-)
--
nosy: +tim.peters
Tim Peters added the comment:
FYI, I'm seeing the same kind of odd truncation Steve sees - but it goes away
if I refresh the page.
Tim Peters added the comment:
About: "The notion of categorically refusing to let a process end perhaps
overreaches in certain situations." threading.py addressed that all along: if
the programmer _wants_ the process to exit without waiting for a particular
threading.Thread, t
Tim Peters added the comment:
About ""No parents, no children", that's fine so far as it goes. But Python
isn't C, a threading.Thread is not a POSIX thread, and threading.py _does_ have
a concept of "the main thread". There's no conceptual problem _in Python_ with
Tim Peters added the comment:
Devin, a primary point of `threading.py` is to provide a sane alternative to
the cross-platform thread mess. None of these reports are about making it
easier for threads to go away "by magic" when the process ends. It's the
contrary: they
Tim Peters added the comment:
This came up again today as bug 27508. In the absence of "fixing it", we
should add docs to multiprocessing explaining the high-level consequences of
skipping "normal" exit processing (BTW, I'm unclear on why it's skipped).
I've cer
Tim Peters added the comment:
Ah - good catch! I'm closing this as a duplicate of bug18966. The real
mystery now is why the threads _don't_ terminate early under Windows 3.5.2 -
heh.
--
resolution: -> duplicate
status: open -> closed
superseder: -> Threads within multip
Tim Peters added the comment:
Curious: under Python 2.7.11 on Windows, the threads also terminate early
(they run "forever" - as intended - under 3.5.2).
Changes by Tim Peters <t...@python.org>:
--
components: +Library (Lib)
type: -> behavior
Tim Peters added the comment:
Note: this started on stackoverflow:
https://stackoverflow.com/questions/38356584/python-multiprocessing-threading-code-exits-early
I may be missing something obvious, but the only explanation I could think of
for the behavior seen on Ubuntu is that the threads
Tim Peters added the comment:
Note that the same is true in Python 2.
I don't want to document it, though. In `math.floor(44/4.4)`, the
subexpression `44/4.4` by itself wholly rules out that "[as if] with infinite
precision [throughout the larger expression]" may be in play. `44/4
Tim Peters added the comment:
Python's floats are emphatically not doing symbolic arithmetic - they use the
platform's binary floating point facilities, which can only represent a subset
of rationals exactly. All other values are approximated.
In particular, this shows the exact value
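The comment was truncated before its example; a standard way to display the exact value stored for a float (my illustration):

```python
from decimal import Decimal

# Decimal(float) is an exact conversion: it shows the binary
# double's true value, not the shortest repr
exact = Decimal(0.1)
assert str(exact) == "0.1000000000000000055511151231257827021181583404541015625"
```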
Tim Peters added the comment:
Note that the very popular TI graphics calculators have had a distinct nth-root
function at least since the TI-83. It's a minor convenience there.
I'm +0 on adding it to Python's math module, which means not enough to do any
work ;-)
Note that if it is added
Tim Peters added the comment:
I think it's clear Guido would say "#1". The thrust of all his comments to
date is that it was a mistake to change the semantics of os.urandom() on Linux
(and one other platform? don't really care), and that in 3.6+ only `secrets`
should _try_ to suppl
Tim Peters added the comment:
Christian, you should really be the first to vote to close this. The title of
this bug report is about whether it would be good to reduce the _number_ of
bytes Random initialization consumes from os.urandom(), not whether to stop
using os.urandom() entirely
Tim Peters added the comment:
It was a primary purpose of `secrets` to be a place where security best
practices could be implemented, and changed over time, with no concern about
backward compatibility for people who don't use it.
So if `secrets` needs to supply a class with all the methods
Tim Peters added the comment:
Raymond, while I'm in general agreement with you, note that urandom() doesn't
deliver "random" bytes to begin with. A CSPRNG is still a PRNG.
For example, if the underlying urandom() generator is ChaCha20, _it_ has "only"
512 bits of state.
Tim Peters added the comment:
Ah! Yes, .getrandbits(N) outputs remain vulnerable to equation-solving in
Python 3, for any value of N. I haven't seen any code where that matters (may
be "a security hole"), but would bet some _could_ be found.
There's no claim of absolute sec
Tim Peters added the comment:
> Searching github pulls up a number of results of people
> calling it, but I haven't looked through them to see
> how/why they're calling it.
Sorry, I don't know what "it" refers to. Surely not to a program exposing the
output of .getst
Tim Peters added the comment:
Donald, your script appears to recreate the state from some hundreds of
consecutive outputs of getrandbits(64). Well, sure - but what of it? That
just requires inverting the MT's tempering permutation. You may as well note
that the state can be recreated from
Tim Peters added the comment:
Donald, it does matter. The code you found must be using some older version of
Python, because the Python 3 version of randint() uses _randbelow(), which is
an accept/reject method that consumes an _unpredictable_ number of 32-bit
Twister outputs. That utterly
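A sketch of the accept/reject idea (not CPython's exact `_randbelow` code; my simplified version):

```python
import random

def randbelow_sketch(n):
    # draw n.bit_length() bits; reject and redraw until the value
    # is < n.  The number of generator outputs consumed per call is
    # unpredictable, which is the point made above about
    # equation-solving attacks on randint()
    k = n.bit_length()
    r = random.getrandbits(k)
    while r >= n:
        r = random.getrandbits(k)
    return r

samples = [randbelow_sketch(10) for _ in range(1000)]
```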
Tim Peters added the comment:
Didn't anyone here follow the discussion about the `secrets` module? PHP was
crucified by security wonks for its horridly naive ways of initializing its
PRNGs:
https://media.blackhat.com/bh-us-12/Briefings/Argyros/BH_US_12_Argyros_PRNG_WP.pdf
Please don't even
Tim Peters added the comment:
Ya, this annoyance has been there forever. As I recall, the source of the
problem is the Tk text widget (which slows horribly when displaying long lines).
--
nosy: +tim.peters
Tim Peters added the comment:
All versions of cmd.exe want backslashes in paths for the commands implemented
_by_ cmd.exe - those interpret a forward slash as indicating an option. For
example, here on Win10 Pro:
C:\WINDOWS\system32>dir c:\Windows\System32\xwreg.dll
Volume in drive C is
Tim Peters added the comment:
Just noting that the `multiprocessing` module can be used instead. In the
example, add
import multiprocessing as mp
and change
with concurrent.futures.ProcessPoolExecutor() as executor:
to
with mp.Pool() as executor:
That's all it takes
Tim Peters added the comment:
Do note that `.match()` is constrained to match starting at the first byte.
`.search()` is not (it can start matching at any position), and your example
works fine if `.search()` is used instead.
This is all expected, and intended, and documented
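A compact demonstration of the difference (my example):

```python
import re

p = re.compile(r"\d+")
assert p.match("abc123") is None            # match() is anchored at position 0
assert p.search("abc123").group() == "123"  # search() scans forward
assert p.match("123abc").group() == "123"   # match() succeeds at the start
```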
Tim Peters added the comment:
Right, these macros were in the original module (by Vladimir Marangozov).
They've never done anything - never been tested. Over the years I removed
other layers of macro indirection (while other people added more ;-) ), but
left these alone because they point
Tim Peters added the comment:
If that's the actual code you're using, it has a bug: the "if k2[1] is None"
test is useless, since regardless of whether it's true or false, the next `if`
suite overwrites `retval`. You probably meant
elif k1[1] ...
^^
instead of
Tim Peters added the comment:
+1 from me. Julian, you have the patience of a saint ;-)
Tim Peters added the comment:
My opinion doesn't change: I'd rather see an exception. I see no use case for
inserting "into the middle" of a full bounded queue. If I had one, it would
remain trivial to force the specific behavior
Tim Peters added the comment:
I'd raise an exception when trying to insert into a bounded deque that's
already full. There's simply no way to guess what was _intended_; it's dead
easy for the user to implement what they _do_ intend (first make room by
deleting the specific item
Tim Peters added the comment:
If it were treating doubles as floats, you'd get a lot more failures than this.
Many of these look like clear cases of treating _denormal_ doubles as 0.0,
though. I have no experience with ICC, but a quick Google search suggests ICC
flushes denormals to 0.0
Tim Peters added the comment:
Do note that this is not an "edit distance" (like Levenshtein) algorithm. It
works as documented instead ;-) , searching (in effect recursively) for the
leftmost longest contiguous matching blocks. Both "leftmost" and "contiguous"
Tim Peters added the comment:
BTW, the "leftmost longest contiguous" bit is messy to explain, so the main
part of the docs don't explain it all (it's of no interest to 99.9% of users).
Instead it's formally defined in the .find_longest_match() docs:
"""
If i
Changes by Tim Peters <t...@python.org>:
--
components: +Library (Lib) -Extension Modules, ctypes
resolution: -> not a bug
stage: -> resolved
status: open -> closed
Tim Peters added the comment:
This is just hard to believe. The symptom you describe is exactly what's
expected if you got the new test suite but did not compile the new C code, both
added by the fix for:
http://bugs.python.org/issue23600
Since we have numerous buildbots on which
Tim Peters added the comment:
What's your objection? Here's your original example:
>>> from bisect import *
>>> L = [1,2,3,3,3,4,5]
>>> x = 3
>>> i = bisect_left(L, x)
>>> i
2
>>> all(val < x for val in L[:i])
True
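The example was cut off above; the invariant it was presumably heading toward is the standard `bisect_left` contract (my completion):

```python
from bisect import bisect_left

L = [1, 2, 3, 3, 3, 4, 5]
x = 3
i = bisect_left(L, x)
assert i == 2
assert all(val < x for val in L[:i])   # everything left of i is < x
assert all(val >= x for val in L[i:])  # everything from i on is >= x
```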
Changes by Tim Peters <t...@python.org>:
--
nosy: +tim.peters
Tim Peters added the comment:
Thank you for your persistence and patience, Peter! It shouldn't have been
this hard for you :-(
Tim Peters added the comment:
Patch looks good to me! Thanks :-)
Tim Peters added the comment:
Afraid that's a question for python-dev - I lost track of the active branches
over year ago :-(
Tim Peters added the comment:
I expect Peter is correct: the C fromutc() doesn't match the logic of the
Python fromutc(), and there are no comments explaining why the C version
changed the logic.
The last 4 lines of his `time_issues.py` show the difference. The simplified
UKSummerTime
Tim Peters added the comment:
The only way to be certain you're never going to face re-entrancy issues in the
future is to call malloc() directly - and hope nobody redefines that too with
some goofy macro ;-)
In the meantime, stick to PyMem_Malloc(). That's the intended way for code
holding
Tim Peters added the comment:
BTW, I find this very hard to understand:
"it’s possible for x//y to be one larger than" ...
This footnote was written long before "//" was defined for floats. IIRC, the
original version must have said something like:
"it's possible
Tim Peters added the comment:
Stare at footnote 2 for the Reference Manual's "Binary arithmetic operations"
section:
"""
[2] If x is very close to an exact integer multiple of y, it’s possible for
x//y to be one larger than (x-x%y)//y due to rounding. In such cases, P
Tim Peters added the comment:
> What is the rounding mode used by true division,
For binary floats? It inherits whatever the platform C's x/y double division
uses. Should be nearest/even on "almost all" platforms now, unless the user
fiddles with their FPU's r
[Tim]
>> It depends on how expensive .utcoffset()
>> is, which in turn depends on how the tzinfo author implements it.
[Alex]
> No, it does not. In most time zones, UTC offset in seconds can be computed
> by C code as a 4-byte integer
Which is a specific implementation of .utcoffset(). Which
[Random832 ]
> A) I'm still not sure why, but I was talking about adding an int, not a
> timedelta and a string.
>
> B) Older python versions can't make use of either utcoffset or fold, but
> can ignore either of them. I don't even see why they couldn't ignore a
> timedelta
[Tim]
>> Because all versions of Python expect a very specific pickle layout
>> for _every_ kind of pickled object (including datetimes).. Make any
>> change to the pickle format of any object, and older Pythons will
>> simply blow up (raise an exception) when trying to load the new pickle
>> -
[Random832 ]
> Would allowing a 16-byte string in the future have increased the storage
> occupied by a 10-byte string today? Would allowing a third argument in
> the future have increased the storage occupied by two arguments today?
> As far as I can tell the pickle format
[Tim]
>> Sorry, I'm not arguing about this any more. Pickle doesn't work at
>> all at the level of "count of bytes followed by a string".
[Random832 ]
> The SHORT_BINBYTES opcode consists of the byte b'C', followed by *yes
> indeed* "count of bytes followed by a string".
[Random832 ]
Whether or not datetimes stored tm_gmtoff and tm_zone workalikes has
no effect on semantics I can see. If, in your view, they're purely an
optimization, they're just a distraction for now. If you're proposing
to add them _instead_ of adding `fold`, no, that
[Tim]
>> It would be nice to have! .utcoffset() is an expensive operation
>> as-is, and being able to rely on tm_gmtoff would make that dirt-cheap
>> instead.
[Alex]
> If it is just a question of optimization,
Yes. If it's more than just that, then 495 doesn't actually solve the
problem of
[Tim]
>> pytz solves it by _never_ creating a hybrid tzinfo. It only uses
>> eternally-fixed-offset tzinfos. For example, for a conceptual zone
>> with two possible total UTC offsets (one for "daylight", one for
>> "standard"), there two distinct eternally-fixed-offset tzinfo objects
>> in pytz.
[Guido]
>> Wouldn't it be sufficient for people in Creighton to set their timezone to
>> US/Central? IIUC the Canadian DST rules are the same as the US ones. Now,
>> the question may remain how do people know what to set their timezone to.
>> But neither pytz nor datetime can help with that -- it
[Tim]
>> So, on your own machine, whenever daylight time starts or ends, you
>> manually change your TZ environment variable to specify the newly
>> appropriate eternally-fixed-offset zone? Of course not.
[Random832 ]
> No, but the hybrid zone isn't what gets attached to
[Alex]
>>I will try to create a zoneinfo wrapping prototype as well, but I will
>>probably "cheat" and build it on top of pytz.
[Laura Creighton]
> My question, is whether it will handle Creighton, Saskatchewan, Canada?
> Creighton is an odd little place. Like all of Saskatchewan, it is
> in
[Tim]
>> Hi, Laura! By "zoneinfo" here, we mean the IANA (aka "Olson") time
>> zone database, which is ubiquitous on (at least) Linux:
>>
>>https://www.iana.org/time-zones
>>
>>So "will a wrapping of zoneinfo handle XYZ?" isn't so much a question
>>about the wrapping as about what's in the
[Laura]
>>> But I am not sure how it is that a poor soul who just wants to print a
>>> railway schedule 'in local time' is supposed to know that Creighton is
>>> using Winnipeg time.
[Tim]
>> I'm not sure how that poor soul would get a railway schedule
>> manipulable in Python to begin with ;-)
[Tim]
>> Whatever time zone the traveler's railroad schedule uses, so long as
>> it sticks to just one
[Laura]
> This is what does not happen. Which is why I have written a python
> app to perform conversions for my parents, in the past.
So how did they get the right time zone rules for
[Guido]
> Wouldn't it be sufficient for people in Creighton to set their timezone to
> US/Central? IIUC the Canadian DST rules are the same as the US ones. Now,
> the question may remain how do people know what to set their timezone to.
> But neither pytz nor datetime can help with that -- it is
[]
> My context is that I am working on an idea to include utc offsets in
> datetime objects (or on a similar object in a new module), as an
> alternative to something like a "fold" attribute. and since "classic
> arithmetic" is apparently so important,
Love it or hate it,
>>> If there are not, maybe the intended semantics should go
>> > by the wayside and be replaced by what pytz does.
>> Changing anything about default arithmetic behavior is not a
>> possibility. This has been beaten to death multiple times on this
>> mailing list already, and I'm not
> I was trying to find out how arithmetic on aware datetimes is "supposed
> to" work, and tested with pytz. When I posted asking why it behaves this
> way I was told that pytz doesn't behave correctly according to the way
> the API was designed.
You were told (by me) that its implementation of
[Tim]
>> Me too - except I think acceptance of 495 should be contingent upon
>> someone first completing a fully functional (if not releasable)
>> fold-aware zoneinfo wrapping.
[Alex]
> Good idea. How far are you from completing that?
In my head, it was done last week ;-) In real life, I'm
[Guido]
>> Those pytz methods work for any (pytz) timezone -- astimezone() with a
>> default argument only works for the local time zone.
[Alex]
> That's what os.environ['TZ'] = zonename is for. The astimezone() method
> works for every timezone installed on your system. Try it - you won't