Vajrasky Kok added the comment:
I isolated the bug. It happens in these lines:
# Always try reading and writing directly on the tty first.
fd = os.open('/dev/tty', os.O_RDWR|os.O_NOCTTY)
tty = os.fdopen(fd, 'w+', 1)
To reproduce the bug more specifically, you can try
Vajrasky Kok added the comment:
I have investigated this problem and come up with a patch to fix it.
This patch does the job. Caution: it is only for Python 3.4, but translating the
patch to Python 3.3 should be straightforward.
I hope this patch could be the foundation for better
Vajrasky Kok added the comment:
Sorry,
My previous patch breaks the test. This one should pass the test and fix the
bug.
Still, there is some ugly code in the patch that I hope better programmers can
fix.
--
Added file:
New submission from Gavan Schneider:
There is a missing symlink.
Context:
Installed package:
http://www.python.org/ftp/python/3.3.2/python-3.3.2-macosx10.6.dmg
with no apparent problems onto a 'clean' system, i.e., no other python packages
other than OS X 10.8.3 defaults.
Found the following
Changes by Gavan Schneider pythonbug-...@snkmail.com:
--
title: Missing symlink:Currnet after Mac OS X 3.3.2 package installation ->
Missing symlink:Current after Mac OS X 3.3.2 package installation
___
Python tracker rep...@bugs.python.org
Lukas Lueg added the comment:
I was investigating a callgrind dump of my code, showing how badly
unicode_hash() was affecting my performance. Using google's cityhash instead
of the builtin algorithm to hash unicode objects improves overall performance
by about 15 to 20 percent for my case -
Antoine Pitrou added the comment:
> I was investigating a callgrind dump of my code, showing how badly
> unicode_hash() was affecting my performance.
Can you tell us about your use case?
There are several CityHash variants, which one did you use? CityHash64?
--
Raymond Hettinger added the comment:
I'm -1 on expanding this API further. It already is pushing the limits with
the dual signature and with the key-function.
Many languages have min/max functions. AFAICT, none of them have an API with a
default argument. This suggests that this isn't an
Lukas Lueg added the comment:
It's a cache sitting between an Informix db and an internal web service.
Stuff comes out of the db, gets processed, JSON'ified, cached and put on the
wire. 10**6s of strings pass through this process per request if uncached...
I use CityHash64WithSeed, the seed being cpython's
New submission from helmut:
Consider the test case below.
#!/usr/bin/python
# -*- encoding: utf8 -*-
import curses

def wrapped(screen):
    screen.addstr(0, 0, 'ä')
    screen.addstr(0, 1, 'ö')
    screen.addstr(0, 2, 'ü')
    screen.getch()

if __name__ == '__main__':
    curses.wrapper(wrapped)
New submission from Shuhei Takahashi:
When urllib.FancyURLopener encounters a 302 redirection to a URL with a
fragment, it sends the wrong URL to the server.
For example, if we run:
urllib.urlopen('http://example.com/foo')
and the server responds like the following:
HTTP/1.1 302 Found
Location:
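As a hedged illustration of the underlying rule (this is not the actual fix applied to urllib), a fragment must be stripped from a redirect target before it is re-requested; `urldefrag` from Python 3's `urllib.parse` does exactly this split. The URL below is a hypothetical redirect target:

```python
from urllib.parse import urldefrag

# Hypothetical Location header value containing a fragment.
location = 'http://example.com/bar#section-2'
url, fragment = urldefrag(location)
# Only the defragmented URL may be sent to the server;
# the fragment is client-side state.
assert url == 'http://example.com/bar'
assert fragment == 'section-2'
```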
R. David Murray added the comment:
That's a good point about the __lt__. It occurred to me as well just before I
read your post :).
Raymond, do any other languages have an iterator protocol as a core language
feature? It's the fact that it is in Python, and that it is not simple to LBYL
R. David Murray added the comment:
Oh, and I don't think Haskell counts, since you'd expect them to stick strictly
to the mathematical definition, with no consideration of practicality :)
Note that I'm not saying I'm +1 on adding this (I haven't decided), I'm just
trying to clarify the
R. David Murray added the comment:
I believe this is one of a class of bugs that are fixed in Python3, and that
are unlikely to be fixed in Python2. I'll defer to Victor, though, who made a
number of curses unicode fixes in Python3.
--
nosy: +haypo, r.david.murray
title: curses utf8
Stefan Krah added the comment:
I'd use foldl() in functional languages, where the default is part
of foldl() and not of max().
Translated to Python, I'm thinking of:
>>> import functools
>>> it = iter([328, 28, 2989, 22])
>>> functools.reduce(max, it, next(it, None))
2989
I agree with Raymond that a default arg in
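The whole point of a default is the empty-iterable case; a minimal self-contained version of Stefan's snippet (the empty case is my addition, not from his comment):

```python
import functools

it = iter([328, 28, 2989, 22])
# next(it, None) supplies the initializer, playing the role of a default.
assert functools.reduce(max, it, next(it, None)) == 2989

# With an empty iterable, the default simply falls through.
empty = iter([])
assert functools.reduce(max, empty, next(empty, None)) is None
```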
Matthias Klose added the comment:
What's the status on this one? Can the proposed patch be applied until it is
decided whether or not to back out the original change?
--
nosy: +doko, georg.brandl, larry
priority: normal -> release blocker
Serhiy Storchaka added the comment:
I'm working on tests. No need to rush.
--
stage: patch review -> test needed
http://bugs.python.org/issue17998
Shriramana Sharma added the comment:
I came upon this too. In Python 2 it used to expect a one-character string.
Apparently the same error message has been carried forward to Python 3 too,
though now the actual expected input is either a one-character bytes object
(not a str) or an int
Lukas Lueg added the comment:
Here are some benchmarks for an arm7l on an rk30-board. CityHash was compiled
with -mcpu=native -O3.
CityHash is around half as fast as the native algorithm for small strings and
way, way slower on larger ones. My guess would be that the complex arithmetic
in
Antoine Pitrou added the comment:
> Here are some benchmarks for an arm7l on an rk30-board. CityHash was
> compiled with -mcpu=native -O3.
The results look unbelievable. If you take Length 10 ** 4, it means
arm7l is able to hash 20 GB/s using the default unicode hash function.
(did you disable
Ned Deily added the comment:
That behavior of the OS X installer is by design. Currently, the Current link
is only set for Python 2 installations, not Python 3 ones. While that may have
made sense in the early days of Python 3 (assuming there would be mixed
installations of both Python 3 and
Lukas Lueg added the comment:
The 10**4 case is an error (see the insane %); I've never been able to
reproduce it. Having done more tests with fixed cpu frequency and other
daemons' process priority reduced, CityHash always comes out much slower on
arm7l.
--
New submission from spresse1:
[Code demonstrating issue attached]
When overloading multiprocessing.Process and using pipes, a reference to a pipe
spawned in the parent is not properly garbage collected in the child. This
causes the write end of the pipe to be held open with no reference to
Changes by Matthias Lee matthias.a@gmail.com:
--
nosy: +madmaze
http://bugs.python.org/issue18120
spresse1 added the comment:
Now also tested with source-built python 3.3.2. Issue still exists, same
example files.
--
http://bugs.python.org/issue18120
New submission from Christian Heimes:
$ ./python
Python 3.4.0a0 (default:801567d6302c+, May 23 2013, 14:22:00)
[GCC 4.7.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import gc
>>> gc.set_debug(gc.DEBUG_UNCOLLECTABLE)
>>> import antigravity
Fontconfig warning:
Roundup Robot added the comment:
New changeset e9d0fb934b46 by Senthil Kumaran in branch '2.7':
Fix #17967 - Fix related to regression on Windows.
http://hg.python.org/cpython/rev/e9d0fb934b46
New changeset f5906026a7e9 by Senthil Kumaran in branch '3.3':
Fix #17967 - Fix related to regression
Changes by Senthil Kumaran sent...@uthcode.com:
--
status: open -> closed
http://bugs.python.org/issue17967
Richard Oudkerk added the comment:
The way to deal with this is to pass the write end of the pipe to the child
process so that the child process can explicitly close it -- there is no reason
to expect garbage collection to make this happen automatically.
You don't explain the difference
spresse1 added the comment:
The difference is that nonfunctional.py does not pass the write end of the
parent's pipe to the child. functional.py does, and closes it immediately
after breaking into a new process. This is what you mentioned to me as a
workaround. Corrected code (for
New submission from Armin Rigo:
A new bug, introduced in recent Python 2.7 (2.7.3 passes, 2.7 trunk fails):
With the attached x.py, running python -c 'import x' fails with RuntimeError:
not holding the import lock.
It occurs when doing a fork() while holding the import lock, if the child
New submission from anatoly techtonik:
http://docs.python.org/2/library/glob.html
and
http://docs.python.org/2/library/fnmatch.html
both lack the ability to do a case-insensitive search for filenames. Due to
this difference, scripts that work fine on Windows start producing surprises
on Linux.
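A hedged workaround sketch (the helper name is mine): fnmatch.translate() produces a regular expression, which can be compiled with re.IGNORECASE to get case-insensitive matching on any platform:

```python
import fnmatch
import re

def match_nocase(names, pattern):
    # Compile the fnmatch pattern once, ignoring case explicitly;
    # plain fnmatch.fnmatch() is case-sensitive on POSIX.
    rx = re.compile(fnmatch.translate(pattern), re.IGNORECASE)
    return [name for name in names if rx.match(name)]

assert match_nocase(['README.TXT', 'notes.txt', 'setup.py'], '*.txt') == \
    ['README.TXT', 'notes.txt']
```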
STINNER Victor added the comment:
Is your Python curses module linked to libncurses.so.5 or libncursesw.so.5?
Example:
$ ldd /usr/lib/python2.7/lib-dynload/_cursesmodule.so | grep curses
libncursesw.so.5 => /lib/libncursesw.so.5 (0x00375000)
libncursesw has much better support of
helmut added the comment:
All reproducers confirmed that their _cursessomething.so is linked against
libncursesw.so.5.
--
http://bugs.python.org/issue18118
Changes by Amaury Forgeot d'Arc amaur...@gmail.com:
--
nosy: +pitrou
http://bugs.python.org/issue18122
Richard Oudkerk added the comment:
> The write end of that pipe goes out of scope and has no references in the
> child process. Therefore, per my understanding, it should be garbage
> collected (in the child process). Where am I wrong about this?
The function which starts the child process by
STINNER Victor added the comment:
u'äöü' encoded to utf-8 gives '\xc3\xa4\xc3\xb6\xc3\xbc'.
\303\303\303\274 is '\xc3\xc3\xc3\xbc'.
I guess that curses considers that '\xc3\xa4' is a string of 2 characters:
screen.addstr(0, 1, 'ö') replaces the second character, '\xa4'.
I suppose that
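The byte-level claim is easy to check; a minimal sketch:

```python
s = 'äöü'
b = s.encode('utf-8')
# Each character becomes two bytes, so curses handed the raw bytes sees
# six byte-sized "characters" rather than three.
assert b == b'\xc3\xa4\xc3\xb6\xc3\xbc'
assert len(s) == 3 and len(b) == 6
```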
spresse1 added the comment:
So you're telling me that when I spawn a new child process, I have to deal with
the entirety of my parent process's memory staying around forever? I would
have expected this to call fork(), which gives the child plenty of chance to
clean up, then call exec()
Gavan Schneider added the comment:
Appreciate the comment about potential problems with mixed installations of
python3 and python2. And note that, along these lines, there is no attempt by
the installer to symlink python -> python3 (which could have nasty side effects
if the full path was not
Richard Oudkerk added the comment:
> So you're telling me that when I spawn a new child process, I have to
> deal with the entirety of my parent process's memory staying around
> forever?
With a copy-on-write implementation of fork() this is quite likely to use less
memory than starting a fresh
Changes by Giampaolo Rodola' g.rod...@gmail.com:
--
nosy: +giampaolo.rodola
http://bugs.python.org/issue18123
Richard Oudkerk added the comment:
Presumably this is caused by the fact that Popen.__del__() resurrects self by
appending self to _active if the process is still alive.
On Windows this is unnecessary. On Unix it would be more sensible to just
append the *pid* to _active.
--
nosy:
R. David Murray added the comment:
See also issue 5993.
--
nosy: +r.david.murray
http://bugs.python.org/issue18121
spresse1 added the comment:
>> So you're telling me that when I spawn a new child process, I have to
>> deal with the entirety of my parent process's memory staying around
>> forever?
> With a copy-on-write implementation of fork() this is quite likely to use
> less memory than starting a fresh process
anatoly techtonik added the comment:
https://gist.github.com/techtonik/5694830
--
http://bugs.python.org/issue18123
Richard Oudkerk added the comment:
> What I'm still trying to grasp is why Python explicitly leaves the
> parent process's info around in the child. It seems like there is
> no benefit (besides, perhaps, speed) and that this choice leads to
> non-intuitive behavior - like this.
The Windows
spresse1 added the comment:
I'm actually a *nix programmer by trade, so I'm pretty familiar with that
behavior =p However, I'm also used to inheriting some way to refer to these
fds so that I can close them. Perhaps I've just missed a call somewhere to
ask the process for a list of open
Terry J. Reedy added the comment:
The patch does two things.
1. It replaces the existing direct rebinding of messagebox functions as
methods, such as
self.showerror = tkMessageBox.showerror
with binding of a double wrapping of the functions. The middle layer is useless
and only serves to
Raymond Hettinger added the comment:
Thanks Stefan. I'm going to close this one. The case for adding it is too
weak and isn't worth making the API more complex.
If someone wants a default with an iterable of arbitrary size including zero,
there are already a number of ways to do it (using
Julian Berman added the comment:
I don't really care to push this much harder, but I'll just repeat that I've
already made an argument against catching the exception. Calling this making
the API too complex also seems quite silly to me. It's a thing that someone
looking for would find and
New submission from Dmi Baranov:
As a part of issue #18109
$ echo hât | sudo tee /proc/sys/kernel/hostname
$ hostname #Yes, I know about RFC952;-)
hât
$ locale
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE=en_US.UTF-8
LC_NUMERIC=en_US.UTF-8
LC_TIME=en_US.UTF-8
LC_COLLATE=en_US.UTF-8
Dmi Baranov added the comment:
Thanks Charles - I reproduced Dominik's issue on the default branch:
$ python -c 'import os, sys;print(sys.version);print(os.uname())'
3.4.0a0 (default:adfec512fb32, Jun 3 2013, 08:09:43)
[GCC 4.6.3]
Traceback (most recent call last):
  File "<string>", line 1, in <module>
Dmi Baranov added the comment:
Looks like old history from issue 7242
--
nosy: +dmi.baranov, gregory.p.smith
http://bugs.python.org/issue18122