New submission from Sebastian Kreft:
ImportError now supports the keyword arguments name and path. However, when
passing invalid keyword arguments, the reported error is misleading, as shown
below.
In [1]: ImportError('lib', name='lib')
Out[1]: ImportError('lib
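The transcript above is truncated; an illustrative reproduction follows (exact error messages vary across Python versions, and `/opt/lib` is a made-up path):

```python
# Valid: 'name' and 'path' are accepted keyword arguments.
e = ImportError('cannot import lib', name='lib', path='/opt/lib')
print(e.name, e.path)        # lib /opt/lib

# Invalid: any other keyword raises TypeError; on 3.4 the message
# misleadingly claimed ImportError takes no keyword arguments at all.
try:
    ImportError('cannot import lib', bogus=1)
    caught = None
except TypeError as exc:
    caught = type(exc).__name__
print(caught)                # TypeError
```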
Sebastian Kreft added the comment:
And what would the new API be?
There is nothing pointing to it either in the documentation
https://docs.python.org/3.4/library/email.header.html or source code.
--
___
Python tracker
<http://bugs.python.
New submission from Sebastian Kreft:
The return type of email.header.decode_header is not consistent. When there are
encoded parts, the return type is a list of (bytes, charset or None) tuples
(note that the documentation says it is a list of (str, charset)). However,
when there are no encoded parts
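The inconsistency can be demonstrated with two one-line inputs, one RFC 2047 encoded and one plain ASCII:

```python
from email.header import decode_header

# Encoded input: fragments come back as (bytes, charset) pairs.
encoded = decode_header('=?utf-8?q?hello?=')
print(encoded)        # [(b'hello', 'utf-8')]

# Plain input: the single fragment is (str, None), not bytes.
plain = decode_header('hello')
print(plain)          # [('hello', None)]
```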
Sebastian Kreft added the comment:
Disregard the last messages; it seems to be a deadlock caused by subprocess.
Sebastian Kreft added the comment:
After more testing I finally found that the process is in fact not being
killed. That means there is no problem with the futures; instead it is
probably related to subprocess deadlocking, as the problematic process does
not consume any CPU.
Sorry
New submission from Sebastian Kreft:
With Python 3.4.1 compiled from source, I'm having an issue in which every now
and then some Futures are not marked as completed even though the underlying
workload is done.
My workload is launching two subprocesses in parallel, and whenever one is
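A minimal sketch of that submit-and-replace pattern (the task body, counts, and names here are placeholders, not the original workload):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_task(i):
    # Toy stand-in for the real workload: run a short subprocess.
    return subprocess.run(
        [sys.executable, '-c', f'print({i})'],
        capture_output=True, text=True).stdout.strip()

tasks = iter(range(6))
results = []
with ThreadPoolExecutor(max_workers=2) as ex:
    # Keep two subprocesses in flight; launch a new one as each finishes.
    pending = {ex.submit(run_task, next(tasks)) for _ in range(2)}
    while pending:
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            results.append(fut.result())
            nxt = next(tasks, None)
            if nxt is not None:
                pending.add(ex.submit(run_task, nxt))
print(sorted(results))
```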
Sebastian Kreft added the comment:
@glangford: Is that really your recommendation, to switch to celery? Python
3.4.1 should be production quality and issues like this should be addressed.
Note that I've successfully run millions of tasks using the same method, the
only difference being
Sebastian Kreft added the comment:
I'm actually running millions of tasks, so sending them all at once would
consume much more resources than needed.
The issue happens not only with 2 tasks in parallel but with higher numbers
as well.
Also, your proposed solution has the problem that whe
Sebastian Kreft added the comment:
Any ideas how to debug this further?
In order to overcome this issue I have an awful workaround that tracks the
maximum running time of a successful task, and if any task has been running
more than x times that maximum I consider it defunct, and increase the
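The workaround described can be sketched roughly as follows (the `Watchdog` class and the factor are illustrative, not the original code):

```python
import time

class Watchdog:
    """Track the longest successful runtime and flag tasks that have been
    running longer than a multiple of it (the factor is arbitrary)."""
    def __init__(self, factor=10.0):
        self.factor = factor
        self.max_runtime = 0.0

    def record_success(self, runtime):
        self.max_runtime = max(self.max_runtime, runtime)

    def is_defunct(self, started_at, now=None):
        if self.max_runtime == 0.0:
            return False          # no baseline yet
        now = time.monotonic() if now is None else now
        return (now - started_at) > self.factor * self.max_runtime

wd = Watchdog(factor=3.0)
wd.record_success(2.0)                           # baseline: 2s, cutoff: 6s
print(wd.is_defunct(started_at=0.0, now=5.0))    # False (5s < 6s)
print(wd.is_defunct(started_at=0.0, now=7.0))    # True  (7s > 6s)
```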
Sebastian Kreft added the comment:
I was able to recreate the issue again, and now I have some info about the
offending futures:
State: RUNNING, Result: None, Exception: None, Waiters: 0, Cancelled: False,
Running: True, Done: False
The information does not seem very relevant. However, I can
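The flags in that dump can be gathered from any future using only public accessors; a small helper (the name `describe` is illustrative):

```python
from concurrent.futures import Future

def describe(fut):
    # Summarise a future's observable state via the public API.
    return {
        'running': fut.running(),
        'done': fut.done(),
        'cancelled': fut.cancelled(),
    }

f = Future()
print(describe(f))   # pending: not running, not done, not cancelled
f.set_result(42)
print(describe(f))   # done
```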
Sebastian Kreft added the comment:
LGTM.
--
___
Python tracker
<http://bugs.python.org/issue21596>
Sebastian Kreft added the comment:
The Executor is still working (but I'm using a ThreadPoolExecutor). I can
dynamically change the number of max tasks allowed, which successfully fires
the new tasks.
After 2 days running, five tasks are in this weird state.
I will change the co
Sebastian Kreft added the comment:
@haypo: I've reproduced the issue with both 2 and 3 processes in parallel.
@glangford: the wait is actually returning after the 15 seconds, although
nothing is reported as finished. So, it's getting stuck in the while loop.
However, I imagine th
Sebastian Kreft added the comment:
I'm using Python 3.4.1 compiled from source and I may be hitting this
issue.
My workload is launching two subprocesses in parallel, and whenever one is ready,
launches another one. In one of the runs, the whole process got stuck after
launchin
New submission from Sebastian Kreft:
Although it is already explained that the default mode of the opened tempfiles
is 'w+b', a warning/notice section should be included to make it clearer.
I think this is important as the default for the open function is to return
strings and not
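The difference in defaults is easy to trip over; a short demonstration:

```python
import tempfile

with tempfile.TemporaryFile() as f:     # default mode is 'w+b' (binary)
    f.write(b'binary data')             # bytes are accepted
    f.seek(0)
    data = f.read()
    try:
        f.write('text')                 # str is rejected in binary mode
        failed = None
    except TypeError as exc:
        failed = type(exc).__name__

print(data, failed)                     # b'binary data' TypeError
```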
Sebastian Kreft added the comment:
I agree that blocking is not ideal; however, there are already some other
methods that can eventually block forever, and for such cases a timeout is
provided. A similar approach could be used here.
I think this method should retry until it can actually access
New submission from Sebastian Kreft:
Passing an empty list/set of futures to asyncio.wait raises an Exception, which
is a little annoying in some use cases.
Probably this was the intended behavior, as I see there's a test case for it.
If so, then I would propose to document that beh
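The behavior in question can be reproduced in a few lines (the exact exception message varies by version):

```python
import asyncio

async def main():
    try:
        await asyncio.wait([])       # empty set of awaitables
    except ValueError as exc:
        return str(exc)
    return None

msg = asyncio.run(main())
print(msg)                           # e.g. "Set of ... is empty."
```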
New submission from Sebastian Kreft:
Using asyncio.create_subprocess_exec generates lots of internal error
messages. These messages are:
Exception ignored when trying to write to the signal wakeup fd:
BlockingIOError: [Errno 11] Resource temporarily unavailable
Getting the messages
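A minimal create_subprocess_exec call looks like this; the report presumably involves many of these running concurrently, which is when the wakeup-fd errors would appear:

```python
import asyncio
import sys

async def run_one():
    # Spawn a short-lived child and collect its stdout.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-c', "print('ok')",
        stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    return out.decode().strip(), proc.returncode

result = asyncio.run(run_one())
print(result)
```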
New submission from Sebastian Kreft:
In some cases asyncio.create_subprocess_exec raises an OSError because there
are no file descriptors available.
I don't know if that is expected, but IMO it would be better to just
block until the required number of fds is available. Othe
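One way to get that blocking behavior today, sketched under the assumption that bounding concurrency is acceptable, is a semaphore around each spawn (the cap of 4 is arbitrary):

```python
import asyncio
import sys

async def main():
    sem = asyncio.Semaphore(4)       # cap on concurrent subprocesses

    async def run_limited(i):
        # Callers wait (asynchronously) here until a slot frees up,
        # so fds are never requested faster than they are released.
        async with sem:
            proc = await asyncio.create_subprocess_exec(
                sys.executable, '-c', f'print({i})',
                stdout=asyncio.subprocess.PIPE)
            out, _ = await proc.communicate()
            return int(out)

    return await asyncio.gather(*(run_limited(i) for i in range(8)))

results = asyncio.run(main())
print(sorted(results))
```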
Sebastian Kreft added the comment:
The docs don't say anything about it. However, the code is there (probably a
docs bug).
See the following lines in glob.py:
57     if pattern[0] != '.':
58         names = [x for x in names if x[0] != '.']
59     return fnmatch.filter(names, pattern)
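The effect of that filtering is observable from the public API: patterns that don't start with '.' never match hidden files. A self-contained demonstration:

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ('visible.txt', '.hidden.txt'):
        open(os.path.join(d, name), 'w').close()
    # '*' skips the dotfile; '.*' is needed to match it.
    matched = [os.path.basename(p)
               for p in glob.glob(os.path.join(d, '*'))]
    dot_matched = [os.path.basename(p)
                   for p in glob.glob(os.path.join(d, '.*'))]

print(matched)       # ['visible.txt']
print(dot_matched)   # ['.hidden.txt']
```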
New submission from Sebastian Kreft:
Please find attached a patch to improve the test cases for the glob module. It
adds test cases for files starting with '.'.
--
components: Tests
files: python.patch
keywords: patch
messages: 177345
nosy: Sebastian.Kreft
priority: normal