[issue25142] Misleading error when initing ImportError

2015-09-16 Thread Sebastian Kreft

New submission from Sebastian Kreft:

ImportError now supports the keyword arguments name and path. However, when 
passing invalid keyword arguments, the reported error is misleading, as shown 
below.

In [1]: ImportError('lib', name='lib')
Out[1]: ImportError('lib')

In [2]: ImportError('lib', name='lib', foo='foo')
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-2-...> in <module>()
----> 1 ImportError('lib', name='lib', foo='foo')

TypeError: ImportError does not take keyword arguments
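For reference, name and path do land on the corresponding attributes; it is
only an unknown keyword that triggers the blanket message (a quick check,
assuming CPython 3.3+ semantics; the path value is made up):

e = ImportError('lib', name='lib', path='/tmp/lib.py')
print(e.name, e.path)  # lib /tmp/lib.py

# An unknown keyword fails with the same blanket message, which is what
# makes it misleading -- keyword arguments clearly are accepted:
ImportError('lib', foo='foo')
# TypeError: ImportError does not take keyword arguments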

--
messages: 250850
nosy: Sebastian Kreft
priority: normal
severity: normal
status: open
title: Misleading error when initing ImportError
versions: Python 3.4, Python 3.5, Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25142>
___



[issue24797] email.header.decode_header return type is not consistent

2015-08-17 Thread Sebastian Kreft

Sebastian Kreft added the comment:

And what would the new API be?

There is nothing pointing to it in either the documentation
(https://docs.python.org/3.4/library/email.header.html) or the source code.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue24797>
___



[issue24797] email.header.decode_header return type is not consistent

2015-08-05 Thread Sebastian Kreft

New submission from Sebastian Kreft:

The return type of email.header.decode_header is not consistent. When there
are encoded parts, the return type is a list of (bytes, charset or None)
pairs (note that the documentation says it is a list of (str, charset)
pairs). However, when there are no encoded parts, the return type is
[(str, None)], even though, at the end of the function, there is a routine
that converts everything to bytes.

Compare:
In [01]: email.header.decode_header('=?UTF-8?Q?foo?=bar')
Out[01]: [(b'foo', 'utf-8'), (b'bar', None)]

In [02]: email.header.decode_header('foobar')
Out[02]: [('foobar', None)]
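Until this is settled, callers have to normalize the result by hand. A
minimal sketch (decode_header_bytes is a hypothetical helper, assuming only
the two shapes shown above ever occur):

import email.header

def decode_header_bytes(value):
    # Coerce every fragment to (bytes, charset), the documented shape.
    parts = []
    for data, charset in email.header.decode_header(value):
        if isinstance(data, str):
            # The unencoded-only case; surrogateescape round-trips any
            # smuggled raw bytes, mirroring what the email package does.
            data = data.encode('ascii', 'surrogateescape')
        parts.append((data, charset))
    return parts

decode_header_bytes('=?UTF-8?Q?foo?=bar')  # [(b'foo', 'utf-8'), (b'bar', None)]
decode_header_bytes('foobar')              # [(b'foobar', None)]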

--
messages: 248047
nosy: Sebastian Kreft
priority: normal
severity: normal
status: open
title: email.header.decode_header return type is not consistent
versions: Python 3.4, Python 3.5, Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue24797>
___



[issue21899] Futures are not marked as completed

2014-07-16 Thread Sebastian Kreft

Sebastian Kreft added the comment:

After more testing I finally found that the process is in fact not being
killed. That means there is no problem with the futures; instead it is
probably related to subprocess deadlocking, as the problematic process does
not consume any CPU.

Sorry for the false report.

--
nosy:  -bquinlan
status: open -> closed

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21899>
___



[issue20319] concurrent.futures.wait() can block forever even if Futures have completed

2014-07-16 Thread Sebastian Kreft

Sebastian Kreft added the comment:

Disregard the last messages; it seems to be a deadlock due to subprocess.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20319>
___



[issue21899] Futures are not marked as completed

2014-07-01 Thread Sebastian Kreft

New submission from Sebastian Kreft:

With Python 3.4.1 compiled from source, I'm having an issue in which every now 
and then some Futures are not marked as completed even though the underlying 
workload is done.

My workload launches two subprocesses in parallel and, whenever one is
ready, launches another one. In one of the runs, the whole process got stuck
after launching about 3K subprocesses, even though the underlying processes
had in fact finished.

To wait for the finished subprocesses, I'm using FIRST_COMPLETED. Below is the 
core of my workload:

for element in element_generator:
    while len(running) >= max_tasks:
        done, pending = concurrent.futures.wait(
            running, timeout=15.0,
            return_when=concurrent.futures.FIRST_COMPLETED)
        process_results(done)
        running = pending

    running.add(executor.submit(exe_subprocess, element))

Replicating the issue takes time, but I've been able to successfully reproduce 
it with 2 and 3 processes in parallel.
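For reference, a self-contained version of the loop above (exe_subprocess,
process_results and the element list here are stand-ins for the real
workload, and du is just a convenient child process):

import concurrent.futures
import subprocess

def exe_subprocess(element):
    # Stand-in workload: run one child process, return its exit status.
    return subprocess.call(['du', '-s', element])

def process_results(done):
    for future in done:
        print('exit status:', future.result())

max_tasks = 2
running = set()
with concurrent.futures.ThreadPoolExecutor(max_workers=max_tasks) as executor:
    for element in ['/usr', '/var', '/etc', '/tmp']:
        while len(running) >= max_tasks:
            done, pending = concurrent.futures.wait(
                running, timeout=15.0,
                return_when=concurrent.futures.FIRST_COMPLETED)
            process_results(done)
            running = pending
        running.add(executor.submit(exe_subprocess, element))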

Note: this was already posted in comments to http://bugs.python.org/issue20319;
however, it did not receive proper attention there, as that issue is already
closed.

As suggested there, I printed the status of the never-finished futures; this
is the result:

State: RUNNING, Result: None, Exception: None, Waiters: 0, Cancelled: False, 
Running: True, Done: False

The information does not seem very relevant. However, I can attach a console 
and debug from there.

--
messages: 222034
nosy: Sebastian.Kreft.Deezer
priority: normal
severity: normal
status: open
title: Futures are not marked as completed

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21899>
___



[issue20319] concurrent.futures.wait() can block forever even if Futures have completed

2014-06-18 Thread Sebastian Kreft

Sebastian Kreft added the comment:

@glangford: Is that really your recommendation, to switch to Celery? Python
3.4.1 should be production quality, and issues like this should be
addressed.

Note that I've successfully run millions of tasks using the same method, the 
only difference being that in that case the tasks weren't launching 
subprocesses. So I think that may be a starting point for debugging.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20319>
___



[issue20319] concurrent.futures.wait() can block forever even if Futures have completed

2014-06-16 Thread Sebastian Kreft

Sebastian Kreft added the comment:

Any ideas how to debug this further?

To overcome this issue I have an awful workaround that tracks the maximum
running time of a successful task; if any task has been running for more
than x times that maximum, I consider it defunct and increase the number of
concurrently allowed tasks. However, if the problem persists, I will
eventually have a lot of zombie tasks, which will expose additional
problems.
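In rough outline, the workaround looks like this (a sketch; the names and
the factor are made up):

import time

DEFUNCT_FACTOR = 10        # the hypothetical "x" above
max_success_time = 0.0     # longest runtime of any task that did finish
started = {}               # future -> submission time (time.monotonic())

def effective_max_tasks(base_max, running):
    # A future that has run far longer than every successful task is
    # presumed defunct; each one occupies a slot forever (a zombie),
    # so widen the concurrency window by that amount.
    now = time.monotonic()
    defunct = [f for f in running
               if max_success_time
               and now - started[f] > DEFUNCT_FACTOR * max_success_time]
    return base_max + len(defunct)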

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20319>
___



[issue20319] concurrent.futures.wait() can block forever even if Futures have completed

2014-06-16 Thread Sebastian Kreft

Sebastian Kreft added the comment:

I'm actually running millions of tasks, so sending them all at once would
consume far more resources than needed.

The issue happens not only with 2 tasks in parallel but with higher numbers
as well.

Also, your proposed solution has the problem that while you are waiting for
the last tasks of a block to finish you lose some parallelism.

In any case, it seems to me that there's some kind of race condition
preventing the task from finishing, so if that's true the same could happen
with as_completed.

On Jun 16, 2014 2:00 PM, Glenn Langford <rep...@bugs.python.org> wrote:


 Glenn Langford added the comment:

  Any ideas how to debug this further?

 Wherever the cause of the problem might live, and to either work around it
 or gain additional information, here is one idea to consider.

 Do you need to submit your Futures just two at a time, and tightly loop
 every 15s? Why not submit a block of a larger number and wait for the block
 with as_completed(), logging each completion? Then submit another block
 when they are all done. To control how many run at one time, create the
 Executor with max_workers=2, for example. (I had an app that ran > 1,000
 futures in this way, which worked fine.)

 In general I suggest timing out only when there is really a problem, not
 as an expected event.
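 Something along these lines (a sketch; run_task, elements and the block
 size are placeholders):

 import concurrent.futures

 elements = list(range(1000))       # placeholder workload items

 def run_task(element):             # placeholder task
     return element * 2

 def chunks(seq, size):
     for i in range(0, len(seq), size):
         yield seq[i:i + size]

 with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
     for block in chunks(elements, 100):
         futures = [executor.submit(run_task, e) for e in block]
         for future in concurrent.futures.as_completed(futures):
             print('completed:', future.result())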



--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20319>
___



[issue20319] concurrent.futures.wait() can block forever even if Futures have completed

2014-06-10 Thread Sebastian Kreft

Sebastian Kreft added the comment:

I was able to recreate the issue again, and now I have some info about the
offending futures:

State: RUNNING, Result: None, Exception: None, Waiters: 0, Cancelled: False, 
Running: True, Done: False

The information does not seem very relevant. However, I can attach a console 
and debug from there.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20319>
___



[issue21596] asyncio.wait fails when futures list is empty

2014-06-06 Thread Sebastian Kreft

Sebastian Kreft added the comment:

LGTM.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21596>
___



[issue20319] concurrent.futures.wait() can block forever even if Futures have completed

2014-06-05 Thread Sebastian Kreft

Sebastian Kreft added the comment:

The Executor is still working (but I'm using a ThreadPoolExecutor). I can
dynamically change the maximum number of tasks allowed, which successfully
fires the new tasks.

After 2 days running, five tasks are in this weird state.

I will change the code as suggested and post my results.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20319>
___



[issue20319] concurrent.futures.wait() can block forever even if Futures have completed

2014-06-04 Thread Sebastian Kreft

Sebastian Kreft added the comment:

@haypo: I've reproduced the issue with both 2 and 3 processes in parallel.

@glangford: the wait is actually returning after the 15 seconds, although
nothing is reported as finished, so it's getting stuck in the while loop.
However, I imagine that without a timeout, the call would block forever.

What kind of debug information from the futures would be useful?

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20319>
___



[issue20319] concurrent.futures.wait() can block forever even if Futures have completed

2014-06-03 Thread Sebastian Kreft

Sebastian Kreft added the comment:

I'm using Python 3.4.1 compiled from source and I may be hitting this
issue.

My workload launches two subprocesses in parallel and, whenever one is
ready, launches another one. In one of the runs, the whole process got stuck
after launching about 3K subprocesses, even though the underlying processes
had in fact finished.

To wait for the finished subprocesses, I'm using FIRST_COMPLETED. Below is the 
core of my workload:

for element in element_generator:
    while len(running) >= max_tasks:
        done, pending = concurrent.futures.wait(
            running, timeout=15.0,
            return_when=concurrent.futures.FIRST_COMPLETED)
        process_results(done)
        running = pending

    running.add(executor.submit(exe_subprocess, element))

I don't really know the best way to reproduce this, as I've run the same
workload with different executables, more concurrency and faster response
times, and I haven't seen the issue.

--
nosy: +Sebastian.Kreft.Deezer

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20319>
___



[issue21594] asyncio.create_subprocess_exec raises OSError

2014-06-02 Thread Sebastian Kreft

Sebastian Kreft added the comment:

I agree that blocking is not ideal; however, there are already some other
methods that can potentially block forever, and for such cases a timeout is
provided. A similar approach could be used here.

I think this method should retry until it can actually acquire the
resources, because knowing when and how many file descriptors are going to
be used is very implementation dependent. Handling the retry logic on the
application side would probably be very inefficient, as a lot of information
is missing and the subprocess mechanism is a black box.
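A retry wrapper of roughly this shape is what every application would
otherwise have to carry around (a sketch in 3.4-era asyncio style;
create_subprocess_retry and its backoff policy are invented):

import asyncio

@asyncio.coroutine
def create_subprocess_retry(*args, retries=5, delay=0.5, **kwargs):
    # Retry on fd exhaustion instead of surfacing OSError to the caller.
    for attempt in range(retries):
        try:
            proc = yield from asyncio.create_subprocess_exec(*args, **kwargs)
            return proc
        except OSError:
            yield from asyncio.sleep(delay * (attempt + 1))
    raise OSError('could not spawn subprocess after %d attempts' % retries)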

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21594>
___



[issue21637] Add a warning section explaining that tempfiles are opened in binary mode

2014-06-02 Thread Sebastian Kreft

New submission from Sebastian Kreft:

Although it is already explained that the default mode of the opened
tempfiles is 'w+b', a warning/notice section should be included to make it
clearer.

I think this is important, as the default for the open() function is to
return strings and not bytes.

I just had to debug an error with traceback, as traceback.print_exc expects a 
file capable of handling unicode.
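The trap fits in a few lines (a minimal sketch; the exact TypeError wording
varies by version):

import tempfile
import traceback

with tempfile.TemporaryFile() as f:        # mode defaults to 'w+b'
    try:
        1 / 0
    except ZeroDivisionError:
        # Raises TypeError: print_exc() writes str, but f wants bytes.
        # tempfile.TemporaryFile(mode='w+') would work.
        traceback.print_exc(file=f)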

--
assignee: docs@python
components: Documentation
messages: 219585
nosy: Sebastian.Kreft.Deezer, docs@python
priority: normal
severity: normal
status: open
title: Add a warning section explaining that tempfiles are opened in binary
mode
versions: Python 3.4

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21637>
___



[issue21594] asyncio.create_subprocess_exec raises OSError

2014-05-28 Thread Sebastian Kreft

New submission from Sebastian Kreft:

In some cases asyncio.create_subprocess_exec raises an OSError because there 
are no file descriptors available.

I don't know if that is expected, but IMO it would be better to just block
until the required number of fds is available. Otherwise one would need to
handle this case oneself, which is not a trivial task.

This issue is happening on Debian 7, with a 3.2.0-4-amd64 kernel and Python
3.4.1 compiled from source.

--
messages: 219285
nosy: Sebastian.Kreft.Deezer
priority: normal
severity: normal
status: open
title: asyncio.create_subprocess_exec raises OSError
versions: Python 3.4

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21594>
___



[issue21595] Creating many subprocess generates lots of internal BlockingIOError

2014-05-28 Thread Sebastian Kreft

New submission from Sebastian Kreft:

Using asyncio.create_subprocess_exec generates lots of internal error
messages:

Exception ignored when trying to write to the signal wakeup fd:
BlockingIOError: [Errno 11] Resource temporarily unavailable

Whether the messages appear depends on how many subprocesses are active at
the same time. On my system (Debian 7, kernel 3.2.0-4-amd64, Python 3.4.1),
with 3 or fewer processes at a time I don't see any problem, but with 4 or
more I get a lot of messages.

On the other hand, these error messages seem to be innocuous, as no exception 
seems to be raised.

Attached is a test script that shows the problem.

It is run as:
bin/python3.4 test_subprocess_error.py MAX_PROCESSES ITERATIONS

It requires the du command to be available.


Let me know if there are any (conceptual) mistakes in the attached code.
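The attachment itself is not reproduced here; purely as an illustration, a
script of the shape described might look like this (a hypothetical
reconstruction in 3.4-era asyncio style, assuming 3.4.2+ for
loop.create_task, not the actual attachment):

import asyncio
import sys

@asyncio.coroutine
def run_du(path):
    # One unit of work: spawn du and wait for it to finish.
    proc = yield from asyncio.create_subprocess_exec(
        'du', '-s', path, stdout=asyncio.subprocess.PIPE)
    yield from proc.communicate()

@asyncio.coroutine
def main(loop, max_processes, iterations):
    pending = set()
    for _ in range(iterations):
        if len(pending) >= max_processes:
            _, pending = yield from asyncio.wait(
                pending, return_when=asyncio.FIRST_COMPLETED)
        pending.add(loop.create_task(run_du('/usr')))
    if pending:
        yield from asyncio.wait(pending)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop, int(sys.argv[1]), int(sys.argv[2])))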

--
files: test_subprocess_error.py
messages: 219288
nosy: Sebastian.Kreft.Deezer
priority: normal
severity: normal
status: open
title: Creating many subprocess generates lots of internal BlockingIOError
versions: Python 3.4
Added file: http://bugs.python.org/file35385/test_subprocess_error.py

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21595>
___



[issue21596] asyncio.wait fails when futures list is empty

2014-05-28 Thread Sebastian Kreft

New submission from Sebastian Kreft:

Passing an empty list/set of futures to asyncio.wait raises an exception,
which is a little annoying in some use cases.

This was probably the intended behavior, as I see there's a test case for
it. If so, I would propose documenting that behavior.
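Reproducing it is a one-liner (the ValueError text below is what 3.4
raises):

import asyncio

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait([]))
# ValueError: Set of coroutines/Futures is empty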

--
assignee: docs@python
components: Documentation
messages: 219290
nosy: Sebastian.Kreft.Deezer, docs@python
priority: normal
severity: normal
status: open
title: asyncio.wait fails when futures list is empty
versions: Python 3.4

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21596>
___



[issue16664] [PATCH] Test Glob: files starting with .

2012-12-13 Thread Sebastian Kreft

Sebastian Kreft added the comment:

The docs don't say anything about it; however, the code is there (probably a
docs bug).

See the following lines in glob.py:

if pattern[0] != '.':
    names = [x for x in names if x[0] != '.']
return fnmatch.filter(names, pattern)

The documentation is even harder to follow.

The glob docs say:
The pattern may contain simple shell-style wildcards a la fnmatch.

but the fnmatch docs say:
Similarly, filenames starting with a period are not special for this module, 
and are matched by the * and ? patterns.


The POSIX standard states that if a filename begins with a period ('.'), the
period shall be explicitly matched
(http://pubs.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_13_03).
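The difference is easy to see in a directory containing '.hidden' and
'visible.txt' (hypothetical filenames):

import fnmatch
import glob

names = ['.hidden', 'visible.txt']

# fnmatch treats the leading dot as an ordinary character:
fnmatch.filter(names, '*')    # ['.hidden', 'visible.txt']

# glob, although its docs point at fnmatch, hides dotfiles unless the
# pattern itself starts with '.':
glob.glob('*')                # ['visible.txt']
glob.glob('.*')               # ['.hidden']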

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue16664>
___



[issue16664] [PATCH] Test Glob: files starting with .

2012-12-11 Thread Sebastian Kreft

New submission from Sebastian Kreft:

Please find attached a patch to improve the test cases for the glob module. It 
adds test cases for files starting with '.'.

--
components: Tests
files: python.patch
keywords: patch
messages: 177345
nosy: Sebastian.Kreft
priority: normal
severity: normal
status: open
title: [PATCH] Test Glob: files starting with .
versions: Python 3.4, Python 3.5
Added file: http://bugs.python.org/file28281/python.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue16664>
___