[issue40249] __import__ doesn't honour globals

2020-04-14 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

I'm not entirely sure, but have to admit that the sentence

"The function imports the module name, potentially using the given globals and 
locals to determine how to interpret the name in a package context."

is a bit obscure. What does "determine how to interpret the name" actually
mean? Is the algorithm described anywhere in detail? If so, a simple
reference might be enough.
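For what it's worth, here is my current reading as a minimal sketch (the package name `pkg` and submodule `pkg.sub` are hypothetical): with a non-zero `level`, `__import__` consults `__package__` (or `__name__`) in the supplied globals to resolve a relative name, and that appears to be the extent of "using the given globals".

```
# Sketch only -- assumes an importable package 'pkg' with a submodule 'pkg.sub'.
# With level > 0, __import__ reads __package__ (falling back to __name__) from
# the given globals to turn the relative name 'sub' into 'pkg.sub'.
sub = __import__('sub', globals={'__package__': 'pkg'}, level=1)
print(sub.__name__)   # -> 'pkg.sub' (in my reading of importlib._bootstrap)

# With the default level=0 the name is absolute and the globals mapping plays
# no further role -- in particular it is never injected into the module.
```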

--




[issue40249] __import__ doesn't honour globals

2020-04-10 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

OK, thanks for the clarification. I think this is obscure enough to warrant at 
least a paragraph or two to clarify the semantics of these arguments.
I changed the issue "components" and "type" to reflect that.

--
assignee:  -> docs@python
components: +Documentation -Interpreter Core
nosy: +docs@python
type: behavior -> enhancement




[issue40249] __import__ doesn't honour globals

2020-04-10 Thread Stefan Seefeld


New submission from Stefan Seefeld :

I'm trying to import custom scripts that expect the presence of certain 
variables in the global namespace.
It seems `__import__('script', globals=dict(foo='bar'))` doesn't have the 
expected effect of injecting "foo" into the namespace and making it accessible 
to the script being loaded.

Is this a bug in `__import__`, or am I missing something? If the behaviour is 
expected, how can I achieve the desired effect?
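For comparison, two mechanisms that do accept a caller-supplied namespace (a sketch; 'script.py' and 'foo' are the placeholder names from above):

```
import runpy

# runpy seeds the supplied mapping into the globals the script executes in...
result_globals = runpy.run_path('script.py', init_globals={'foo': 'bar'})

# ...and exec() takes the globals mapping literally:
env = {'foo': 'bar'}
with open('script.py') as f:
    exec(f.read(), env)
```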

--
components: Interpreter Core
messages: 366160
nosy: stefan
priority: normal
severity: normal
status: open
title: __import__ doesn't honour globals
type: behavior
versions: Python 3.6




[issue35830] building multiple (binary) packages from a single project

2019-03-08 Thread Stefan Seefeld


Change by Stefan Seefeld :


--
resolution:  -> works for me
stage:  -> resolved
status: open -> closed




[issue35830] building multiple (binary) packages from a single project

2019-01-25 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

Yes. Depending on the answer to my question(s), the request either becomes: 
"please add support for this use-case", or "this use-case isn't documented 
properly", i.e. a feature request or a bug report. You choose. :-)

--




[issue35830] building multiple (binary) packages from a single project

2019-01-25 Thread Stefan Seefeld


New submission from Stefan Seefeld :

I'm working on a project that I'd like to split into multiple separately 
installable components. The main component is a command-line tool without any 
external dependencies. Another component is a GUI frontend that adds some 
third-party dependencies.
Therefore, I'd like to distribute the code in a single source package, but 
separate binary packages (so users can install only what they actually need).

I couldn't find any obvious way to support such a scenario with either 
`distutils` or `setuptools`. Is there an easy solution to this? (I'm 
currently thinking of adding two `setup()` calls in my `setup.py` script. That 
would then run every command twice, so I'd need to override the `sdist` command 
to only build a single, joint source package.)
Is there a better way to achieve what I want?
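To make the idea concrete, here is a rough sketch (all project and package names below are made up); the component to build is selected externally, so each command only runs for one component at a time:

```
# setup.py -- sketch only: build one of two binary packages from one tree,
# selected via an environment variable.
import os
from setuptools import setup

component = os.environ.get('BUILD_COMPONENT', 'cli')

if component == 'cli':
    setup(name='mytool',
          packages=['mytool'])
else:
    setup(name='mytool-gui',
          packages=['mytool_gui'],
          install_requires=['mytool', 'PyQt5'])
```

Invoking e.g. `BUILD_COMPONENT=gui python setup.py bdist_wheel` would then produce only the GUI package.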

--
assignee: docs@python
components: Distutils, Documentation
messages: 334381
nosy: docs@python, dstufft, eric.araujo, stefan
priority: normal
severity: normal
status: open
title: building multiple (binary) packages from a single project
type: behavior




[issue35635] asyncio.create_subprocess_exec() only works in main thread

2019-01-07 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

> The limitation is a consequence of how Linux works.
> Unix has no cross-platform API for waiting for a child process to finish
> without blocking, other than handling the SIGCHLD signal.

Why does the `wait()` have to be non-blocking? We can call it once in 
response to the reception of a `SIGCHLD`, where we know the call 
wouldn't block. Then we can pass the `pid` to whatever event loop 
created the subprocess to do the cleanup there...

> On the other hand signal handlers in Python should work in the main thread.

That's fine.

> Your trick with a loop creation in the main thread and actual running in 
> another thread can work, but asyncio doesn't guarantee it.
> The behavior can be broken in next releases, sorry.

Yeah, I observed some strange issues that looked like they could be 
fixed by someone intimately familiar with `asyncio`. But given the 
documented limitation, it seemed wise not to descend into that rabbit 
hole, and so I (at least temporarily) abandoned the entire approach.

--

Stefan

   ...ich hab' noch einen Koffer in Berlin...

--




[issue35635] asyncio.create_subprocess_exec() only works in main thread

2019-01-07 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

OK, so while I have been able to work around the issues (by using `quamash` to 
bridge between `asyncio` and `Qt`), I'd still like to understand the rationale 
behind the limitation that any subprocess-managing event loop has to run in the 
main thread. Is this really an architectural limitation, or a limit of the 
current implementation?

And to your question: As I wasn't really expecting such a limitation, I would 
have expected 
"To handle signals and to execute subprocesses, the event loop must be run 
in the main thread."

to be written much more prominently (as a warning admonition even, perhaps).

--




[issue35635] asyncio.create_subprocess_exec() only works in main thread

2019-01-04 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

That's quite an unfortunate limitation! I'm working on a GUI frontend to a 
Python tool I wrote using asyncio, and the GUI (Qt-based) itself insists on 
running its own event loop in the main thread.

I'm not sure how to work around this limitation, but I can report that my 
previously reported strategy appears to be working well (so far).

What are the issues I should expect to encounter when running an asyncio event 
loop in a worker thread?

--




[issue35635] asyncio.create_subprocess_exec() only works in main thread

2019-01-01 Thread Stefan Seefeld


New submission from Stefan Seefeld :

This is an addendum to issue35621:

To be able to call `asyncio.create_subprocess_exec()` from another thread, a 
separate event loop needs to be created. To make the child watcher aware of 
this new loop, I have to call `asyncio.get_child_watcher().attach_loop(loop)`. 
However, in the current implementation this call needs to be made by the main 
thread (or else the `signal` module will complain as handlers may only be 
registered in the main thread).

So, to work around the above limitations, the following workflow needs to be 
used:

1) create a new loop in the main thread
2) attach it to the child watcher
3) spawn a worker thread
4) set the previously created event loop as the default loop in the worker thread

After that, I can run `asyncio.create_subprocess_exec()` in the worker thread. 
However, I suppose the worker thread will be the only thread able to call that 
function, given the child watcher's limitation to a single loop.
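For reference, a sketch of that workflow (Python 3.6-era API; `asyncio.get_child_watcher()` is deprecated in later releases, and the 'echo' command is just a stand-in):

```
import asyncio
import threading

loop = asyncio.new_event_loop()                # 1) create a loop in the main thread
asyncio.get_child_watcher().attach_loop(loop)  # 2) attach it to the child watcher

async def run():
    proc = await asyncio.create_subprocess_exec('echo', 'hello')
    await proc.wait()

def worker():
    asyncio.set_event_loop(loop)               # 4) make it the thread's default loop
    loop.run_until_complete(run())

t = threading.Thread(target=worker)            # 3) spawn a worker thread
t.start()
t.join()
```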

Am I missing something? Given the complexity of this, I would expect this to 
be better documented in the sections explaining how `asyncio.subprocess` and 
`threading` interact.

--
components: asyncio
messages: 332855
nosy: asvetlov, stefan, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.create_subprocess_exec() only works in main thread
type: behavior




[issue35621] asyncio.create_subprocess_exec() only works with main event loop

2019-01-01 Thread Stefan Seefeld


Change by Stefan Seefeld :


--
nosy: +stefan




[issue35449] documenting objects

2018-12-10 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

ad 3) sorry, I picked a bad example - I didn't mean to suggest that immutable 
objects should in fact become mutable by modifying their `__doc__` attribute.

ad 1) good, glad to hear that.

ad 2) fine. In fact, I'm not even proposing that per-instance docstring 
generation should be "on" by default. I'm merely asking whether the Python 
community can (or even should) agree on a single convention for how to 
represent them, so that tools can support that one convention, rather than 
each tool supporting its own syntax.

--




[issue35449] documenting objects

2018-12-09 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

On 2018-12-09 19:48, Karthikeyan Singaravelan wrote:
> There was a related proposal in 
> https://www.python.org/dev/peps/pep-0258/#attribute-docstrings

Right, but that was rejected (for unrelated reasons). The idea itself
was rejected by Guido
(https://www.python.org/dev/peps/pep-0224/#comments-from-our-bdfl), and
I'm not aware of anyone having addressed his concerns by proposing a
different syntax.

It's sad, as right now there doesn't appear to be any way to address
this need...

Stefan

-- 

  ...ich hab' noch einen Koffer in Berlin...

--




[issue35449] documenting objects

2018-12-09 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

On 2018-12-09 18:35, Steven D'Aprano wrote:
> Steven D'Aprano  added the comment:
>
>> Is there any discussion concerning what syntax might be used for 
>> docstrings associated with objects ?
> I don't know about PyDoc in general, but I would expect help(obj) to 
> just use obj.__doc__ which will return the instance docstring if it 
> exists, and if not, the type docstring (if it exists). No new syntax is 
> required, the standard ``help(obj)`` is sufficient.

That's why I distinguished between points 1) and 2) in my original mail:
The syntax is about how certain tokens in the parse tree are associated
as "docstring" with a given object (i.e., point 1), while the pydoc's
behaviour (to either accept any `__doc__` attributes, or only those of
specific types of objects) is entirely orthogonal to that (thus point 2).

I now understand that the current `pydoc` behaviour is considered
erroneous, and it sounds like a fix would be simple and focused in scope.

>> (There seem to be some partial 
>> solutions added on top of the Python parser (I think `epydoc` offered 
>> one), but it would be nice to have a built-in solution to avoid having 
>> to re-invent wheels.
> Are you suggesting we need new syntax to automatically assign docstrings 
> to instances? I don't think we do.

No, I'm not suggesting that. I'm suggesting that within the current
syntax, some additional semantic rules might be required to bind
comments (or strings) to objects as "docstrings". For example:

```
foo = 123
"""This is foo's docstring"""
```

might be one convention to add a docstring to a variable.

```
foo = 123
# This is foo's docstring
```

might be another.

None of this is syntactically new, but the construction of the AST from
the parse tree is. (I have seen both of these conventions used in custom
tools to associate documentation with variables, which of course requires
hacking into the parser internals to add the given docstring to the
object's `__doc__` attribute.)

It would be great to establish a convention for this, so in the future
tools don't have to invent their own (non-portable) convention.

--




[issue35449] documenting objects

2018-12-09 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

Exactly! I'm fully aware of the ubiquity of objects in Python, and it is for 
that reason that I had naively expected `pydoc` to simply DoTheRightThing when 
encountering an object carrying a `__doc__` attribute, rather than only 
working for types and function objects.

OK, assuming that this is a recognized bug / limitation, it seems easy to 
address.

Is there any discussion concerning what syntax might be used for docstrings 
associated with objects? (There seem to be some partial solutions added on top 
of the Python parser (I think `epydoc` offered one), but it would be nice to 
have a built-in solution to avoid having to re-invent wheels.)

--




[issue35449] documenting objects

2018-12-09 Thread Stefan Seefeld


New submission from Stefan Seefeld :

On multiple occasions I have wanted to add documentation not only to Python 
classes and functions, but also to instance variables. This seems to involve (at 
least) two orthogonal questions:

1) what is the proper syntax to associate documentation (docstrings?) with 
objects?
2) what changes need to be applied to Python's infrastructure (e.g., the help 
system) to support it?


I have attempted to work around 1) in my custom code by explicitly setting an 
object's `__doc__` attribute. However, calling `help()` on such an object would 
simply ignore that attribute, and instead list the documentation associated 
with the instance type.
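A minimal reproduction of what I mean (the class name is made up):

```
class Widget:
    """Class-level documentation."""

w = Widget()
w.__doc__ = "Documentation specific to this instance."

print(w.__doc__)   # the instance docstring is accessible...
help(w)            # ...but help() displays the documentation of Widget instead
```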

Am I missing something here, i.e. am I approaching the problem the wrong way, 
or am I the first to want to use object-specific documentation?

--
messages: 331443
nosy: stefan
priority: normal
severity: normal
status: open
title: documenting objects
type: enhancement




[issue33389] argparse redundant help string

2018-04-29 Thread Stefan Seefeld

New submission from Stefan Seefeld <ste...@seefeld.name>:

I'm using Python's `argparse` module to define optional arguments.
I'm calling `ArgumentParser.add_argument()` to add both short and long 
arguments, but I notice that the generated help message lists some information 
twice. For example:
```
parser.add_argument('-s', '--service', ...)
```
will generate

```
-s SERVICE, --service SERVICE
```
and when I add a `choices` argument, even the choices list is repeated. I think 
it would be more useful to suppress the repetition to produce output such as
```
-s|--service SERVICE ...
```
instead, with both the metavar and the choices list printed only once.
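The only workaround I'm aware of is a custom help formatter; the sketch below overrides a private `HelpFormatter` method, so it is not a supported API and may break between releases:

```
import argparse

class CondensedHelpFormatter(argparse.HelpFormatter):
    # Render "-s, --service SERVICE" instead of "-s SERVICE, --service SERVICE".
    def _format_action_invocation(self, action):
        if not action.option_strings or action.nargs == 0:
            return super()._format_action_invocation(action)
        metavar = self._format_args(
            action, self._get_default_metavar_for_optional(action))
        return ', '.join(action.option_strings) + ' ' + metavar

parser = argparse.ArgumentParser(formatter_class=CondensedHelpFormatter)
parser.add_argument('-s', '--service', choices=['web', 'db'])
parser.print_help()
```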

--
components: Library (Lib)
messages: 315917
nosy: stefan
priority: normal
severity: normal
status: open
title: argparse redundant help string
type: enhancement




[issue32036] error mixing positional and non-positional arguments with `argparse`

2017-11-15 Thread Stefan Seefeld

Stefan Seefeld <ste...@seefeld.name> added the comment:

It looks like https://bugs.python.org/issue14191 is a conversation about the 
same inconsistent behaviour. It is set to "fixed". Can you comment on this? 
Should I follow the advice mentioned there about how to work around the issue?

--




[issue32036] error mixing positional and non-positional arguments with `argparse`

2017-11-15 Thread Stefan Seefeld

Stefan Seefeld <ste...@seefeld.name> added the comment:

On 15.11.2017 12:54, R. David Murray wrote:
> Can you reproduce this without your PosArgsParser?
I can indeed (by simply commenting out the `action` argument to the
`add_argument()` calls).
That obviously results in all positional arguments being accumulated in
the `goal` member, as there is no logic to distinguish `a` from `a=b`
semantically.

--




[issue32036] error mixing positional and non-positional arguments with `argparse`

2017-11-15 Thread Stefan Seefeld

New submission from Stefan Seefeld <ste...@seefeld.name>:

I'm trying to mix positional and non-positional arguments with a script using 
`argparse`, but I observe inconsistent behaviour.
The attached test runs fine when invoked with

test_argparse.py --info a a=b
test_argparse.py a a=b --info

but produces the error `error: unrecognized arguments: a=b` when invoked as

test_argparse.py a --info a=b

Is this intended behaviour? If so, is it documented? If not, is there a 
way to make this work with existing `argparse` versions?
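A sketch of one way to sidestep this with newer `argparse` (Python 3.7+): `parse_intermixed_args()` accepts options freely interleaved with positionals. The argument names below are illustrative only.

```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--info', action='store_true')
parser.add_argument('goal', nargs='*')

# parse_args() rejects "a --info a=b"; parse_intermixed_args() accepts it.
args = parser.parse_intermixed_args(['a', '--info', 'a=b'])
print(args.goal, args.info)   # ['a', 'a=b'] True
```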

--
components: Library (Lib)
files: test_argparse.py
messages: 306283
nosy: stefan
priority: normal
severity: normal
status: open
title: error mixing positional and non-positional arguments with `argparse`
type: behavior
Added file: https://bugs.python.org/file47268/test_argparse.py




[issue30496] Incomplete traceback with `exec` on SyntaxError

2017-05-29 Thread Stefan Seefeld

Stefan Seefeld added the comment:

OK, fair enough, that makes sense. As I said in my last message, I was mainly 
trying to figure out the exact location of the error in the executed 
script, which I got from inspecting the SyntaxError.
So if all of this is expected behaviour, I think we can close this issue.

Many thanks for following up.

--
resolution:  -> not a bug
stage: needs patch -> resolved
status: open -> closed




[issue30496] Incomplete traceback with `exec` on SyntaxError

2017-05-29 Thread Stefan Seefeld

Stefan Seefeld added the comment:

Answering my own question:

It appears I can get the location of a syntax error by inspecting the
raised `SyntaxError`, which solves my specific use-case. 
The bug remains, though: The traceback is incomplete if it stems from a syntax 
error.
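Concretely, something along these lines (the `source` string is just a placeholder with a deliberate syntax error):

```
import traceback

source = "def broken(:\n    pass\n"
try:
    exec(compile(source, 'script', 'exec'), {})
except SyntaxError as e:
    # The error location inside the executed source is carried on the
    # exception itself, even though the traceback only points at the
    # compile()/exec() call site.
    print(e.filename, e.lineno, e.offset, e.text)
    traceback.print_exc()
```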

--




[issue30496] Incomplete traceback with `exec` on SyntaxError

2017-05-29 Thread Stefan Seefeld

Stefan Seefeld added the comment:

Some further experiments:

Replacing the `exec(f.read(), env)` line by
```
code = compile(f.read(), 'script', 'exec')
exec(code, env)
```
exhibits the same behaviour. If I remove the `try...except`, the correct
(full) traceback is printed out. So it looks like the issue is with the 
traceback propagation through exception handlers when the error happens during 
parsing.

--




[issue30496] Incomplete traceback with `exec`

2017-05-28 Thread Stefan Seefeld

New submission from Stefan Seefeld:

The following code is supposed to catch and report errors encountered during 
the execution of a (python) script:

```
import traceback
import sys

try:
    env = {}
    with open('script') as f:
        exec(f.read(), env)
except:
    type_, value_, tb = sys.exc_info()
    traceback.print_tb(tb)
```
However, depending on the nature of the error, the traceback may contain the 
location of the error *within* the executed `script` file, or it may only 
report the above `exec(f.read(), env)` line.

The attached tarball contains both the above as well as a 'script' that 
exhibits the problem.

Is this a bug or am I missing something? Are there ways to work around this, 
i.e. to determine the correct (inner) location of the error?

(I'm observing this with both Python 2.7 and Python 3.5)

--
files: pyerror.tgz
messages: 294645
nosy: stefan
priority: normal
severity: normal
status: open
title: Incomplete traceback with `exec`
type: behavior
Added file: http://bugs.python.org/file46908/pyerror.tgz




[issue26481] unittest discovery process not working without .py source files

2016-03-04 Thread Stefan Seefeld

New submission from Stefan Seefeld:

The unittest test discovery right now only looks into sub-packages if they 
contain an `__init__.py` file. That's an unnecessary requirement, as packages 
are also importable if only `__init__.pyc` is present.

--
components: Library (Lib)
messages: 261192
nosy: stefan
priority: normal
severity: normal
status: open
title: unittest discovery process not working without .py source files




[issue25520] unittest load_tests protocol not working as documented

2016-01-21 Thread Stefan Seefeld

Stefan Seefeld added the comment:

I believe what I actually want is for the discovery mechanism to be fully 
implicit. It turns out that already *almost* works right now.

What doesn't work (and what this bug report really was about initially) is the 
use of the 'discover' command with the '-p "*.py"' argument, which for some 
reason makes certain tests (all?) count twice. It looks like packages are 
visited twice, once as modules, and once via their contained '__init__.py' 
file...

(For the implicit discovery to work better, I believe, the discovery-specific 
options need to be made available through the main parser, so they can be used 
even without the 'discover' command.)

--




[issue25520] unittest load_tests protocol not working as documented

2016-01-20 Thread Stefan Seefeld

Stefan Seefeld added the comment:

Hi, I'm investigating more issues related to test loading, and thus I have 
discovered issue #16662.

I have found quite a number of inconsistencies and bugs in the loading of 
tests. But without getting a sense of what the supposed behavior is I find it 
difficult to come up with workarounds and fixes.

Is there a place where these issues can be discussed (rather than just looking 
at each bug individually)?

I'd ultimately like to be able to invoke

  `python -m unittest my.test.subpackage` 

and have unittest pick up all the tests within that recursively, with and 
without load_tests() functions.

(On a tangential note, I would also like to have a mode where the found tests 
are listed without being executed. I have hacked a pseudo-TestRunner that does 
that, but I'm not sure this is the best approach.)
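Roughly what I have in mind for the listing part, as a sketch (the start directory is made up):

    import unittest

    def iter_tests(suite):
        """Flatten a (possibly nested) TestSuite into individual test cases."""
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                yield from iter_tests(item)
            else:
                yield item

    suite = unittest.defaultTestLoader.discover('my/test/subpackage')
    for test in iter_tests(suite):
        print(test.id())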

Is there any place where the bigger picture can be discussed?

Thanks,

--




[issue26033] distutils default compiler API is incomplete

2016-01-06 Thread Stefan Seefeld

New submission from Stefan Seefeld:

I'm trying to use the distutils compiler framework to preprocess a header (to be used 
with the cffi package).
The code is

import sys, distutils.ccompiler
from os.path import join

compiler = distutils.ccompiler.new_compiler()
compiler.add_include_dir(join(sys.prefix, 'include'))
compiler.preprocess(source)  # 'source' names the header file to preprocess

This raises this exception (on Linux):

  File ".../distutils/unixccompiler.py", line 88, in preprocess
pp_args = self.preprocessor + pp_opts
TypeError: unsupported operand type(s) for +: 'NoneType' and 'list'

caused by `self.preprocessor` being set to None (with the preceding comment:

# The defaults here
# are pretty generic; they will probably have to be set by an outsider
# (eg. using information discovered by the sysconfig about building
# Python extensions).

It seems that code never got fully implemented.
Further, the MSVC version of the compiler (msvccompiler.py) doesn't even 
implement a `preprocess()` method, so this falls back to the 
CCompiler.preprocess() default, which does nothing!
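A possible workaround, as a sketch only (it assumes a Unix-like `cc` on the PATH), is to supply the missing default before calling `preprocess()`:

    import sys
    from os.path import join
    from distutils.ccompiler import new_compiler

    compiler = new_compiler()
    if compiler.compiler_type == 'unix':
        # fill in the default that unixccompiler.py leaves as None
        compiler.set_executables(preprocessor='cc -E')
    compiler.add_include_dir(join(sys.prefix, 'include'))
    compiler.preprocess('some_header.h')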

--
components: Distutils
messages: 257663
nosy: dstufft, eric.araujo, stefan
priority: normal
severity: normal
status: open
title: distutils default compiler API is incomplete
type: behavior
versions: Python 2.7, Python 3.5




[issue25726] sys.setprofile / sys.getprofile asymmetry

2015-11-24 Thread Stefan Seefeld

New submission from Stefan Seefeld:

I'm using the `cProfile` module to profile my code.

I tried to temporarily disable the profiler by using:

  prof = sys.getprofile()
  sys.setprofile(None)
  ...
  sys.setprofile(prof)

resulting in an error.

The reason is that with `cProfile`, `sys.getprofile` returns the profile object 
itself, which isn't suitable as an argument for `sys.setprofile` (which expects a 
callable).

Notice that if I use the `profile` module instead of `cProfile`, the above 
works fine.
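A sketch of a possible workaround: keep a reference to the `Profile` object and pause it directly, instead of round-tripping through `sys.getprofile()`/`sys.setprofile()`:

    import cProfile

    prof = cProfile.Profile()
    prof.enable()
    # ... code being profiled ...
    prof.disable()    # temporarily stop collecting
    # ... section to exclude ...
    prof.enable()     # resume profiling
    prof.disable()
    prof.print_stats()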

--
messages: 255301
nosy: stefan
priority: normal
severity: normal
status: open
title: sys.setprofile / sys.getprofile asymmetry
type: behavior
versions: Python 3.4




[issue25520] unittest load_tests protocol not working as documented

2015-10-30 Thread Stefan Seefeld

New submission from Stefan Seefeld:

As described in the README contained in the attached tarball, I'm observing 
wrong behavior. I have based this code on my understanding of 
https://docs.python.org/2/library/unittest.html#load-tests-protocol, but the 
effect isn't as expected (I see duplicate appearances of tests whenever I use 
the load_tests() mechanism.)

--
files: unittest_bug.tgz
messages: 253757
nosy: stefan
priority: normal
severity: normal
status: open
title: unittest load_tests protocol not working as documented
type: behavior
versions: Python 2.7, Python 3.4
Added file: http://bugs.python.org/file40905/unittest_bug.tgz




[issue2101] xml.dom documentation doesn't match implementation

2008-02-13 Thread Stefan Seefeld

New submission from Stefan Seefeld:

The docs at http://docs.python.org/lib/dom-element-objects.html
claim that removeAttribute(name) silently ignores the attempt to
remove an unknown attribute. However, the current implementation
in the minidom module (part of _xmlplus) raises an xml.dom.NotFoundErr
exception.
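A minimal illustration of the behaviour described (sketch):

    import xml.dom.minidom

    doc = xml.dom.minidom.parseString('<root/>')
    # The documentation suggests this is silently ignored; minidom raises
    # xml.dom.NotFoundErr instead.
    doc.documentElement.removeAttribute('no-such-attribute')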

--
components: Documentation, XML
messages: 62359
nosy: stefan
severity: normal
status: open
title: xml.dom documentation doesn't match implementation
type: behavior
versions: Python 2.5




[issue2041] __getslice__ still called

2008-02-07 Thread Stefan Seefeld

Stefan Seefeld added the comment:

Mark,

thanks for the quick follow-up.
OK, i now understand the situation better. The documentation I had read 
originally didn't talk about special-casing built-in objects. (And since 
I want to extend a tuple, I do have to override __getslice__ since I 
want to make sure the returned object still has the derived type.)

Yes, I believe this issue can be closed as invalid.
(Though I believe the docs could be a bit more clear about this.)

Thanks,
Stefan

-- 

   ...ich hab' noch einen Koffer in Berlin...




[issue2041] __getslice__ still called

2008-02-07 Thread Stefan Seefeld

New submission from Stefan Seefeld:

The Python documentation states that since Python 2.0, __getslice__ is
obsoleted by __getitem__. However, testing with Python 2.3 as well as
2.5, I find the following surprising behavior:

class Tuple(tuple):

  def __getitem__(self, i): print '__getitem__', i
  def __getslice__(self, i): print '__getslice__', i

t = Tuple()
t[0] # __getitem__ called with type(i) == int
t[0:2] # __getslice__ called with type(i) == slice
t[0:2:1] # __getitem__ called with type(i) == slice

--
components: Interpreter Core
messages: 62162
nosy: stefan
severity: major
status: open
title: __getslice__ still called
type: behavior
versions: Python 2.3, Python 2.5




Announcement: Synopsis 0.8

2005-06-09 Thread Stefan Seefeld
I'm pleased to announce the release of Synopsis 0.8.


Synopsis is a multi-language source code introspection tool that
provides a variety of representations for the parsed code, to
enable further processing such as documentation extraction,
reverse engineering, and source-to-source translation.

While this release mainly focuses on an internal redesign,
there are nevertheless a number of major enhancements and
new features:


* Synopsis contains a new C Parser.
* The C++ parser contains lots of bug fixes.
* The HTML formatter contains a number of bug fixes.


Synopsis' internals have been largely redesigned, to eventually
expose public C++ and Python APIs to the lower-level code
representations such as parse trees or symbol tables.
It currently contains experimental code to use synopsis as a
scriptable source-to-source compiler, though these APIs are
still in development.


Source and documentation are available at http://synopsis.fresco.org.
For contact information, see http://synopsis.fresco.org/contact.html.



None module reference

2005-05-21 Thread Stefan Seefeld
hello,

I've run into a bug that I find hard to understand:

In a Python module of mine I import system modules
('sys', say) and then use them from within some functions.

However, during program termination I'm calling
one such function and the module reference ('sys')
is 'None'!

What does that mean? Have those modules already
been unloaded? If so, why, given that my
current module still references them?

Any help is highly appreciated,

Stefan



Re: domain specific UI languages

2005-04-13 Thread Stefan Seefeld
max(01)* wrote:
hi.
in a previous thread, mr lundh talks about the possibility to create 
domain specific UI languages using tkinter.

can he (or anyone else who pleases) explain what they are? give some 
examples (simple is better)?
Without having read the original thread, I imagine such a beast to
be used to easily script common UI tasks such as dialogs.
Things to express in such a context are
* GUI structure
* simple callbacks and their relation to events
* some styling
I've done some GUI prototyping in a former life using a combination of
Python, XML, and CSS.
Popular examples of such languages include XUL (Mozilla), JavaScript,
and similar.
HTH,
Stefan


Re: Missing Module when calling 'Import' _omnipy

2005-04-12 Thread Stefan Seefeld
[EMAIL PROTECTED] wrote:
I found the file _omnipy.pyd; is that what you mean?
Yep.
Here is my Lib Path:
[...]
As a first step you may consider adding the directory containing '_omnipy.pyd'
to your PYTHONPATH variable. Second, you may read any documentation you can
find at http://omniorb.sourceforge.net/, and finally ask any unanswered questions
on the omniORB mailing list.
Sorry, it has been a long while since I last used omniORB.
Regards,
Stefan


Re: Problem with import from omniORB import CORBA, PortableServer

2005-04-11 Thread Stefan Seefeld
[EMAIL PROTECTED] wrote:
Thanks for the reply. I think you're right; my problem is what version
to use. I want to write some Python scripts calling CORBA objects; what
do I need?
omnipython-1.5.2-1
this looks like *really* old (and obsolete) stuff...
omniORBpy 2.5
omniORB 4.0.5
Python 2.4.1
These look fine. Make sure to either have a binary package for omniORBpy
that was compiled for the version of Python you are actually using, or
alternatively compile it yourself.
HTH,
Stefan


Re: Missing Module when calling 'Import' _omnipy

2005-04-11 Thread Stefan Seefeld
[EMAIL PROTECTED] wrote:
I am trying to run a CORBA example using Python and I get the following
error:
import _omnipy
ImportError: No module named _omnipy
Where can I find this module?
This is an extension module (i.e. the file is named _omnipy.so or 
_omnipy.dll, I believe),
and it is in the library path, not the Python path. Thus you will have to add 
the library
path to the PYTHONPATH variable.
For details see 
http://omniorb.sourceforge.net/omnipy2/omniORBpy/omniORBpy001.html#toc2
or the omniORB specific mailing list.
HTH,
Stefan


Re: Stylistic question about inheritance

2005-03-31 Thread Stefan Seefeld
Andrew Koenig wrote:
Of course, there are reasons to have a base class anyway.  For example, I 
might want it so that type queries such as isinstance(foo, Expr) work.  My 
question is: Are there other reasons to create a base class when I don't 
really need it right now?
Coming from C++ myself, I still prefer to use inheritance even if Python
doesn't force me to do it. It's simply a matter of mapping the conceptual
model to the actual design/implementation, if ever possible.
Regards,
Stefan


Re: Which is easier? Translating from C++ or from Java...

2005-03-28 Thread Stefan Seefeld
[EMAIL PROTECTED] wrote:
Patrick Useldinger wrote:
cjl wrote:

Implementations of what I'm trying to accomplish are available
(open
source) in C++ and in Java.
Which would be easier for me to use as a reference?
I'm not looking for automated tools, just trying to gather opinions
on
which language is easier to understand / rewrite as python.

Depends on what language you know best. But Java is certainly easier
to
read than C++.

There's certainly some irony in those last two sentences. However, I
agree with the former. It depends on which you know better, the style
of those who developed each and so forth. Personally, I'd prefer C++.
I don't think the OP was asking for personal preference, and so I happen
to agree with the reply: parsing Java is definitely *much* simpler than
parsing C++, no matter how well you know either. As far as manual translations
go, it is much less a matter of ease of parsing and more one of how closely
programming idioms match between the two languages that are involved.
And that obviously also depends on the specific code that needs to
be rewritten and the style it is written in (i.e. for example OO vs. templates,
etc.).
Regards,
Stefan


Re: exec src in {}, {} strangeness

2005-03-21 Thread Stefan Seefeld
Do Re Mi chel La Si Do wrote:
Hi !
Try :
exec f in globals(),locals()
or
exec(f,globals(),locals())
or
exec f in globals(),globals()
or
exec(f,globals(),globals())
Indeed, using 'globals()' and 'locals()' works. However,
both refer to the same underlying object, which is a bit
confusing. (Under what circumstances does 'locals()' return
a different object than 'globals()'?)
The problem appears to be that
exec f in a, b
where a and b are distinct dictionaries, does not look up
symbols in 'a' when in local scope.
I filed a bug report (#1167300).
Regards,
Stefan


Re: exec src in {}, {} strangeness

2005-03-21 Thread Stefan Seefeld
Peter Hansen wrote:
Stefan Seefeld wrote:
Indeed, using 'globals()' and 'locals()' works. However,
both report the same underlaying object, which is a bit
confusing. (Under what circumstances does 'locals()' return
not the same object as 'globals()' ?)

When you aren't at the interactive prompt...  there are
no locals there, so locals() just maps through to globals().
(Probably this applies to all code at the module level,
as opposed to code inside any callable, but I haven't
verified... you can easily enough.)
Does this information invalidate your bug report?
No, but that's possibly only because I don't (yet) understand
the implications of what you are saying.
Is there anything wrong with 'exec source in a, b' where
a and b are distinct, originally empty dictionaries? Again,
my test code was
class Foo: pass
class Bar:
  foo = Foo
and it appears as if 'Foo' was added to 'a', but when evaluating
'foo = Foo' the interpreter only looked in 'b', not 'a'.
Thanks,
Stefan


Re: exec src in {}, {} strangeness

2005-03-21 Thread Stefan Seefeld
Bernhard Herzog wrote:
Stefan Seefeld [EMAIL PROTECTED] writes:

Is there anything wrong with 'exec source in a, b' where
a and b are distinct, originally empty dictionaries? Again,
my test code was
class Foo: pass
class Bar:
  foo = Foo
and it appears as if 'Foo' was added to 'a', but when evaluating
'foo = Foo' the interpreter only looked in 'b', not 'a'.

No, it's the other way round.  Foo is added in b since bindings are done
in the local scope.  Your case is a bit more complicated, though.
Here's what I think happens:
class Foo is bound in b, the locals dictionary, so there is no reference
to Foo in the globals dictionary.  The body of class Bar is executed with
its own new locals dictionary.  That locals dictionary will effectively
be turned into Bar.__dict__ when the class object is created.
When foo = Foo is executed, Foo is first looked up in that new locals
dictionary.  That fails, so it's also looked up in the globals
dictionary a.  That fails as well because Foo was bound in b.  The final
lookup in the builtins also fails, and thus you get an exception.
Thanks for the explanation! I'm still unable to draw a conclusion:
What is wrong? Am I doing something stupid (I did try various things
such as inserting __builtin__ into the dictionary, etc.)?
Or is that really a bug?
Thanks,
Stefan
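To summarize the thread, a small sketch in the modern function form of exec (the behaviour matches what was observed above with the Python 2 statement form):

    src = """
    class Foo: pass
    class Bar:
        foo = Foo
    """

    ns = {}
    exec(src, ns)        # one mapping acts as both globals and locals: works

    g, l = {}, {}
    try:
        exec(src, g, l)  # 'Foo' is bound in l (the locals); the body of Bar only
                         # searches its own namespace, then g and the builtins,
                         # so the lookup of 'Foo' fails
    except NameError as e:
        print(e)         # name 'Foo' is not defined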


exec src in {}, {} strangeness

2005-03-20 Thread Stefan Seefeld
hi there,
I have trouble running some Python code with 'exec':

t.py contains:

    class Foo: pass
    class Bar:
        f = Foo

From a Python shell I do:

    >>> f = ''.join(open('t.py').readlines())
    >>> exec f in {}, {}
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "<string>", line 2, in ?
      File "<string>", line 3, in Bar
    NameError: name 'Foo' is not defined
However, when I use the current global and local scope, i.e.
simply 'exec f', everything works fine. What am I missing?
Thanks,
Stefan


Re: lies about OOP

2004-12-13 Thread Stefan Seefeld
Craig Ringer wrote:
On Tue, 2004-12-14 at 16:02, Mike Thompson wrote:

I would pick the publication of Design Patterns in 1995 by the Gang of
Four (Gamma, Helm, Johnson, and Vlissides),  to be the herald of when the
Joy of OOP would be widely known.  DP formalized a taxonomy for many of
the heuristics that had evolved only intuitively up until then.  Its
emergence reflects a general maturation of concept and practice, sufficient
to say that the Joy of OOP could be said to be widely known.

In actual fact, virtually all the design patterns came from the 
Interviews C++ GUI toolkit written in the early '90s. What an utterly 
brilliant piece of work that was.

As somebody who has just been bowled over by how well Qt works, and how
it seems to make OOP in C++ work right (introspection, properties,
etc), I'd be interested in knowing what the similarities or lack thereof
between Qt and Interviews are.
Qt provides widgets that a client app. can compose into a GUI.
InterViews provides 'glyphs' [*] that form a scene graph in a display
server. Although InterViews usually was compiled into a client-side
library, it provided all the functionality required by a display server
such as redisplay and pick traversals. Indeed, the X Consortium
supported InterViews (and its successor Fresco) for a while as the next
generation for its 'X Windowing System', until it was dropped (for
mostly political reasons, as usual) about '95.
(Fresco had been nominated, together with OpenDoc, as candidates for an
'Compound Document Architecture' RFP on the Object Management Group.
OpenDoc won.)
[*] The term 'glyph' reflects the fact that the scene graph nodes in
InterViews are extremely fine-grained, i.e. glyphs can represent
individual characters or elements of vector graphics such as paths.
That's unlike any conventional 'toolkit' such as Qt, where a 'widget'
is quite coarse-grained, and the display of such 'widgets' is typically
not that of a structured graphic, but procedural.
Regards,
Stefan