[issue7744] Allow site.addsitedir insert to beginning of sys.path

2014-09-10 Thread Michael R. Bernstein

Michael R. Bernstein added the comment:

And in case it isn't clear how such a method would help, here is what the 
earlier code would look like:

import os
import site

dirname = 'lib'
dirpath = os.path.join(os.path.dirname(__file__), dirname)

site.insertsitedir(1, dirpath)

But (other than the imports) it could even be reduced to a one-liner:

site.insertsitedir(1, os.path.join(os.path.dirname(__file__), 'lib'))

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7744
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7744] Allow site.addsitedir insert to beginning of sys.path

2014-09-09 Thread Michael R. Bernstein

Michael R. Bernstein added the comment:

The lack of a left-append option for site.addsitedir(path), or a
site.insertsitedir(index, path) (which is what I would consider a better 
solution), causes quite a few contortions on some hosted platforms, notably 
Google App Engine, for vendored packages.

I've been trying to find a more elegant solution than the following code, but 
haven't been able to find anything:

import os
import site
import sys

dirname = 'lib'
dirpath = os.path.join(os.path.dirname(__file__), dirname)

sys.path, remainder = sys.path[:1], sys.path[1:]
site.addsitedir(dirpath)
sys.path.extend(remainder)

And even asked on StackOverflow: 
http://stackoverflow.com/questions/25709832/what-is-the-cleanest-way-to-add-a-directory-of-third-party-packages-to-the-begin

Having a site.insertsitedir(index, path) available would make things so much 
simpler.
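A minimal sketch of what such an insertsitedir() could look like, built on
the existing site.addsitedir() with the same slicing trick as above (the
name and signature are the proposal's, not an existing API):

```python
import site
import sys


def insertsitedir(index, path):
    """Hypothetical helper: run site.addsitedir(path), but splice the
    entries it appends into sys.path at `index` instead of at the end."""
    head, tail = sys.path[:index], sys.path[index:]
    sys.path = head
    site.addsitedir(path)  # appends `path` plus any .pth-declared dirs
    sys.path.extend(tail)
```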

--
nosy: +webmaven
versions: +Python 3.3, Python 3.4, Python 3.5




Re: How to get atexit hooks to run in the presence of execv?

2009-01-29 Thread R. Bernstein
Mark Wooding m...@distorted.org.uk writes:

 ro...@panix.com (R. Bernstein) writes:

 Recently, I added remote debugging via TCP sockets. (Well, also FIFO's
 as well but closing sockets before restarting is what's of concern.)

 I noticed that execv in Python 2.5.2 doesn't arrange exit hooks to get
 called. Should it?

 I'd consider that to be highly unusual.  Certainly, the C atexit(3)
 function is called only in response to a call to exit(3) (possibly
 implicitly by returning from main), and not by execve(2) or any of its
 little friends.

 Your specific problem is to do with file descriptors, so it's probably
 best dealt with using the close-on-exec flag:

 from fcntl import fcntl, F_GETFD, F_SETFD, FD_CLOEXEC

 sk = socket(...)
 ## ...
 fcntl(sk.fileno(), F_SETFD,
   fcntl(sk.fileno(), F_GETFD) | FD_CLOEXEC)

 Now the socket will be magically closed when you exec.. another program.

Thanks for the wealth of information. Alas, somehow I think this begs
the question. I *know* how to arrange in the debugger for it to clean
up after itself. But it's the program being debugged which I have no
control over. And I would like to give it an opportunity to clean up
after itself. 


 Finally, can I urge against TCP sockets in an application like this?

By all means, I hope people will offer thoughts, concerns and ideas. 

 Certainly without adequate authentication, it will simply be insecure
 even within a single multiuser host (e.g., using localhost only) -- and
 assuming that even a home machine has only a single user is becoming
 less realistic.  Unix-domain sockets live in the filesystem, and access
 to them is limited using the standard filesystem mechanisms.

 If you're expecting inter-host communications (e.g., remote debugging),
 it's a good idea to protect the session using TLS or something (much as
 I dislike the TLS certification/public-key- distribution model it's way
 better than nothing at all).

Well, I also started coding FIFO's as well as TCP sockets, partly
because I could, and partly to try to keep the interface
generic. And in the back of my mind, I'd like to add serial devices as
well - I don't see a reason not to.

Initially, I probably won't add authentication or encryption. I'm
having enough of a time trying to get this much working. (cPickle-ing
over sockets seems to still require knowing how many messages were
sent and unpickling each of those, and TCP_NODELAY isn't allowed and
doesn't seem the right thing either.)

However, what I really would like to see is authentication and
encryption added as an independent plugin layer, much as I view whether
one is debugging locally or not. So I don't see this as an issue per
se with TCP sockets.


 -- [mdw]
--
http://mail.python.org/mailman/listinfo/python-list


How to get atexit hooks to run in the presence of execv?

2009-01-28 Thread R. Bernstein
As a hobby I've been (re)writing a debugger. One of the commands,
restart, works by calling an execv(). You may need to do this when
the program you are debugging is threaded or when one needs to ensure
that all program state is reinitialized.

Recently, I added remote debugging via TCP sockets. (Well, also FIFO's
as well but closing sockets before restarting is what's of concern.)

I noticed that execv in Python 2.5.2 doesn't arrange exit hooks to get
called. Should it?  Furthermore, I don't seen any atexit routine that
would let me initiate such finalization. Should there be one?

Or perhaps I'm missing something. Is there a way to arrange atexit
hooks to get run before issuing an execv-like call? 

Thanks.
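For what it's worth, one workaround (a sketch, relying on the private
CPython helper atexit._run_exitfuncs(), since there is no public call for
this) is to drain the exit hooks yourself just before the exec:

```python
import atexit
import os
import sys


def restart(argv):
    """Re-exec the current interpreter, first running atexit hooks.

    atexit._run_exitfuncs() is a private CPython function; using it is
    exactly the kind of workaround the lack of a public API forces.
    """
    atexit._run_exitfuncs()
    os.execv(sys.executable, [sys.executable] + argv)
```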



Re: How do I DRY the following code?

2008-12-30 Thread R. Bernstein
Steven D'Aprano st...@remove-this-cybersource.com.au writes:

 On Mon, 29 Dec 2008 21:13:55 -0500, R. Bernstein wrote:

 How do I DRY the following code? 

 class C():
 [snip code]

 Move the common stuff into methods (or possibly external functions). If 
 you really need to, make them private by appending an underscore to the 
 front of the name.


 class C():
   def common1(self, *args):
 return common1
   def common2(self, *args):
 return common2
   def _more(self, *args):  # Private, don't touch!
 return more stuff

   def f1(self, arg1, arg2=None, globals=None, locals=None):
   ... unique stuff #1 ...
   self.common1()
    ret = eval(args, globals, locals)
    self._more()
    return ret

   def f2(self, arg1, arg2=None, *args, **kwds):
   ... unique stuff #2 ...
   self.common1()
    ret = arg1(args, *args, **kwds)
    self.common2()
    return ret

   def f3(self, arg1, globals=None, locals=None):
   ... unique stuff #3 ...
   self.common1()
   exec cmd in globals, locals
   self.common2()
   return None  # unneeded

   def f4(self, arg1, globals=None, locals=None):
   ... unique stuff #4 ...
   self.common1()
   execfile(args, globals, locals)
   self._more()


I realize I didn't mention this, but common1 contains a try: and _more contains
the except ... finally.

So for example there is 

self.start()
try:
    res = func(*args, **kwds)   # this line is different
except ...:
    ...
finally:
    ...

Perhaps related is the fact that common1 and common2 are really
related and therefore breaking into the function into 3 parts sort of
breaks up the flow of reading one individually. 

I had also considered say passing an extra parameter and having a case
statement around the one line that changes, but that's ugly and
complicates things too.
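One way to keep the shared try/finally in a single place while letting each
f* supply its one differing line is a context manager; here is a sketch
(start/stop are stand-ins for the real common1/_more bodies):

```python
from contextlib import contextmanager


class C(object):
    def start(self):
        self.started = True   # stands in for "common stuff #1"

    def stop(self):
        self.started = False  # stands in for the except/finally cleanup

    @contextmanager
    def running(self):
        """Shared setup/teardown wrapped around the one varying line."""
        self.start()
        try:
            yield self
        finally:
            self.stop()

    def runcall(self, func, *args, **kwds):
        with self.running():
            return func(*args, **kwds)  # the only line that differs
```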



 An explicit return None is probably not needed. Python functions and 
 methods automatically return None if they reach the end of the function.

"return None" is perhaps a stylistic idiom. I like to put returns
at the end of a function because it helps me (and Emacs) keep
indentation straight. Generally, I'll put "return None" if the
function otherwise returns a value and just "return" if I don't think
there is a usable return value.

I've also noticed that using "pass" at the end of blocks also helps me
and Emacs keep indentation straight. For example:

if foo():
    bar()
else:
    baz()
    pass
while True:
    something
    pass






 Above there are two kinds of duplication: that within class C and that
 outside which creates an instance of the class C and calls the
 corresponding method.

 Do you really need them? 

Perhaps, because they may be the more common idioms. And by having
that function there a temporary complex object can be garbage
collected and doesn't pollute the parent namespace. Is this a big
deal? I don't know but it doesn't hurt.

 If so, you're repeating yourself by definition. 
 That's not necessarily a problem, the stand-alone functions are just 
 wrappers of methods. You can decrease (but not eliminate) the amount of 
 repeated code with a factory function:

 def build_standalone(methodname, docstring=None):
     def function(arg, arg2=None, globals=None, locals=None):
         c = C()
         return getattr(c, methodname)(arg, arg2, globals, locals)
     function.__name__ = methodname
     function.__doc__ = docstring
     return function

 f1 = build_standalone('f1', "Docstring f1")
 f2 = build_standalone('f2', "Docstring f2")
 f3 = build_standalone('f3', "Docstring f3")

Yes, this is better than the lambda. Thanks!


 There's no way to avoid the repeated f1 etc.

 But honestly, with only three such functions, I'd consider that overkill.

Yeah, I think so too.

 Lest the above be too abstract, those who want to look at the full
 (and redundant) code:

   http://code.google.com/p/pydbg/source/browse/trunk/api/pydbg/api/debugger.py


 You have parameters called Globals and Locals. It's the usual Python 
 convention that names starting with a leading uppercase letter is a 
 class. To avoid shadowing the built-ins, it would be more conventional to 
 write them as globals_ and locals_. You may or may not care about 
 following the convention :)

Ok. Yes, some earlier code used globals_ and locals_. Thanks. 

 I notice you have code like the following:

 if Globals is None:
 import __main__
 Globals = __main__.__dict__


 I would have written that like:

 if Globals is None:
 Globals = globals()

Yes, that's better. Thanks.

 or even

 if Globals is None:
 from __main__ import __dict__ as Globals

 You also have at least one unnecessary pass statement:

 if not isinstance(expr, types.CodeType):
 expr = expr+'\n'
 pass

 The pass isn't needed.


 In your runcall() method, you say:

 res = None
 self.start(start_opts)
 try:
 res = func(*args, **kwds)
 except DebuggerQuit:
 pass
 finally:
 self.stop()
 return res

 This is probably better

Re: How do I DRY the following code?

2008-12-30 Thread R. Bernstein
Patrick Mullen saluk64...@gmail.com writes:
 def f1(...):
   "Docstring f1"
   c = C()
   return c.f1(...)

 def f2(...):
   "Docstring f2"
   c = C()
   return c.f2(...)

 Why not just do either this:
 C().f2(..) where you need f2

Yes, this is a little better. Thanks!


How do I DRY the following code?

2008-12-29 Thread R. Bernstein
How do I DRY the following code? 

class C():

  def f1(self, arg1, arg2=None, globals=None, locals=None):
  ... unique stuff #1 ...
  ... some common stuff #1 ...
  ret = eval(args, globals, locals)
  ... more stuff #2 ...
  return ret

  def f2(self, arg1, arg2=None, *args, **kwds):
  ... unique stuff #2 ...
  ... some common stuff #1 ...
  ret = arg1(args, *args, **kwds)
  ... more common stuff #2 ...
  return ret

  def f3(self, arg1, globals=None, locals=None):
  ... unique stuff #3 ...
  ... some common stuff #1 ...
  exec cmd in globals, locals
  ... more common stuff #2 ...
  return None

  def f4(self, arg1, globals=None, locals=None):
  ... unique stuff #4 ...
  ... some common stuff #1 ...
  execfile(args, globals, locals)
  ... more stuff #2 ...
  return None



def f1(...):
  "Docstring f1"
  c = C()
  return c.f1(...)

def f2(...):
  "Docstring f2"
  c = C()
  return c.f2(...)

def f3(...):
  "Docstring f3"
  c = C()
  return c.f3(...)


Above there are two kinds of duplication: that within class C and that
outside which creates an instance of the class C and calls the
corresponding method.

For the outside duplication, I considered trying:

_call_internal = lambda name, *args, **kwds: getattr(C(), name)(*args, **kwds)

f1 = lambda arg, arg2=None, globals=None, locals=None: _call_internal('f1', ...)
f1.__doc__ = "Docstring f1"

f2 = lambda arg1, arg2=None, *args, **kwds: _call_internal('f2', ...)
f2.__doc__ = "Docstring f2"

However this strikes me as a little bit cumbersome, and harder
understand and maintain than the straightforward duplicated code. 

Thoughts? 

Lest the above be too abstract, those who want to look at the full
(and redundant) code:

  http://code.google.com/p/pydbg/source/browse/trunk/api/pydbg/api/debugger.py



Re: linecache vs egg - reading line of a file in an egg

2008-12-21 Thread R. Bernstein
Robert Kern robert.k...@gmail.com writes:

 R. Bernstein wrote:
 Does linecache work with source in Python eggs? If not, is it
 contemplated that this is going to be fixed or is there something else
 like linecache that currently works?

 linecache works with eggs and other zipped Python source, but it had
 to extend the API in order to do so. Some of the debuggers don't use
 the extended API. This will be fixed in the next 2.6.x bugfix release,
 but not in 2.5.3.

Ok. I have committed a change in pydb sources to deal with the 2 and 3
argument linecache.getline interface which should cover Python
releases both before and after version 2.5.
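The compatibility shim amounts to something like the following sketch: try
the newer three-argument form and fall back to the two-argument one on
older interpreters (compat_getline is a made-up name for illustration):

```python
import linecache


def compat_getline(filename, lineno, module_globals=None):
    """Call linecache.getline across old and new interpreters: the
    module_globals third argument only exists from Python 2.5 on."""
    try:
        return linecache.getline(filename, lineno, module_globals)
    except TypeError:  # pre-2.5: two-argument signature only
        return linecache.getline(filename, lineno)
```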


 http://bugs.python.org/issue4201

Many thanks! I should have dug deeper myself.

For pdb/bdb though isn't there still a problem in reporting the file
location? There is that weird build name that seems to be stored in
func_code.co_filename mentioned in the original post. 

I just tried patching pdb/bdb along from the current 2.6 svn sources
and here is what I see:

  $ pdb /tmp/lc.py
  > /tmp/lc.py(1)<module>()
  -> import tracer
  (Pdb) s
  --Call--
  > /src/external-vcs/python/build/bdist.linux-i686/egg/tracer.py(6)<module>()

The above filename is wrong. It's very possible I did something wrong,
so I'd be grateful if someone else double checked. 

Furthermore, if there is a problem I'm not sure I see how to fix this. 

I can think of heuristics to tell if module lives an inside an egg,
but is there a reliable way? Is there a standard convention for
reporting a file location inside of an egg? 

Thanks again.


 -- 
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth.
   -- Umberto Eco


linecache vs egg - reading line of a file in an egg

2008-12-20 Thread R. Bernstein
Does linecache work with source in Python eggs? If not, is it
contemplated that this is going to be fixed or is there something else
like linecache that currently works?

Right now, I think pdb and pydb (and probably other debuggers) are
broken when they try to trace into code that is part of an egg.

Here's what I tried recently:

Using this egg:
   http://pypi.python.org/packages/2.5/t/tracer/tracer-0.1.0-py2.5.egg

I install that and look for the filename of one of the
functions. Here's a sample session:

>>> import tracer
>>> tracer
<module 'tracer' from '/usr/lib/python2.5/site-packages/tracer-0.1.0-py2.5.egg/tracer.pyc'>
>>> tracer.size
<function size at 0xb7c39a74>
>>> tracer.size.func_code.co_filename
'build/bdist.linux-i686/egg/tracer.py'
>>> tracer.size.func_code.co_firstlineno
216

To read the source for tracer.py, information from Accessing Package
Resources
(http://peak.telecommunity.com/DevCenter/PythonEggs#accessing-package-resources)
suggests:

>>> from pkg_resources import resource_string
>>> print resource_string('tracer', 'tracer.py')

This gives me one long string which I can split and then index. 

Note that I used tracer.py above, not
build/bdist.linux-i686/egg/tracer.py. How do tracebacks and things that
read frame information deal with the discrepancy?

Before reinventing the wheel and trying to extend linecache to do
something like the above, has someone already come up with a solution?
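Short of extending linecache, the core of a workaround can be sketched with
the stdlib zipfile module, since an egg is just a zip archive
(getline_from_zip is a made-up name, not an existing API):

```python
import zipfile


def getline_from_zip(zippath, inner_path, lineno):
    """Return line `lineno` (1-based) of a source file stored inside a
    zip archive such as an egg, or '' if the line does not exist."""
    with zipfile.ZipFile(zippath) as zf:
        source = zf.read(inner_path).decode("utf-8")
    lines = source.splitlines()
    if 1 <= lineno <= len(lines):
        return lines[lineno - 1]
    return ""
```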

Thanks






Re: Deeper tracebacks?

2008-12-12 Thread R. Bernstein
Gabriel Genellina gagsl-...@yahoo.com.ar writes:
..

 No, last_traceback is the last *printed* traceback in the interactive
 interpreter. 

Well, more precisely, the traceback that is passed to sys.excepthook()
when an unhandled exception occurs, since the hook might decide not
to print anything ;-)

 Use the third element in sys.exc_info() instead:

Hmm...  I'm not sure what I was thinking when I read that way back
when, but you are correct and caught a bug in my code. I really do
need to do better about writing tests. Maybe next
incarnation... Thanks.
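Concretely, the corrected approach looks something like this sketch: take
the traceback from sys.exc_info() inside the handler and report its last
(innermost) entry:

```python
import sys
import traceback


def deepest_location():
    """Return (filename, lineno, funcname) for the innermost frame of
    the exception currently being handled."""
    tb = sys.exc_info()[2]
    filename, lineno, funcname, _text = traceback.extract_tb(tb)[-1]
    return filename, lineno, funcname
```

Called from inside an except: block, this reports the method that
originally raised, not the frame of the handler.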


Re: trace module and doctest

2008-12-11 Thread R. Bernstein
On Nov 22 2007, 4:31 pm, [EMAIL PROTECTED] (John J. Lee) wrote:
 [EMAIL PROTECTED] writes:
  I am trying to use the trace module with the --count flag to test for
  statement coverage in my doctests.  The trace coverage listing is very
  weird, marking many statements as unexecuted which were clearly executed
  during the run (I can see their output on the screen).  Is this some known
  interaction between the trace and doctest modules?
 
 Yes:
 
 http://python-nose.googlecode.com/svn/trunk/nose/plugins/doctests.py
 
 http://svn.zope.org/Zope3/trunk/src/zope/testing/doctest.py?rev=28679...
 
 John

Interesting. Is this problem caused or exacerbated by the fact that
sys.settrace() is a global of sorts?

I've been looking at redoing pdb from scratch, from the bottom up. As
Obama might say in a different context, line by line.

I sort of don't like the global-ness of pdb, or for that matter the
sys.settrace trace hook. So as a start, I wrote a tracer module
(http://code.google.com/p/pytracer/ and on PyPI) which is just a wrapper
around the global trace hook to allow different applications to
register for trace events and not clobber one another.

Possibly in an ideal world, Python would itself have done the trace
hook chaining. (I believe that's the case in Ruby ;-)

But Python 2.6 does now let you *query* whether a trace hook has been
installed, so chaining is now possible, even though tracer doesn't use
the 2.6 sys.gettrace() to chain with hooks installed outside of
routines that use it directly.

I wonder whether this kind of problem would have been avoided if there
had been a wrapper to allow settrace chaining. (And if that were added
-- unlikely as it is to happen -- various applications like tracer,
doctest and pdb could be rewritten to use it.)
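The kind of wrapper meant here can be sketched in a few lines on top of the
2.6 sys.gettrace()/sys.settrace() pair (a simplification: it chains only
the global hooks and ignores the local trace functions a real tracer has to
manage):

```python
import sys


def add_trace_hook(new_hook):
    """Install new_hook without clobbering an already-installed hook."""
    old_hook = sys.gettrace()

    def chained(frame, event, arg):
        if old_hook is not None:
            old_hook(frame, event, arg)  # keep the earlier hook informed
        return new_hook(frame, event, arg)

    sys.settrace(chained)
    return chained
```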


Re: Deeper tracebacks?

2008-12-11 Thread R. Bernstein
brooklineTom [EMAIL PROTECTED] writes:

 I want my exception handler to report the method that originally
 raised an exception, at the deepest level in the call-tree. Let give
 an example.

 import sys, traceback
 class SomeClass:
     def error(self):
         "Raises an AttributeError exception."
         int(3).zork()

     def perform_(self, aSelector):
         try:
             aMethod = getattr(self, aSelector, None)
             answer = apply(aMethod, [], {})
         except AttributeError, anAttributeErrorException:
             aRawStack = traceback.extract_stack()
             answer = None

 When I call perform_ (... SomeClass().perform_('error')), I want to
 collect and report the location *within the method (error) that
 failed*. The above code reports the location of perform_, and no
 deeper in the call tree.

 Anybody know how to accomplish this?

extract_stack() without any arguments gets its information from the
*current frame*, which, as you noted, no longer includes the frames
from the exception -- those have already been popped; the variable
sys.last_traceback has the frames at the time of the exception, I think.

So in your code try changing:
 aRawStack = traceback.extract_stack()
to
 aRawStack = traceback.extract_stack(sys.last_traceback)


Re: sys.settrace 'call' event behavior

2008-12-11 Thread R. Bernstein
On Jun 21, 8:47 am, Michal Kwiatkowski [EMAIL PROTECTED] wrote:
 I'm building a tool to trace all function calls using the sys.settrace
 function from the standard library. One of the awkward behaviors of
 this facility is that the class definitions are reported as 'call'
 events.[1] Since I don't want to catch class definitions, only
 function calls, I'm looking for a way to differentiate between those
 two. So far I have only vague clues about how to do that.
 
 At the bottom of this mail is a simple script that prints all
 attributes (except for the bytecode) of the traced code. In the sample
 code Bar class is defined and foo function is called after that. The
 following trace output is reported:
 
 Bar, 0, 0, (), (), (), (None,), ('__name__', '__module__', 'None'),
 foo.py, 21, , 1, 66
 foo, 0, 0, (), (), (), (None,), (), foo.py, 25, , 1, 67
 
 Class definition and function call differ on four attributes. Two of
 them, co_name and co_firstlineno are not very helpful. Other two are
 co_names and co_flags. The latter differs only by the CO_OPTIMIZED
 flag, which is for internal use only[2]. So we're left with
 co_names, which is a tuple containing the names used by the
 bytecode. Is that helpful in distinguishing between class definitions
 and function calls? Do you have any other ideas on how to tell them
 apart?
 
 Source of the sample script I used follows.
 
 def trace(frame, event, arg):
     if event == 'call':
 print ', '.join(map(str, [frame.f_code.co_name,
   frame.f_code.co_argcount,
   frame.f_code.co_nlocals,
   frame.f_code.co_varnames,
   frame.f_code.co_cellvars,
   frame.f_code.co_freevars,
   frame.f_code.co_consts,
   frame.f_code.co_names,
   frame.f_code.co_filename,
   frame.f_code.co_firstlineno,
   frame.f_code.co_lnotab,
   frame.f_code.co_stacksize,
   frame.f_code.co_flags]))
 return trace
 
 import sys
 sys.settrace(trace)
 
 class Bar(object):
 None
 pass
 
 def foo():
 pass
 
 foo()
 
 [1] It is strange for me, but documented properly.
 http://docs.python.org/lib/debugger-hooks.html says that a call event
 happens when a function is called (or some other code block
 entered).
 
 [2] http://docs.python.org/ref/types.html#l2h-145
 
 Cheers,
 mk

Perhaps you could filter based on the type of the object named by
frame.f_code.co_name, e.g. type(eval(frame.f_code.co_name))?

(Sorry for the delayed reply - I don't generally read the newsgroup
and stumbled across this looking for something else. But I noticed no
replies, so...)
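Another way to tell the two apart, going by the co_flags observation in the
quoted post, is the CO_OPTIMIZED flag itself: function bodies are compiled
with it, while class (and module) bodies are not. A sketch:

```python
CO_OPTIMIZED = 0x0001  # same value inspect.CO_OPTIMIZED exposes


def is_function_call(frame):
    """True for 'call' events generated by real function calls; class
    bodies (and module-level code) lack the CO_OPTIMIZED flag."""
    return bool(frame.f_code.co_flags & CO_OPTIMIZED)
```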


Custom debugger trace hooks

2008-12-11 Thread R. Bernstein
A colleague recently asked this:
 
Is there a cleaner way to dump a trace/track of running a python script. 
 With Pydb I made work-around with
 
   import pydb
   pydb.debugger(dbg_cmds=['bt', 'l', 's']*300 + ['c'])
  
   So now I have a dump of 300 steps with backtraces, so I can easily
   compare two executions of the same script to find where they diverged. I
   think it is a really nice feature.

pydb and pdb inherit from the cmd module which does allow pre- and
post-command hooks. 

Neither pdb nor pydb make it easy to add one's own hook. However

If there's something you want to run before stepping you can do that
by monkey-patching Pdb.precmd:

# <snip>
import pydb
def myprecmd(obj, debug_cmd):
    """Custom hook method executed before issuing a debugger command."""
    global _pydb_trace
    obj.do_list('')
    obj.do_where('10')  # limit stack to at most 10 entries
    return obj.old_precmd(debug_cmd)  # debug_cmd is always 's' in this example

_pydb_trace = pydb.Pdb()
pydb.Pdb.old_precmd = pydb.Pdb.precmd
pydb.Pdb.precmd = myprecmd
pydb.debugger(dbg_cmds=['s'] * 30)
# </snip>


I believe the same is applicable to pdb, although I haven't
investigated.


Re: pydb 1.24

2008-12-11 Thread R. Bernstein
J Kenneth King ja...@agentultra.com writes:

 
 I watched the demo video, look forward to working with it. Any links to
 that emacs front-end being used in the video?
 
 Cheers and thanks!

In short, the emacs code is bundled in with the tarball and should be
installed when you run make install.

However if you install from a Debian distribution, Alex Moskalenko has
done the work to make sure it will automatically be autoloaded from
emacs. And it looks like Manfred Tremmel for SuSE (packman) did so as
well. Mandriva gets kudos for being the first to make a package (RPM)
for version 1.24 -- the day of the release!

I'll drop just a couple more names -- which is really my way to
say thank you to all these kind people. The Emacs code got improved
for Python as the result of some Emacs code and Python patches by
Alberto Griggio. When pydb annotation is set to level 3, what you
see is a bit more sophisticated than what is shown in the demo.

Finally, largely through the efforts of Anders Lindgren, the most
sophisticated integration is in the emacs interface for the Ruby
debugger, ruby-debug. (Although to my mind that code is still little
bit incomplete.)

There's no reason that code couldn't be modified to work for Python as
well, since the command interfaces between the two debuggers are very
much the same. Ideally, common code would be pulled out and could be
used in other gdb-like debuggers as well (kshdb, zshdb, or
bashdb). All it takes is someone to do the work! :-)



pydb 1.24

2008-12-10 Thread R. Bernstein
This release is to clear out some old issues. It contains some
bugfixes, document corrections, and enhancements. Tests were
revised for Python 2.6 and Python without readline installed. A bug
involving invoking from ipython was fixed. The frame command is a
little more like gdb's. Exceptions are now caught in runcall().

This is the last release contemplated before a major rewrite.

download:
https://sourceforge.net/project/showfiles.php?group_id=61395&package_id=175827

bug reports:
https://sourceforge.net/tracker/?group_id=61395&atid=497159


Re: handling uncaught exceptions with pdb?

2008-09-12 Thread R. Bernstein
Paul Rubin http://[EMAIL PROTECTED] writes:

 I think I've asked about this before, but is there a way to set up
 Python to handle uncaught exceptions with pdb?  I know about setting
 sys.except_hook to something that calls pdb, but this is normally done
 at the outer level of a program, and by the time that hook gets
 called, the exception has already unwound the stack to the outermost
 level.  My situation is I run a multi-hour or multi-day computation
 that eventually crashes due to some unexpected input and I'd like to
 break to the debugger at the innermost level, right when the exception
 is encountered, so I can fix the error with pdb commands and resume
 processing.  
...

Why not use the traceback you get to show you where to change the code
around that point to add an exception handler there which calls the
debugger?
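The suggestion amounts to something like this sketch, where
pdb.post_mortem() drops you into the debugger at the innermost frame
(compute_step and guarded are stand-in names for the long computation's
risky step and its wrapper):

```python
import pdb
import sys


def compute_step(x):
    return 1.0 / x  # stands in for the step that hits unexpected input


def guarded(x, debug=True):
    """Run the risky step; on an unexpected error, enter pdb at the
    frame where the exception was raised instead of unwinding."""
    try:
        return compute_step(x)
    except Exception:
        if debug:
            pdb.post_mortem(sys.exc_info()[2])
        raise
```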


Re: handling uncaught exceptions with pdb?

2008-09-12 Thread R. Bernstein
Diez B. Roggisch [EMAIL PROTECTED] writes:

 R. Bernstein schrieb:
 Paul Rubin http://[EMAIL PROTECTED] writes:

 I think I've asked about this before, but is there a way to set up
 Python to handle uncaught exceptions with pdb?  I know about setting
 sys.except_hook to something that calls pdb, but this is normally done
 at the outer level of a program, and by the time that hook gets
 called, the exception has already unwound the stack to the outermost
 level.  My situation is I run a multi-hour or multi-day computation
 that eventually crashes due to some unexpected input and I'd like to
 break to the debugger at the innermost level, right when the exception
 is encountered, so I can fix the error with pdb commands and resume
 processing.  
 ...

 Why not use the traceback you get to show you where to change the code
 around that point to add an exception handler there which calls the
 debugger?

 Because he wants to fix the issue directly in-place, and then
 continue, instead of losing hours or even days of computation.

Then proactively add exception handlers ;-)


Re: pdb bug and questions

2008-09-05 Thread R. Bernstein
castironpi [EMAIL PROTECTED] writes:

 On Sep 4, 4:22 pm, Stef Mientki [EMAIL PROTECTED] wrote:
 hello,

 I'm trying to embed a debugger into an editor.
 I'm only interested in high level debugging.
 The first question is what debugger is the best for my purpose ?
 (pdb, pydb, rpdb2, smart debugger, extended debugger ?

 Second question, in none of the above debuggers (except rpdb2),
 I can find a  break now,
 so it seems impossible to me to detect unlimited while loops ?

 For the moment I started with pdb, because most of the debuggers seems
 to be an extension on pdb.
 When I launch the debugger ( winXP, Python 2.5) from with my editor
   python -u -m pdb  D:\\Data\\test_IDE.py
 I get this error
   IOError: (2, 'No such file or directory', 'D:\\Data\test_IDE.py')
 NOTICE 1 backslash --^

 If I launch the debugger with
   python -u -m pdb  D:/Data/test_IDE.py
 It runs fine.

 This looks like a bug to me.
 What's the best way to report these kind of bugs ?

 Although I mostly use os.path.join to be OS independent,
 these kind of bugs give me the impression,
 that I can better do the join myself and always use forward slashes.
 Is this a valid conclusion ?

 thanks,
 Stef Mientki

 Stef,

 I discovered the same problem too with my editor.  I solved it by
 using only the file name, and setting the initial directory on the
 executable.

I don't know if this helps, but in pydb there is an option to set the
initial directory the debugger works in. Inside the debugger there is
the gdb command "cd".


Re: pdb bug and questions

2008-09-05 Thread R. Bernstein
Stef Mientki [EMAIL PROTECTED] writes:

 hello,

 I'm trying to embed a debugger into an editor.
 I'm only interested in high level debugging.
 The first question is what debugger is the best for my purpose ?
 (pdb, pydb, rpdb2, smart debugger, extended debugger ?

 Second question, in none of the above debuggers (except rpdb2),
 I can find a  break now,
 so it seems impossible to me to detect unlimited while loops ?

I am not sure what you mean by "break now". pdb and pydb allow direct
calls from a program to the debugger via set_trace() (which in pydb is
deprecated in favor of, I think, the more descriptive name "debugger").

But I suspect this is not what you mean by detecting unlimited while
loops; pydb also has gdb-style signal handling that allows for entry
into the debugger when the debugged python process receives a
particular signal. "info handle" lists all of the signals and what
action is to be taken on each. See
http://bashdb.sourceforge.net/pydb/pydb/lib/node38.html

However I believe that signals are only handled by the main thread; so
if that's blocked, the python process won't see the signal.
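A bare-bones version of that gdb-style behavior can be sketched with the
stdlib signal module (POSIX-only here because of SIGUSR1; pydb's real
"info handle" machinery is far more complete):

```python
import pdb
import signal


def install_break_handler(signum=signal.SIGUSR1):
    """Enter pdb when the process receives `signum`; note that only the
    main thread receives signals, as mentioned above."""
    def handler(sig, frame):
        pdb.Pdb().set_trace(frame)
    signal.signal(signum, handler)
    return handler
```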


 For the moment I started with pdb, because most of the debuggers seems
 to be an extension on pdb.
 When I launch the debugger ( winXP, Python 2.5) from with my editor
  python -u -m pdb  D:\\Data\\test_IDE.py
 I get this error
  IOError: (2, 'No such file or directory', 'D:\\Data\test_IDE.py')
 NOTICE 1 backslash --^

 If I launch the debugger with
  python -u -m pdb  D:/Data/test_IDE.py
 It runs fine.

 This looks like a bug to me.
 What's the best way to report these kind of bugs ?

winpdb, pydb and pdb (part of Python) all have Sourceforge projects
which have bug trackers. For pdb, in the past people, including myself,
have reported features, patches and bugs in the Python tracker;
eventually it gets handled. (Eventually in my case means a year or
so.) But if my information is incorrect or out of date, no doubt
someone will correct me.

As mentioned in the last paragraph, pydb also is a Sourceforge project
(part of bashdb) which has a tracker for bug reporting. Using the bug
tracker I think is better than discussing pydb bugs in c.l.p. By
extension, I assume the same is also true for the other debuggers.

Finally, I think rpdb2 is part of the winpdb project on Sourceforge
and again has a bug tracker. My sense is that Nir Aides is very good
about handling bugs reported in winpdb/rpdb.


 Although I mostly use os.path.join to be OS independent,
 these kind of bugs give me the impression,
 that I can better do the join myself and always use forward slashes.
 Is this a valid conclusion ?

 thanks,
 Stef Mientki


Re: pdb bug and questions

2008-09-05 Thread R. Bernstein
Stef Mientki [EMAIL PROTECTED] writes:

 From: Stef Mientki [EMAIL PROTECTED]
 Subject: Re: pdb bug and questions
 Newsgroups: comp.lang.python
 To: python-list@python.org python-list@python.org
 Date: Fri, 05 Sep 2008 22:06:19 +0200

 R. Bernstein wrote:
 Stef Mientki [EMAIL PROTECTED] writes:

   
 hello,

 I'm trying to embed a debugger into an editor.
 I'm only interested in high level debugging.
 The first question is what debugger is the best for my purpose ?
 (pdb, pydb, rpdb2, smart debugger, extended debugger ?

 Second question, in none of the above debuggers (except rpdb2),
 I can find a  break now,
 so it seems impossible to me to detect unlimited while loops ?
 

 I am not sure what you mean by break now. pdb and pydb allow direct
 calls from a program to the debugger via set_trace (which in pydb is
 deprecated in favor of I think the more descriptive name: debugger)
   But I suspect this is not what you mean to detect unlimited while
 loops; pydb also has gdb-style signal handling that allows for entry
 into the debugger when the debugged python process receives a
 particular signal. info handle lists all of the interrupts and what
 action is to be taken on each. See
 http://bashdb.sourceforge.net/pydb/pydb/lib/node38.html

 However I believe that signals are only handled by the main thread; so
 if that's blocked, the python process won't see the signal.
   
 Thanks,
 Yes, I think the trace option can do the job,
 certainly if I display every line.
 Didn't know pdb had something like settrace ( the information on pdb
 is very condensed ;-)

It sounds to me like there may be some confusion -- at least on my
part. pdb's set_trace (with the underscore) is documented here:
http://docs.python.org/lib/module-pdb.html . Yes, perhaps it would be
nice if that document gave an example. But set_trace is a method call,
not an option.
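As a worked example of the related method call runcall (the command strings here are my own, not from the pdb docs), a Pdb session can even be scripted by pointing its stdin/stdout at in-memory streams, which makes the method-call nature of the API easy to see (Python 3 syntax):

```python
import io
import pdb

def buggy():
    x = 5
    return x * 2

# Drive the debugger from a canned command stream instead of a terminal.
cmds = io.StringIO("step\nstep\np x\ncontinue\n")
out = io.StringIO()
dbg = pdb.Pdb(stdin=cmds, stdout=out)
dbg.use_rawinput = False     # read commands from the stream, not readline
result = dbg.runcall(buggy)

print(result)                # 10
```

The transcript collected in out shows the stop locations and the value of x, just as an interactive session would.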

There is a pydb debugger *command* called set trace as well as a
command-line option (-X --trace) which turns on line tracing and is
something totally different. It can be fun to use that with set
linetrace delay in an editor to allow one to watch the lines execute
as the program runs. I did this with emacs in this video:

http://showmedo.com/videos/video?name=pythonBernsteinPydbIntrofromSeriesID=28

   
 For the moment I started with pdb, because most of the debuggers seems
 to be an extension on pdb.
 When I launch the debugger ( winXP, Python 2.5) from with my editor
  python -u -m pdb  D:\\Data\\test_IDE.py
 I get this error
  IOError: (2, 'No such file or directory', 'D:\\Data\test_IDE.py')
 NOTICE 1 backslash --^

 If I launch the debugger with
  python -u -m pdb  D:/Data/test_IDE.py
 It runs fine.

 This looks like a bug to me.
 What's the best way to report these kind of bugs ?
 

 winpdb, pydb and pdb (part of Python) all have Sourceforge projects
 which have bug trackers. For pdb, in the past people, including myself,
 have reported features, patches and bugs in the Python tracker;
 eventually it gets handled. (Eventually in my case means a year or
 so.) But if my information is incorrect or out of date, no doubt
 someone will correct me.
   
 I'll take a look, for the sake of our children ;-)
 As mentioned in the last paragraph, pydb also is a Sourceforge project
 (part of bashdb) which has a tracker for bug reporting. Using the bug
 tracker I think is better than discussing pydb bugs in c.l.p.
 c.l.p. ?

This newsgroup, comp.lang.python. 

  By
 extension, I assume the same is also true for the other debuggers.

 Finally, I think rpdb2 is part of the winpdb project on Sourceforge
 and again has a bug tracker. My sense is that Nir Aides is very good
 about handling bugs reported in winpdb/rpdb.
   
 Yes I started with rpdb2,
 and indeed Nir Aides is very helpful,
 but I think interfacing rpdb2 is a little too difficult for me,

 For now I think pydb is the choice,
 better control and more functions than pdb,
 and almost just as easy.

If there are things that are unnecessarily awkward, and especially if
you can think of a way to make pydb better (that hasn't been suggested
before), please submit a feature request in the tracker for
pydb. Thanks.


Re: Debugging of a long running process under Windows

2008-08-21 Thread R. Bernstein
Tim Golden [EMAIL PROTECTED] writes:

 R. Bernstein wrote:
 I don't know how well Microsoft Windows allows for sending a process a
 signal in Python, but if the Python signal module works reasonably
 well on Microsoft Windows, then reread
 http://bashdb.sourceforge.net/pydb/pydb/lib/node38.html

 The answer is: not terribly well. (Or, rather: in a very
 limited way). You should, however, be able to make use
 of the SetConsoleCtrlHandler win32 api which is exposed
 by pywin32's win32api module, or could be accessed via
 ctypes:

 http://timgolden.me.uk/pywin32-docs/win32api__SetConsoleCtrlHandler_meth.html
 http://msdn.microsoft.com/en-us/library/ms686016(VS.85).aspx

 TJG

Interesting. Yes, it seems limited in that you get CTRL+C or
CTRL+BREAK which seems to map to one of CTRL_CLOSE_EVENT,
CTRL_LOGOFF_EVENT, or CTRL_SHUTDOWN_EVENT signals.  

If someone is interested in hooking this into pydb's signal handling
mechanism, I'll consider adding it in a future release. (If hacking the
configure script to test for the presence of Microsoft Windows or the
win32api is tedious, I can manage doing that part.)


Re: Debugging of a long running process under Windows

2008-08-20 Thread R. Bernstein
I don't know how well Microsoft Windows allows for sending a process a
signal in Python, but if the Python signal module works reasonably
well on Microsoft Windows, then reread
http://bashdb.sourceforge.net/pydb/pydb/lib/node38.html

Should you not want to use or can't use pydb, or want to do this with
sockets, then basically you do the same thing that pydb is doing here,
substituting sockets if you like.
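As a sketch of the "substituting sockets" idea (my own minimal example, not pydb's pydbserver protocol): pdb.Pdb already accepts stdin/stdout streams, so pointing them at a socket's file objects gives a bare-bones remote session. Both ends live in one process here purely for illustration (Python 3 syntax):

```python
import pdb
import socket
import threading

server, client = socket.socketpair()

# Debugger side: read commands from / write output to one end of the pair.
dbg = pdb.Pdb(stdin=server.makefile("r"), stdout=server.makefile("w"))
dbg.use_rawinput = False

def frontend():
    # The "remote user" sends debugger commands over the other end.
    client.sendall(b"step\nstep\np a + 1\ncontinue\n")

t = threading.Thread(target=frontend)
t.start()

def target():
    a = 41
    return a

dbg.runcall(target)          # blocks, reading commands over the socket
t.join()
dbg.stdout.flush()

client.settimeout(2)
output = client.recv(4096)   # the session transcript, prompts included
print(b"(Pdb)" in output)    # True
```

A real server would listen on a TCP port and attach the debugged process's trace hook, but the stream plumbing is the same.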

The basic ideas that were discussed in:

http://groups.google.com/group/comp.unix.shell/browse_thread/thread/87c3f728476ed29d

The discussion was in the context of shell languages, but it is
equally applicable in Python.

Good luck!


Propad [EMAIL PROTECTED] writes:

 Hello,
 I know this issue pops up once in a while, but I haven't found a good
 answer to it. I need to debug a long running application under
 windows. The application is a combined java/python framework for
 testing ECUs in the automotive industry. Basically, the Java GUI
 (Eclipse-based) starts test-cases written in Python and provides the
 console where the test-logs are seen. When there is a exception
 somewhere in the testcases (or the underlying functionallity, also
 written in Python), those are also logged, and then the framework
 usually continues with the next command in the same test case.
 I'd like to have a debugging facillity better than print statements. I
 imagine:
 a) something like a debugger poping up when I get an exception, or b)
 something debugger-like poping up when it reaches a command I entered
 something in the code,
 or c) Me pressing on a button and getting a debugger-like-thing that
 lets me look into the running, possibly halted code.
 I've done some little experiments with the code module, which looks
 nice but does not seem to get over the control from the java-part, and
 with popen2(cmd), which seems not even to work if I start the code
 from a dosbox (the same console is keept), and same thing when strated
 by the Java-App.
 Just to add, using pdb (or pythonwin debugger) seems not to be an
 option, as it makes the test-runs much slower.
 Does somebody have an idea? It seems there used to be a python
 debugger called Pygdb, able to attach to a running application, but
 now it seems it disapeared (now there is a python debugger with the
 same name, linked to ViM).
 Thanx,
 Propad


Re: handling unexpected exceptions in pdb

2008-07-10 Thread R. Bernstein
Simon Bierbaum [EMAIL PROTECTED] writes:

 Hi all,

 I'm in an interactive session in pdb, debugging my code using
 pdb.runcall. Somewhere, an exception is raised and lands uncaught on
 stdout. Is there any way of picking up this exception and at least
 read the full message, or even accessing its stack trace to determine
 where exactly within the one line I just executed it was raised? This
 is where I'm stuck:

 /usr/local/apache2/bin/Model/Database.py(287)disconnect()
 (Pdb) n
 FlushError: FlushErr...perly.,)
 /usr/local/apache2/bin/Model/Database.py(287)disconnect()
 (Pdb) import sys
 (Pdb) sys.last_traceback
 *** AttributeError: 'module' object has no attribute 'last_traceback'

 Thanks, Simon

I think basically you want runcall to be wrapped in a try block. So in pdb.py
instead of:

def runcall(*args, **kwds):
return Pdb().runcall(*args, **kwds)


Change with:

def runcall(*args, **kwds):
    p = Pdb()
    try:
        return p.runcall(*args, **kwds)
    except:
        traceback.print_exc()
        print "Uncaught exception. Entering post mortem debugging"
        t = sys.exc_info()[2]
        p.interaction(t.tb_frame, t)
        print "Post mortem debugger finished."
        return None

Code like this appears near the bottom of the pdb.py file. If that
works, you may want to file a bug Python report to change pdb. Also
note that one may want to do the same thing on run() and runeval()

But also if this does what you want, please file a feature request to pydb:
  http://sourceforge.net/tracker/?func=addgroup_id=61395atid=497162

and I'll probably make it happen in the next release.

This is the first time I've heard of anyone using runcall.



Re: Debuggers

2008-06-17 Thread R. Bernstein
TheSaint [EMAIL PROTECTED] writes:

 On 19:21, venerdì 13 giugno 2008 R. Bernstein wrote:

 I'm not completely sure what you mean, but I gather that in
 post-mortem debugging you'd like to inspect local variables defined at the
 place of error.

 Yes, exactly. This can be seen with pdb, but not pydb.
 If I'm testing a piece of code and it breaks, then I'd like to see the
 variables and find which of them doesn't go as expected.
  
 Python as a language is a little different than say Ruby. In Python
 the handler for the exception is called *after* the stack is unwound

 I'm not doing comparison with other languages. I'm simply curious to know why
 pydb don't keep variables like pdb.

If you are simply curious, then a guess is that the frame from the
traceback (t) isn't set by pydb's post_mortem method in calling the
Pdb interaction() method, whereas it does get set in the corresponding
pdb post_mortem code.

It's possible this is a bug -- it's all in the details. But if it is a
bug or a feature improvement, probably a better place to to request a
change would be in the tracker for the pydb project:

   http://sourceforge.net/tracker/?group_id=61395

Should you go this route, I'd suggest giving the smallest program and
scenario that exhibits the problem but also has a different behavior
in pdb. There are programming interfaces to post-mortem debugging in
pdb and pydb, namely post_mortem() and pm(); so best is a short self
contained program that when run sets up as much of this as possible.

And if the bug is accepted and fixed, such a short self-contained
program would get turned into a test case and included in the set of
regression tests normally run.


 Final, I agreed the idea to restart the debugger when an exception is trow.
 It could be feasible to let reload the file and restart. Some time I can
 re-run the program , as the error is fixed, but sometime pdb doesn't
 recognize the corrections applied.

The restart that I submitted as a patch to pdb two and a half years
ago is what I call a warm restart: no Python code is actually
reloaded. The reason it was done that way is that it is a simple way to
maintain the debugger settings such as breakpoints; also it requires
saving less knowledge about how the program was initially run so it
might be applicable in more contexts.

The down side though, in addition to what you've noticed with regards
to files which have changed, is that global state may be different
going into the program when it is restarted.

In the last pydb release, 1.23, a save command was added so that
breakpoints and other debugger state could be saved across debug
sessions which includes a cold or exec restart.

 I mean that after a post-mortem event, the debugger should forget all
 variables and reload the program, which meanwhile could be debugged.

 -- 
 Mailsweeper Home : http://it.geocities.com/call_me_not_now/index.html


Re: Debuggers

2008-06-13 Thread R. Bernstein
TheSaint [EMAIL PROTECTED] writes:

 Hi,

 while testing my program I found some strange happening with pdb and pydb.

 I like pydb because let me restart the program and nicer features, but if
 errors pop up, then it will forget all variables (globals and locals gone).

I'm not completely sure what you mean, but I gather that in
post-mortem debugging you'd like to inspect local variables defined at the
place of error. For example in this program

  def raise_error():
      x = 5
      raise FloatingPointError

  raise_error()

you'd like to look at x. 

Python as a language is a little different than say Ruby. In Python
the handler for the exception is called *after* the stack is unwound
while in Ruby it is called before. What this means to you is basically
what you reported: that you are not going to be able to see some local
variables after an exception occurs (i.e. in post-mortem debugging),
whether in pydb, pdb, any other debugger, or any Python program you write.

This was mentioned a while back:
http://groups.google.com/group/comp.lang.python/browse_thread/thread/23418f9450c13c2d/b0b1908495dde7bc?lnk=stq=#b0b1908495dde7bc

By the way, although Ruby *does* call the exception handler before the
stack is unwound, there's no way that I know to *clear* the exception
so that you can dynamically handle it. This has a certain legitimacy
since it might be dangerous to continue in some exception and the
state of the interpreter may be inconsistent. For example if I write

  x = 1/0 

or 
  if  1/0  5 :

what value do I use for 1/0? (Far worse is where something like a SEGV
occurs, but I'm not sure that will raise an exception instead of
terminate Python.)

 I've to go for pdb because it isn't affected by that problem, but also in
 some case pdb doesn't recognize a fix after a post-mortem restart. The funny
 thing is that it shows the line corrected, but pdb execute the one it put in
 memory.
 However, I think also python shell has such flaw. I'd like to know how to
 blank all current data and restart a program or re-import a corrected class
 sentence.
 Any other to try?
 I'm also prone to use Ipython, but I still miss some learning how to run a
 program within Ipython itself.
 So if I do:

 import myprogram
 myprogram.__main__

 Will it run? And then the other arguments from CLI, how do I pass them in?
  --
 Mailsweeper Home : http://it.geocities.com/call_me_not_now/index.html


Re: pydb remote debugging/cmd.Cmd over socket?

2008-05-28 Thread R. Bernstein
Diez B. Roggisch [EMAIL PROTECTED] writes:

 Hi,

 I'm fiddling around with pydb. Installation and usage are fine. What I
 especially like is the fact that you can attach a signal such that you drop
 into debugging mode on demand.

 But this is of limited use to me in situations where a server is written in
 python. According to the source, pydb's debugger class Gdb extends cmd.Cmd.

 It passes stdin/stdout-arguments that should be usable to replace the
 standard streams. But so far all my experiments simply dropped the process
 into debugging mode putting out and getting io over stdin/stdout - not my
 self-supplied streams.

 So I wonder (being a bit rusty on my UNIX-piping-skillz): how does one do
 that - essentially, create a remote python shell using cmd.Cmd? 

 Diez

As part of the 2006 Google Summer of Code project Matt Flemming
started working on remote debugging in pydb. Alas it wasn't completed
and I let the code fall through the cracks. 

Matt claimed it worked to some degree but I could never get it to work
for me. Most definitely the code has atrophied.

The user interface was loosely based on, or reminiscent of, gdb. So
you'd run pydbserver either as a command inside the debugger or as an
option to pydb or call inside Python pdbserver after importing. 

There is a connection class (file pydb/connection.rb) which was to
allow different kinds of protocols, like a socket connection or 
a serial line connection, or maybe even two FIFO's so that you could
connect to a different process on the same computer.

And there were some commands on the user-interaction side
to attach or detach the Python program you want to debug.

If you look in pydb.py.in and gdb.py.in you'll see some code commented
out with double hashes which is part of this effort.

I invite you or others to try to resurrect this effort. However as I
look at the code now, it doesn't make much sense other than the broad
outline given above.

Another approach and possibly a much simpler one if you are looking
for a Python debugger which supports remote debugging is Winpdb
http://winpdb.org/ by Nir Aides.




Possibly pydb 1.23 release around May 30

2008-05-21 Thread R. Bernstein
A release of pydb (1.23) is currently scheduled for around May
30th. If you can and want to help make sure the release is okay, you can try 
out what is currently in CVS: 
   http://sourceforge.net/cvs/?group_id=61395

I've put an interim tar at http://bashdb.sf.net/pydb-1.23cvs.tar.gz

Changes in the upcoming release:

* Show parameters on call

* Some doc fixes.

* add ipython and python commands to go into an ipython or a python shell

* add save command to save the current breakpoints and/or 
  most settings to a file. (Suggested at a NYC Python group meeting)

* add set/show deftrace to allow stopping at a def (method creation)
  statement.

* pseudo gdb-like annotate set/show commands and option.

* Better emacs interaction via the above annotate

* Fix a bug when info local is run inside a with statement

  Fixes for the last three items are based on patches by Alberto Griggio



Re: Running an interactive interpreter inside a python

2008-05-15 Thread R. Bernstein
Alan J. Salmoni [EMAIL PROTECTED] writes:
 I'm not sure if this is exactly what you're after, but try looking
 into the 'code' module.

 It's fairly easy to make an interactive interpreter that runs within
 your program. If you import your programs variables into
 __main__.__dict__, you can have access to them which can be funky. You
 can even override the showtraceback method to catch various exceptions
 and do daft things like adding new methods to strings. I guess it
 would even be possible to have the commands compared to a list of
 commands and keywords to build a restricted interpreter, though how
 secure this would be against a determined attack is another matter.

 Alan

I think this (largely) does the trick. Thanks!

I'm not sure yet how to deal with globals, which should come from a
stack frame's f_globals. It might be possible to save and restore
__main__.__dict__ before and after the call to interact(). Probably
would have been cooler to design interact() to take a globals
parameter, same as eval does.
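For what it's worth, the namespace handling can be exercised non-interactively with code.InteractiveConsole, seeding its locals by hand. The merged-dict trick below is my own workaround (Python 3 syntax), roughly what a globals parameter on interact() would make tidy:

```python
import code
import contextlib
import io

y = 42
# Merge the caller's globals and locals into the embedded shell's namespace.
ns = dict(globals())
ns["y"] = y
console = code.InteractiveConsole(locals=ns)

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    console.push("z = y * 2")     # variables set in the shell persist...
    console.push("print(z)")      # ...and the caller's variables are visible

print(buf.getvalue().strip())     # 84
```

After the session, assignments made inside the shell survive in console.locals, satisfying the weaker persistence requirement from the original post.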



 On May 15, 11:31 am, [EMAIL PROTECTED] (R. Bernstein) wrote:
 The next release of pydb will have the ability to go into ipython from
 inside the debugger. Sort of like how in ruby-debug you can go into
 irb :-)

 For ipython, this can be done pretty simply; there is an IPShellEmbed
 method which returns something you can call. But how could one do the
 same for the stock python interactive shell?

 To take this out of the realm of debugging. What you want to do is to
 write a python program that goes into the python interactive shell -
 without having to write your own a read/eval loop and deal with
 readline, continuation lines, etc.

 The solution should also allow
  - variables/methods in the calling PYthon program to be visible
in the shell
  - variables set in the interactive (sub) shell should persist after the 
 shell
terminates, although this is a weaker requirement. POSIX subshells
for example *don't* work this way.

 There has been much written about how to embed Python from C, so I
 suppose this may offer one way. And at worst, I could write
 a C extension which follows how C Python does this for itself.

 But is there a simpler way?

 Thanks.


Re: Running an interactive interpreter inside a python

2008-05-15 Thread R. Bernstein
castironpi [EMAIL PROTECTED] writes:

 On May 15, 6:26 am, [EMAIL PROTECTED] (R. Bernstein) wrote:
 Alan J. Salmoni [EMAIL PROTECTED] writes:

  I'm not sure if this is exactly what you're after, but try looking
  into the 'code' module.

  It's fairly easy to make an interactive interpreter that runs within
  your program. If you import your programs variables into
  __main__.__dict__, you can have access to them which can be funky. You
  can even override the showtraceback method to catch various exceptions
  and do daft things like adding new methods to strings. I guess it
  would even be possible to have the commands compared to a list of
  commands and keywords to build a restricted interpreter, though how
  secure this would be against a determined attack is another matter.

  Alan

 I think this (largely) does the trick. Thanks!

 I'm not sure about how to deal with globals yet which should come from
 a stackframe f_globals. It might be possible to save and restore
 __main__.__dict__ before and after the call to interact(). Probably
 would have been cooler to design interact() to take a globals
 parameter, same as eval does.





  On May 15, 11:31 am, [EMAIL PROTECTED] (R. Bernstein) wrote:
  The next release of pydb will have the ability to go into ipython from
  inside the debugger. Sort of like how in ruby-debug you can go into
  irb :-)

  For ipython, this can be done pretty simply; there is an IPShellEmbed
  method which returns something you can call. But how could one do the
  same for the stock python interactive shell?

  To take this out of the realm of debugging. What you want to do is to
  write a python program that goes into the python interactive shell -
  without having to write your own a read/eval loop and deal with
  readline, continuation lines, etc.

  The solution should also allow
   - variables/methods in the calling PYthon program to be visible
     in the shell
   - variables set in the interactive (sub) shell should persist after the 
  shell
     terminates, although this is a weaker requirement. POSIX subshells
     for example *don't* work this way.

  There has been much written about how to embed Python from C, so I
  suppose this may offer one way. And at worst, I could write
  a C extension which follows how C Python does this for itself.

  But is there a simpler way?

  Thanks.

 One threat is malicious code; sorry for this.

This is a little too cryptic for me. But then if we're talking
security, maybe obfuscation is in order. :-)

If this was meant as an explanation for why interact() only takes a
local parameter and none for global, security concerns are in fact why
you'd want to *add* that parameter: so that the caller could more easily
decide what interact sees as global. I haven't tried implementing a
__main__.__dict__ save, modify and restore around the interact(), but
I'll assume that this could be made to work.

If instead this was about the evils of writing a program that uses
code and calls interact, in general, debuggers and security are at
odds. In a highly secure environment, you probably want to disallow
any trace hook.  I think Ruby uses the global variable $SAFE with a
particular level setting for this purpose.


Running an interactive interpreter inside a python

2008-05-14 Thread R. Bernstein
The next release of pydb will have the ability to go into ipython from
inside the debugger. Sort of like how in ruby-debug you can go into
irb :-)

For ipython, this can be done pretty simply; there is an IPShellEmbed
method which returns something you can call. But how could one do the
same for the stock python interactive shell?

To take this out of the realm of debugging. What you want to do is to
write a python program that goes into the python interactive shell -
without having to write your own a read/eval loop and deal with
readline, continuation lines, etc.

The solution should also allow
 - variables/methods in the calling Python program to be visible
   in the shell
 - variables set in the interactive (sub) shell should persist after the shell 
   terminates, although this is a weaker requirement. POSIX subshells
   for example *don't* work this way.

There has been much written about how to embed Python from C, so I
suppose this may offer one way. And at worst, I could write
a C extension which follows how C Python does this for itself.

But is there a simpler way?

Thanks.





Re: using pdb and catching exception

2007-12-06 Thread R. Bernstein
Just checked to see how Ruby deals with this. Both languages allow one
to register a trace function to catch events like call, line, return,
exception, etc. Ruby, however, registers the event before the raise
takes place.
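In CPython the hook is sys.settrace, and the exception event arrives as the exception propagates through each frame after the raise, which a small Python 3 sketch makes concrete:

```python
import sys

events = []

def tracer(frame, event, arg):
    if event == "exception":
        exc_type, exc_value, tb = arg
        events.append((frame.f_code.co_name, exc_type.__name__))
    return tracer   # keep tracing inside each called frame

def divide():
    return 1 / 0

sys.settrace(tracer)
try:
    divide()
except ZeroDivisionError:
    pass
finally:
    sys.settrace(None)

print(events[0])   # ('divide', 'ZeroDivisionError')
```

This is the same mechanism pdb and pydb build on, which is why a Python debugger only hears about the exception once it is already in flight.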

It might be cool for some good person to go through the process of
making a formal suggestion that this get added, etc. (unless a change like
this is already in the works).

Diez B. Roggisch [EMAIL PROTECTED] writes:

 raise is a statement, not a function. So it won't work.
 
 I do know that e.g. nose allows for dropping into pdb when a test
 fails. Maybe that works by catching the exception top-level, examining
 the stack-trace, setting a break-point, and restarting it.
 
 Diez


Re: how to change current working directory while using pdb within emacs

2007-11-26 Thread R. Bernstein
duyanning [EMAIL PROTECTED] writes:

 I have written a python script that will process a data file in the current
 working directory.
 My script is in an different directory to data file.
 When I debug this script using pdb within emacs, emacs will change the
 current working directory to the directory which include the script,
 so my script cannot find the data file.
 
 I think this is the problem of emacs because when I start pdb from
 console directly, it will not change current working directory to the
 one of script being debugged.
 
 please help me.
 thank you.

pydb (http://bashdb.sf.net/pydb) has an option to set the current
working directory on invocation. For example, from emacs I can run:

M-x pydb --cd=/tmp --annotate=3 ~/python/hanoi.py 3 

And then when I type 
  import os; os.getcwd()

I get:
  '/tmp'

Inside pydb, there is a cd command saving you the trouble of
importing os; the directory command can be used to set a search path
for source files. All the same as you would do in gdb.

Begin digression ---

The --annotate option listed above, is similar to gdb's annotation
mode. It is a very recent addition and is only in CVS. With annotation
mode turned on, the effect is similar to running gdb (gdb-ui.el) in
Emacs 22.  Emacs internal buffers track the state of local variables,
breakpoints and the stack automatically as the code progresses. If the
variable pydb-many-windows is set to true, the each of these buffers
appear in a frame.

For those of you who don't start pydb initially but enter via a call
such as set_trace() or debugger() and do this inside an Emacs comint
shell with pydb-tracking turned on, issue set annotation 3 same as
you would if you were in gdb.



Re: Emacs and pdb after upgrading to Ubuntu Feisty

2007-05-06 Thread R. Bernstein
levander [EMAIL PROTECTED] writes:

 I've been using pdb under emacs on an Ubuntu box to debug python
 programs.  I just upgraded from Ubuntu Edgy to Feisty and this combo
 has stopped working.  Python is at 2.5.1 now, and emacs is at 21.41.1.

If I had to take a guess the big change would be in Python 2.5.1 and
the Emacs pdb package has not kept up with that. Edgy was running
Python 2.4.x. The emacs version is about the same. (And I agree with
Alexander Schmolck that emacs 23 alpha is much much nicer).

If you were to report a problem, my guess then would be the Debian
maintainer for the Emacs pdb package. More info below.

 It used to be I could just M-x pdb RET pdb script-name RET and be
 presented with a prompt where I could debug my script, as well as an
 arrow in another source code buffer indicating where I am in the
 source code.
 
 Now however, when I do M-x pdb RET pdb ~/grabbers/npr-grabber.py -s
 WESUN, I get this is in a buffer called *gud*:
 
 Current directory is /home/levander/grabbers/
 
 No prompt or anything follows it, just that one line.  It doesn't pop
 up an arrow in the other buffer either.  None of the regular commands
 like 'n', 's', or 'l' do anything here.  So, I did a 'Ctrl-C' and got:
 
  /home/levander/grabbers/npr-grabber.py(24)module()
 - 
 (Pdb)  /home/levander/grabbers/npr-grabber.py(30)module()
 - import getopt
 (Pdb) Traceback (most recent call last):
   File /usr/bin/pdb, line 1213, in main
 pdb._runscript(mainpyfile)
   File /usr/bin/pdb, line 1138, in _runscript
 self.run(statement, globals=globals_, locals=locals_)
   File bdb.py, line 366, in run
 exec cmd in globals, locals
   File string, line 1, in module
   File /home/levander/grabbers/npr-grabber.py, line 30, in module
 import getopt
   File /home/levander/grabbers/npr-grabber.py, line 30, in module
 import getopt
   File bdb.py, line 48, in trace_dispatch
 return self.dispatch_line(frame)
   File bdb.py, line 66, in dispatch_line
 self.user_line(frame)
   File /usr/bin/pdb, line 144, in user_line
 self.interaction(frame, None)
   File /usr/bin/pdb, line 187, in interaction
 self.cmdloop()
   File cmd.py, line 130, in cmdloop
 line = raw_input(self.prompt)
 KeyboardInterrupt
 Uncaught exception. Entering post mortem debugging
 Running 'cont' or 'step' will restart the program
  /home/levander/grabbers/cmd.py(151)cmdloop()
 - pass
 (Pdb)
 
 It's wierd because at the bottom of that call stack, it does look like
 it's wating for input, but no input works...  And, after I hit Ctrl-C
 I do get a prompt as you see at the bottom of that listing just
 above.  Now I type quit and get:

Yes, it looks like it is in its input loop reading debugger commands.
If you've tried commands that produce output (e.g. list, where,
print) and you are getting nothing then, yes, that's weird.  Emacs
however will gobble up and hide what it thinks is location information
from the debugger. So if you were running step or next or
continue I could see how output would be disappearing. 

In any event, a guess for some of the problem is that Emacs isn't
parsing the output correctly.

I've noticed subtle changes between Python 2.4 and 2.5 in how the
debugger reports where you are, such as giving a module name when
that's known. The Emacs regular expressions no doubt haven't been
updated to know about this, and matching the location is, and probably
needs to be, a little bit fussy.

 
 Post mortem debugger finished. The /home/cponder/grabbers/npr-
 grabber.py will be restarted
 
 Can anybody tell me how to get pdb working under emacs on Ubuntu
 Feisty?

I haven't tried pdb, but another Alex, Oleksandr Moskale, has a Debian
package for pydb which gets filtered down to Edgy and Feisty, and I've
used that and it works. ;-) And it also works with ipython in the way
that was mentioned too.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help controlling CDROM from python

2007-03-23 Thread R. Bernstein
If you want an OS-neutral way, you may be able to use pycdio from the
Cheese Shop.

It requires libcdio to be installed, and that's sometimes the case if
you have a free media player (like vlc, xine, or mplayer) installed.

I don't really use it all that often so I can't vouch for how good it
is. (Although there *are* regression tests). 

Ognjen Bezanov [EMAIL PROTECTED] writes:

 Hello,
 
 I am trying to control a CD-ROM drive using python. The code I use is
 shown below.
 
  import CDROM
  from fcntl import ioctl
  import os
  import sys  # needed for the sys.exit() calls below
  
  
  class Device:
  
      CDdevice = ""
      CDfd = None
  
      def __init__(self, name):
          self.CDdevice = name  # we get a device name when module loaded
  
          if self.CDdevice == "":
              print "No device specified"
              sys.exit(-1)
          self.openCD()
  
      def openCD(self):
  
          try:
              # open the device and return the file descriptor
              self.CDfd = open(self.CDdevice, 'r')
  
          except (OSError, IOError):
              # an OS or IO Error usually indicates no disk
              print "Device Error, Halting (usually means drive or disk not found)"
              sys.exit(-1)
  
      def unlockCD(self):
          return self.sendCDcommand(CDROM.CDROM_LOCKDOOR, 0)
  
      def ejectCD(self):
          # we need to unlock the CD tray before we try to eject,
          # otherwise we get an IO Error (#5)
          self.unlockCD()
          return self.sendCDcommand(CDROM.CDROMEJECT)
  
      def sendCDcommand(self, command, argument=''):
          return ioctl(self.CDfd, command, argument)
  
 
 
 
 The code that calls the class is a follows:
 
  import CD_Bindings
  
  CD = CD_Bindings.Device("/dev/cdrom")
  
  print CD.ejectCD()
 
 This works great, but only when there is a disk inside, otherwise we get
 an error.
 
 My issue is that I need to be able to eject the CDROM tray even if there
 is no disk inside.
 
 This is possible because other programs (like the linux eject command)
 can do it. Its just a question of how it is done in python. So I'm
 posting here in the hope someone can tell me.
 
 Thanks,
 
 Ognjen
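For what it's worth, the way the command-line eject tool manages an empty
drive is to open the device with O_NONBLOCK, which lets open() succeed even
with no disc present. A rough sketch of the same idea in Python (Linux-only,
untested here; the ioctl numbers are copied from <linux/cdrom.h>, and the
function name is mine):

```python
import os
from fcntl import ioctl

CDROM_LOCKDOOR = 0x5329  # from <linux/cdrom.h>
CDROMEJECT = 0x5309

def eject(device="/dev/cdrom"):
    # O_NONBLOCK lets open() succeed even when no disc is present,
    # which is what the command-line `eject` utility relies on.
    fd = os.open(device, os.O_RDONLY | os.O_NONBLOCK)
    try:
        ioctl(fd, CDROM_LOCKDOOR, 0)  # unlock the tray first
        ioctl(fd, CDROMEJECT)
    finally:
        os.close(fd)
```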
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Debugging SocketServer.ThreadingTCPServer

2007-02-03 Thread R. Bernstein
Stuart D. Gathman [EMAIL PROTECTED] writes:

 On Tue, 16 Jan 2007 09:11:38 -0500, Jean-Paul Calderone wrote:
 
  On Tue, 16 Jan 2007 00:23:35 -0500, Stuart D. Gathman
  [EMAIL PROTECTED] wrote:
 I have a ThreadingTCPServer application (pygossip, part of
 http://sourceforge.net/projects/pymilter).  It mostly runs well, but
 occasionally goes into a loop.  How can I get a stack trace of running
 threads to figure out where the loop is?  Is there some equivalent of
 sending SIGQUIT to Java to get a thread dump?  If needed, I can import
 pdb and set options at startup, but there needs to be some external way
 of triggering the dump since I can't reproduce it at will.

The problem here is that signals are handled only in the main
thread. If that thread is blocked for some reason, your signals will
also be blocked. 

Given that you were considering using pdb, let me suggest instead pydb
- http://bashdb.sf.net/pydb; pdb has no notion of threads but pydb
does if you give it the --threading option. (Its thread support might
stand a bit of improvement, but at least it's there and is as good or
better than any other Python debugger that I am aware of.)
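For anyone who just wants the raw thread dump without a debugger, the stdlib
can produce one; a minimal sketch (not pydb's implementation) built on
sys._current_frames(), which is available since Python 2.5:

```python
import sys
import threading
import traceback

def dump_threads(out=sys.stderr):
    # Map thread ids back to thread names for readability.
    names = {t.ident: t.name for t in threading.enumerate()}
    for ident, frame in sys._current_frames().items():
        out.write("Thread %s (%s):\n" % (ident, names.get(ident, "?")))
        traceback.print_stack(frame, file=out)

if __name__ == "__main__":
    dump_threads()
```

Hook this up to a signal handler and you have a rough equivalent of Java's
SIGQUIT thread dump.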

  
  Grab the gdbinit out of Python SVN Misc/ directory.  Apply this patch:
  
http://jcalderone.livejournal.com/28224.html
  
  Attach to the process using gdb.  Make sure you have debugging symbols
  in your build of Python.  Run 'thread apply all pystack'.
 
 Did this.  gdb displays main thread fine (waiting on accept(), duh).  But
 gdb goes into a loop displaying the first worker thread.  There are no
 extension modules other than the batteries included ones. In this
 application, I believe, only _socket.  (I.e. a pure python server.)
 
 I will try for a C stack trace next time it loops.
 
 Also, the looping server needs kill -9.  SIGTERM and SIGINT won't stop it.
 And after it dies with SIGKILL, the port is still in use for 5 minutes or
 so (and the server cannot be restarted).

See again above why SIGTERM and SIGINT might not necessarily do
anything. But if you get into pydb with thread support, at least you
should be able to determine if you are blocked in the main thread; so
you can know if a SIGTERM or SIGINT will be seen.

And in pydb one can kill specific threads, but you do this at
your own risk because you could cause a deadlock here. As a
convenience, there is a debugger kill command so you don't have to
remember the pid. And that's there because it is really the only
reliable way I know how to terminate the program. There is also a
signal debugger command if you wanted to issue SIGTERM or SIGINT
signals. 

 
 This is really making me appreciate Java.

Note that Java has preemptive threads, Python does not.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pdb in python2.5

2007-01-25 Thread R. Bernstein
Rotem [EMAIL PROTECTED] writes:

 Hi,
 
 Maybe I'm repeating a previous post (please correct me if I am).
 
 I've tried the following code in python 2.5 (r25:51908, Oct  6 2006,
 15:22:41)
 example:
 
 from __future__ import with_statement
 import threading
 
 def f():
     l = threading.Lock()
     with l:
         print "hello"
         raise Exception("error")
     print "world"
 
 try:
     f()
 except:
     import pdb
     pdb.pm()
 
 This fails because pdb.pm() attempts to access sys.last_traceback which
 is not assigned.

Recent releases of pydb (http://bashdb.sf.net/pydb) don't suffer this
problem. (But see below.)

 Trying:
 pdb.post_mortem(sys.exc_traceback)
 
 Yields the following:
  > test.py(9)f()
  -> print "world"
 (Pdb)
 
 the 'w' command yields a similar output, which implies that the
 exception was thrown from the wrong line.
 the traceback module is better, yielding correct results (displays line
 8 instead of 9).
 
 Has anyone encountered this behavior? 

Yes, this seems to be a bug in pdb. It is using the traceback frame's
f_lineno instance variable rather than the tb_lineno instance
variable. I guess these two values are usually the same. In fact, I
haven't been able to come up with a Python 2.4 situation where they
are different. (If someone can, I'd appreciate it if you'd send me a
small example so I can put it in the pydb regression tests.) Even in
2.5, it's kind of hard to get a case where they are different. If I
remove the "with", the problem goes away.

It was also a bug in pydb, but I've just committed the fix for this to
CVS.


 is pdb broken?

Best as I can tell pdb isn't all that well maintained. (Had it been, I
probably wouldn't have devoted the time to pydb that I have been.)

 I get similar results for larger/more complex pieces of code.

 
 Thanks in advance,
 
 Rotem
-- 
http://mail.python.org/mailman/listinfo/python-list


Possible bug in Python 2.5? (Was Re: pdb in python2.5)

2007-01-25 Thread R. Bernstein
I'd like to change my assessment of whether the problem encountered is
a pdb bug or not. It could be a bug in Python. (Right now it is only
known to be a bug in version 2.5.)

For a given traceback t, the question is whether t.tb_frame.f_lineno
can ever be different from t.tb_lineno.

Still, for now in pydb CVS, I've worked around this by checking.
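For reference, the two values being compared can be inspected directly; in a
simple case like the following (my own toy example, modern Python) they
agree at the innermost frame:

```python
import sys

def f():
    raise ValueError("boom")

try:
    f()
except ValueError:
    tb = sys.exc_info()[2]
    # Walk to the innermost traceback entry, where the raise happened.
    while tb.tb_next is not None:
        tb = tb.tb_next
    tb_line = tb.tb_lineno
    frame_line = tb.tb_frame.f_lineno
    print("tb_lineno=%d, tb_frame.f_lineno=%d" % (tb_line, frame_line))
```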

[EMAIL PROTECTED] (R. Bernstein) writes:

 Rotem [EMAIL PROTECTED] writes:
 
  Hi,
  
  Maybe I'm repeating a previous post (please correct me if I am).
  
  I've tried the following code in python 2.5 (r25:51908, Oct  6 2006,
  15:22:41)
  example:
  
  from __future__ import with_statement
  import threading
  
  def f():
      l = threading.Lock()
      with l:
          print "hello"
          raise Exception("error")
      print "world"
  
  try:
      f()
  except:
      import pdb
      pdb.pm()
  
  This fails because pdb.pm() attempts to access sys.last_traceback which
  is not assigned.
 
 Recent releases of pydb (http://bashdb.sf.net/pydb) don't suffer this
 problem. (But see below.)
 
  Trying:
  pdb.post_mortem(sys.exc_traceback)
  
  Yields the following:
   > test.py(9)f()
   -> print "world"
  (Pdb)
  
  the 'w' command yields a similar output, which implies that the
  exception was thrown from the wrong line.
  the traceback module is better, yielding correct results (displays line
  8 instead of 9).
  
  Has anyone encountered this behavior? 
 
 Yes, this seems to be a bug in pdb. It is using the traceback frame's
 f_lineno instance variable rather than the tb_lineno instance
 variable. I guess these two values are usually the same. In fact, I
 haven't been able to come up with a Python 2.4 situation where they
 are different. (If someone can, I'd appreciate it if you'd send me a
 small example so I can put it in the pydb regression tests.) Even in
 2.5, it's kind of hard to get a case where they are different. If I
 remove the "with", the problem goes away.
 
 It was also a bug in pydb, but I've just committed the fix for this to
 CVS.
 
 
  is pdb broken?
 
 Best as I can tell pdb isn't all that well maintained. (Had it been, I
 probably wouldn't have devoted the time to pydb that I have been.)
 
  I get similar results for larger/more complex pieces of code.
 
  
  Thanks in advance,
  
  Rotem
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Possible bug in Python 2.5? (Was Re: pdb in python2.5)

2007-01-25 Thread R. Bernstein
Rotem [EMAIL PROTECTED] writes:

 Hi,
 
 I noticed that pydb.pm() also fails in python2.5 when invoked on that
 same example (seems like also trying to access a nonexistent
 attribute/variable).
 
 Is this known to you as well/was it fixed?

Doesn't do that for me for Python 2.5 on both pydb 1.20 (last released
version) and the current CVS. See below. If you think the problem is
not on your end and want to file a bug report, please do. But note

  * comp.lang.python is not the place to file bug reports
  * more detail is needed than what's been given so far

$ cat /tmp/t1.py
from __future__ import with_statement
import threading

def f():
    l = threading.Lock()
    with l:
        print "hello"
        raise Exception("error")
    print "world"

try:
    f()
except:
    import pydb
    pydb.pm()
$ python /tmp/t1.py
hello
(/tmp/t1.py:9):  f
(Pydb) where
-> 0 f() called from file '/tmp/t1.py' at line 9
## 1 <module>() called from file '/tmp/t1.py' at line 15
(Pydb) show version
pydb version 1.20.
(Pydb) quit
$ python /tmp/t1.py 
hello
(/tmp/t1.py:15):  <module>
(Pydb) where
## 0 f() called from file '/tmp/t1.py' at line 8
-> 1 <module>() called from file '/tmp/t1.py' at line 15
(Pydb) show version
pydb version 1.21cvs.
(Pydb) quit
$ 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tracing the execution of scripts?

2006-10-27 Thread R. Bernstein
pydb (http://bashdb.sf.net/pydb) has both the ability to trace lines
as they are executed and an --output option to have this sent
to a file rather than stdout. If your program has threads it would be
good to use the --threading option. (The best threading support is if
your program uses the threading module for threads rather than the
lower-level thread module.)

A demo of the debugger, including line tracing and output to a file, is
here:
  http://showmedo.com/videos/video?name=pythonBernsteinPydbIntro&fromSeriesID=28
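pydb aside, the stdlib hook this kind of line tracing is built on is
sys.settrace; a minimal tracer that logs every executed line to a file might
look like this (my own sketch, not pydb's code):

```python
import sys

def make_tracer(out):
    # Called by the interpreter for frame events; we log 'line' events
    # as "filename:lineno", one per line executed.
    def tracer(frame, event, arg):
        if event == "line":
            out.write("%s:%d\n" % (frame.f_code.co_filename, frame.f_lineno))
        return tracer
    return tracer

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

with open("trace.log", "w") as log:
    sys.settrace(make_tracer(log))   # trace all frames created from here on
    result = demo()
    sys.settrace(None)               # turn tracing back off
print("demo() returned", result)
```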


Michael B. Trausch mike$#at^nospam!%trauschus writes:

 Alright, I seem to be at a loss for what I am looking for, and I am not
 even really all that sure if it is possible or not.  I found the 'pdb'
 debugger, but I was wondering if there was something that would trace or
 log the order of line execution for a multi-module Python program.  I am
 having a little bit of a problem tracking down a problem that I
 mentioned earlier
 (http://groups.google.com/group/comp.lang.python/msg/9c759fc888b365be),
 and I am hoping that maybe seeing how the statements are executed will
 shed some light on the entire thing for me.
 
 The debugger isn't working, though, because I have a timer set up to
 fire off every 20ms to check for incoming network traffic from the
 server, and that timer firing off makes stepping through with the
 debugger seemingly impossible to get any useful data.
 
 Basically, is there something that will log every line of Python code
 executed, in its order of execution, to a text file so that I can see
 what is (or isn't) happening that I am expecting?  I know that the
 server is sending and receiving properly, because I can connect to it
 with a regular telnet client and it works -- but at the same time,
 everything I can think of to check my program's functionality checks out
 just fine (e.g., it reports that it is successfully sending what I am
 typing to the server, and it says that the server is not sending
 anything back to be read from the socket).
 
 If it makes any difference, I am using wxWidgets' Python bindings for
 the UI code (and the timer), and using Python's socket library for the
 network stuff.
 
   -- Mike
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tracing the execution of scripts?

2006-10-27 Thread R. Bernstein
Fulvio [EMAIL PROTECTED] writes:
 ***
 Your mail has been scanned by InterScan MSS.
 ***
Delighted to know that.

 
 On Friday 27 October 2006 17:31, R. Bernstein wrote:
  pydb (http://bashdb.sf.net/pydb) has both the ability to trace lines
 
 I have faced it several times that pydb gets stuck without any sign of
 errors. On the other hand, pdb doesn't show that problem.
 Mostly pydb freezes on long loops.

In version 1.19 (released today), I extended signal handling so that
you can send the program a signal and it will print a stack trace of
where it is and continue. But even before this release, you could
arrange to enter the debugger by sending it a signal. It's possible
something like this might be used to help track down the problem in
either pydb or another Python program that seems to be not
responding. On the other hand, use at your own risk - I don't
guarantee this will work for you.
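The "arrange to enter the debugger on a signal" idea can be sketched with
nothing but the stdlib; the handler below just prints the current stack and
lets the program keep running (POSIX-only; the choice of SIGUSR1 is mine,
not pydb's):

```python
import signal
import sys
import traceback

def print_stack_handler(signum, frame):
    # Report where the program currently is, then return so it continues.
    sys.stderr.write("Received signal %d; current stack:\n" % signum)
    traceback.print_stack(frame, file=sys.stderr)

# After this, `kill -USR1 <pid>` makes the process print its position.
signal.signal(signal.SIGUSR1, print_stack_handler)
```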

And.. before you ask for more details, I'll repeat what someone else
posted in response to another of your questions:

  I guess you've been told to read this here, but just in case it
  wasn't, or you didn't bother to read it:

  http://catb.org/esr/faqs/smart-questions.html

 It might be some problem on my setup, I'll check it up...

Given your other posts, that's quite possible. If it's not, submit a
bug report. (Posting to c.l.p isn't the same as submitting a bug
report.) Thanks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Debugging

2006-10-23 Thread R. Bernstein
Fulvio [EMAIL PROTECTED] writes:

 In my previous post I might have missed some explanation of my
 procedure. I'd say that I'm testing a small program under pdb control
 (python /usr/lib/python2.4/pdb.py ./myprog.py). So pdb will load
 myprog and stop at the first line of code.
 Once I'm at the pdb command line I can issue the commands available
 inside pdb itself.

Okay. 

There's one other pydb command that might be of interest. I mentioned
the ability to issue debugger commands by giving the commands as a
string in a command-line option, or putting the debugger commands in a
file and giving the name of that file in a command-line option. However
*inside* pydb, one can also run a canned set of debugger commands read
in from a file. This is the source command. Again, this is exactly
analogous to the command of the same name in gdb.

Should you want to somehow build such a debugger script from an
interactive debugging session, set history and set logging might
be helpful.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Debugging

2006-10-21 Thread R. Bernstein
Fulvio [EMAIL PROTECTED] writes:

 ***
 Your mail has been scanned by InterScan MSS.
 ***
 
 
 Hello,
 
 I'm trying out a small utility and my method uses PDB for debugging. I
 tried to read some information regarding the commands of PDB, but they
 are rather terse.

I'm not sure what this means. If you are doing something like an IDE,
that is, calling debugger routines from inside a program, then it
appears that most of them use bdb, which is not documented; but based
on the number of times other programs have used it, it doesn't seem to
be all that insurmountable either.

(I have in mind one day to merge in all of the extensions that
everyone has seemed to add in all of those IDEs, along with the
extensions I've added for pydb, and make that a separate package and
add documentation for all of this.)

pdb does not support options, but pydb (http://bashdb.sf.net/pydb)
does. In pdb, if what you wanted to do was run a debugger script, the
way you could do it is to put the commands in .pdbrc. For pydb the
default file read is .pydbrc, but you can turn that *off* with -nx and
you can specify another file of your own using --command=*filename*.

In pydb, instead of listing commands in a file, it's also possible to
give them on a command line like this

pydb --exec='step 2;;where;;quit' python-script python-script-opt1

(Inside both pdb or pydb, ';;' separates commands)

 Mostly I'd like to understand the use of  condition command in 
 terms to use breakpoints (BP) with functions.

pydb (and to some extent pdb) follow the gdb command concepts. pydb
has more extensive documentation
(http://bashdb.sourceforge.net/pydb/pydb/lib/index.html) but if this
isn't enough I think the corresponding sections from gdb might be of help.

 Simple the first doubt was a counted BP. What variable using that BP and 
 which 
 function can be taken from the command line?

In pydb as with gdb, the first breakpoint has number 1, the next
breakpoint 2. When breakpoints are deleted, the numbers don't get
reused. 

(I think all of this is the case also with pdb, but someone might
check on this; it's possible breakpoints in pdb start from 0 instead
of 1 as is the  case in gdb/pydb.)
-- 
http://mail.python.org/mailman/listinfo/python-list


Set -Werror *inside* Python (Was Re: Can pdb be set to break on warnings?)

2006-10-13 Thread R. Bernstein
This seems like very useful information. In the documentation I've
been maintaining for the extended python debugger
(http://bashdb.sf.net/pydb) I've added this as a little footnote:
http://bashdb.sourceforge.net/pydb/pydb/lib/pydb-invocation.html#foot1113

However, since pydb allows for options of its own, I wonder if there
might not be a way to do this from *inside* a Python
debugger/program. Specifically, so that when an execfile is called, it
is as though -Werror were given initially. Possibly by setting
sys.warnoptions? Anyone know offhand if that or something else will
work?

I'll do the testing myself if someone can give a small Python program
that gives such a warning. (I realize most people contributing to
comp.lang.python write programs flawlessly the first time, so they've
never come across such a warning message either, let alone have need
for a debugger; but this thread suggests that perhaps a person who has
seen a Python warning message exists. :-)

Fredrik Lundh [EMAIL PROTECTED] writes:

 LorcanM wrote:
 
   python -m pdb -Werror myprogram.py
 
  It sounds like what I want, but it doesn't work for me. When I try the
  above line of code, it replies:
 
  Error: -Werror does not exist
 
  I'm running Python 2.4.3
 
 sorry, pilot cut and paste error.  try:
 
 python -Werror -m pdb myprogram.py
 
 (-m script must be the last option before the script arguments, for pretty
 obvious reasons).
 
 /F 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to use pdb?

2006-07-22 Thread R. Bernstein
[EMAIL PROTECTED] writes:

 R. Bernstein wrote:
  Perhaps what you are looking for is:
python /usr/lib/python2.4/pdb.py Myprogram.py
 
 I tried this and it did not work.  pdb did not load the file so it
 could be debugged.

lol. Yes, if you are not in the same directory as Myprogram.py you may
have to be more explicit about where Myprogram.py is. 

Reminds me of the story of the guy who was so lazy that when he got an
award for laziest person alive he said, "roll me over and put it in
my back pocket." :-)

Glad you were able to solve your problem though.

For other similarly confused folks I have updated pydb's
documentation (in CVS):

  In contrast to running a program from a shell (or gdb), no path
  searching is performed on the python-script. Therefore python-script
  should be explicit enough (include relative or absolute file paths)
  so that the debugger can read it as a file name. Similarly, the
  location of the Python interpreter used for the script will not
  necessarily be the one specified in the magic field (the first line
  of the file), but will be the Python interpreter that the debugger
  specifies. (In most cases they'll be the same and/or it won't
  matter.)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to use pdb?

2006-07-21 Thread R. Bernstein
[EMAIL PROTECTED] writes:

 I am trying to figure out how to use the pdb module to debug Python
 programs, and I'm having difficulties.  I am pretty familiar with GDB
 which seems to be similar, 

If you are pretty familiar with gdb, try
http://bashdb.sourceforge.net/pydb. It is a great deal more similar. ;-)


 however I cannot even get the Python
 debugger to break into a program so I can step through it.  I wrote a
 simple program and below is an example of the steps I tried, as well as
 the results I got:
 
  >>> import pdb
  >>> import MyProgram
  >>> pdb.run('MyProgram.mainFunction')
  > <string>(1)?()
 (Pdb) b MyProgram.MyClass.method
 Breakpoint 1 at MyProgram.py:8
 (Pdb) cont
 
 
 The program should have printed a message to the screen when execution
 was continued, and it should have hit the breakpoint during that
 execution.  Instead the debugger exitted to the main prompt.
 
 What is the reason the dubugger behaves like this?

I think you misunderstand what pdb.run is supposed to do. It's not at
all like gdb's run command. In fact pdb doesn't even have a
debugger *command* called run, although it does have a top-level
function called run.

pdb.run is evaluating MyProgram.mainFunction the same as if you were
to type it at Python's >>> prompt.

Except it will call the debugger control WHEN IT ENCOUNTERS THE FIRST
STATEMENT. I think if you were to issue MyProgram.mainFunction at the
Python prompt you'd get back something like: <function mainFunction at
0xb7f5c304>.

It wouldn't *call* the routine. To call it, you'd type something like
Myprogram.mainFunction()

But before you try to issue pdb.run('Myprogram.mainFunction()'), read
on. The way to use pdb.run is to insert it somewhere in your program
like MyProgram and have the debugger kick in, not something you issue
from a Python prompt.
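To make the distinction concrete, here is pdb.run with the call parentheses
in place. The commands are fed from a canned script so the example runs
unattended; main_function is a stand-in for the poster's
MyProgram.mainFunction:

```python
import io
import pdb

def main_function():
    x = 1 + 1
    return x

# pdb.run() takes a *statement string*; note the parentheses that make
# it an actual call.  Commands come from `script` instead of a terminal.
script = io.StringIO("step\nwhere\ncontinue\n")
output = io.StringIO()
debugger = pdb.Pdb(stdin=script, stdout=output, readrc=False)
debugger.use_rawinput = False  # read commands from our StringIO
debugger.run("main_function()", globals={"main_function": main_function})
print(output.getvalue().splitlines()[0])
```

The first stop is at line 1 of the synthetic `<string>` statement; "step"
then enters main_function, exactly as a breakpoint-and-continue session
would.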

 What is the correct way to accomplish what I am trying to do?
 
 I have tried this process on Python 2.3.5 and 2.4.3 with the same
 results on each version.

Perhaps what you are looking for is:
  python /usr/lib/python2.4/pdb.py Myprogram.py

(Substitute the correct path name for pdb.py.)

If you have pydb installed it adds a symbolic link to pydb.py. So here
you should be able to issue:

pydb Myprogram.py



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: automatic debugger?

2006-07-15 Thread R. Bernstein
[EMAIL PROTECTED] writes:

 hi
 is there something like an automatic debugger module available in
 python? Say if i enable this auto debugger, it is able to run thru the
 whole python program, print variable values at each point, or print
 calls to functions..etc...just like the pdb module, but now it's
 automatic.
 thanks

pydb (http://bashdb.sourceforge.net) has a linetrace option sort of
like what's done in POSIX shells. 

Here's an example:

#!/usr/bin/python
"""Towers of Hanoi"""
import sys,os

def hanoi(n,a,b,c):
    if n-1 > 0:
        hanoi(n-1, a, c, b) 
    print "Move disk %s to %s" % (a, b)
    if n-1 > 0:
        hanoi(n-1, c, b, a) 

i_args=len(sys.argv)
if i_args != 1 and i_args != 2:
    print "*** Need number of disks or no parameter"
    sys.exit(1)

n=3

if i_args > 1:
    try: 
        n = int(sys.argv[1])
    except ValueError, msg:
        print "** Expecting an integer, got: %s" % repr(sys.argv[1])
        sys.exit(2)

if n < 1 or n > 100: 
    print "*** number of disks should be between 1 and 100" 
    sys.exit(2)

hanoi(n, "a", "b", "c")


$ pydb --basename --trace hanoi.py
(hanoi.py:2): <module>
+ """Towers of Hanoi"""
(hanoi.py:3): <module>
+ import sys,os
(hanoi.py:5): <module>
+ def hanoi(n,a,b,c):
(hanoi.py:12): <module>
+ i_args=len(sys.argv)
(hanoi.py:13): <module>
+ if i_args != 1 and i_args != 2:
(hanoi.py:17): <module>
+ n=3
(hanoi.py:19): <module>
+ if i_args > 1:
(hanoi.py:26): <module>
+ if n < 1 or n > 100: 
(hanoi.py:30): <module>
+ hanoi(n, "a", "b", "c")
--Call--
(hanoi.py:5):  hanoi
+ def hanoi(n,a,b,c):
(hanoi.py:6):  hanoi
+ if n-1 > 0:
(hanoi.py:7):  hanoi
+    hanoi(n-1, a, c, b) 
--Call--
(hanoi.py:5):  hanoi
+ def hanoi(n,a,b,c):
(hanoi.py:6):  hanoi
+ if n-1 > 0:
(hanoi.py:7):  hanoi
+    hanoi(n-1, a, c, b) 
--Call--
(hanoi.py:5):  hanoi
+ def hanoi(n,a,b,c):
(hanoi.py:6):  hanoi
+ if n-1 > 0:
(hanoi.py:8):  hanoi
+ print "Move disk %s to %s" % (a, b)
Move disk a to b
(hanoi.py:9):  hanoi
+ if n-1 > 0:
--Return--
(hanoi.py:9):  hanoi
+ if n-1 > 0:
(hanoi.py:8):  hanoi
+ print "Move disk %s to %s" % (a, b)
Move disk a to c
(hanoi.py:9):  hanoi
+ if n-1 > 0:
(hanoi.py:10):  hanoi
+    hanoi(n-1, c, b, a) 
--Call--
(hanoi.py:5):  hanoi
+ def hanoi(n,a,b,c):
(hanoi.py:6):  hanoi
+ if n-1 > 0:
(hanoi.py:8):  hanoi
+ print "Move disk %s to %s" % (a, b)
Move disk b to c
(hanoi.py:9):  hanoi
+ if n-1 > 0:
--Return--
(hanoi.py:9):  hanoi
+ if n-1 > 0:
--Return--
(hanoi.py:10):  hanoi
+    hanoi(n-1, c, b, a) 
(hanoi.py:8):  hanoi
+ print "Move disk %s to %s" % (a, b)
Move disk a to b
(hanoi.py:9):  hanoi
+ if n-1 > 0:
(hanoi.py:10):  hanoi
+    hanoi(n-1, c, b, a) 
--Call--
(hanoi.py:5):  hanoi
+ def hanoi(n,a,b,c):
(hanoi.py:6):  hanoi
+ if n-1 > 0:
(hanoi.py:7):  hanoi
+    hanoi(n-1, a, c, b) 
--Call--
(hanoi.py:5):  hanoi
+ def hanoi(n,a,b,c):
(hanoi.py:6):  hanoi
+ if n-1 > 0:
(hanoi.py:8):  hanoi
+ print "Move disk %s to %s" % (a, b)
Move disk c to a
(hanoi.py:9):  hanoi
+ if n-1 > 0:
--Return--
(hanoi.py:9):  hanoi
+ if n-1 > 0:
(hanoi.py:8):  hanoi
+ print "Move disk %s to %s" % (a, b)
Move disk c to b
(hanoi.py:9):  hanoi
+ if n-1 > 0:
(hanoi.py:10):  hanoi
+    hanoi(n-1, c, b, a) 
--Call--
(hanoi.py:5):  hanoi
+ def hanoi(n,a,b,c):
(hanoi.py:6):  hanoi
+ if n-1 > 0:
(hanoi.py:8):  hanoi
+ print "Move disk %s to %s" % (a, b)
Move disk a to b
(hanoi.py:9):  hanoi
+ if n-1 > 0:
--Return--
(hanoi.py:9):  hanoi
+ if n-1 > 0:
--Return--
(hanoi.py:10):  hanoi
+    hanoi(n-1, c, b, a) 
--Return--
(hanoi.py:10):  hanoi
+    hanoi(n-1, c, b, a) 
--Return--
(hanoi.py:30): <module>
+ hanoi(n, "a", "b", "c")
--Return--
(<string>:1): <module>
+ (bdb.py:366): run
+ exec cmd in globals, locals
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: continue out of a loop in pdb

2006-05-23 Thread R. Bernstein
Gary Wessle [EMAIL PROTECTED] writes:

 Hi
 
 using the debugger, I happen to be on a line inside a loop; after
 looping a few times with n and wanting to get out of the loop to the
 next line, I set a break point on a line after the loop structure and
 hit c. That does not continue out of the loop and stop at the break
 line. How is it done? I read the ref docs on pdb but could not figure
 it out.

The command you are probably looking for is jump. 

http://bashdb.sourceforge.net/pydb/pydb/lib/subsubsection-resume.html

It is also documented in the stock python debugger
http://docs.python.org/lib/debugger-commands.html

Here's an example:

pdb ~/python/ptest.py
> /home/rocky/python/ptest.py(2)?()
-> for i in range(1,10):
(Pdb) step
> /home/rocky/python/ptest.py(3)?()
-> print i
(Pdb) list
  1     #!/bin/python
  2     for i in range(1,10):
  3  ->     print i
  4     print "tired of this"
[EOF]
(Pdb) jump 4
> /home/rocky/python/ptest.py(4)?()
-> print "tired of this"
(Pdb) 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: continue out of a loop in pdb

2006-05-23 Thread R. Bernstein
Diez B. Roggisch [EMAIL PROTECTED] writes:

  the code works with no problem, I am playing around with the pdb, i.e
  
  from pdb import *
  set_trace()
  for i in range(1,50):
      print i
  print "tired of this"
  print "I am out"
  
  [EMAIL PROTECTED]:~/python/practic$ python practic.py
  > /home/fred/python/practic/practic.py(4)?()
  -> for i in range(1,50):
  (Pdb) n
  > /home/fred/python/practic/practic.py(5)?()
  -> print i
  (Pdb) n
  1
  > /home/fred/python/practic/practic.py(4)?()
  -> for i in range(1,50):
  (Pdb) b 6
  Breakpoint 1 at /home/fred/python/practic/practic.py:6
  (Pdb) c
  > /home/fred/python/practic/practic.py(5)?()
  -> print i   <== I expected print "tired of this"
  (Pdb)
 
 
 In TFM it says that set_trace() puts a breakpoint in the current
 frame. I admit that I also wouldn't read that as each and every
 instruction in this very frame, but that is what essentially
 happens. I think the docs could use some enhancement here. Try
 debugging a called function; there things will work as expected.

Let me try to explain my understanding here. set_trace() merely tells
the Python interpreter to call the debugger dispatcher after the call
to set_trace() finishes. In effect this is at the next statement of
your program, because the caller frame here is always the set_trace()
call you put in your program. Okay, so now we've called this debugger
dispatcher thing, and what does that do? Well, it accepts commands
from you, like next or step or continue.

In the case above, Python was told to make that set-trace call 50
times.

But yes, I agree the wording in the pdb (and pydb) manuals is
written too much from the view of someone writing a debugger rather
than someone using it. If you or someone else wants to make a
suggestion as to how to make the description of set_trace() more
user-friendly (and I don't think the above explanation succeeds), I'll
put it in the next release of pydb (http://bashdb.sourceforge.net/pydb),
which probably will be in the not-too-distant future.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: continue out of a loop in pdb

2006-05-23 Thread R. Bernstein
Here's the revision I just made for pydb's documentation (in
CVS). I welcome suggestions for improvement.


set_trace([cmdfile=None])

Enter the debugger before the statement which follows (in
execution) the set_trace() statement. This hard-codes a call to
the debugger at a given point in a program, even if the code is
not otherwise being debugged. For example you might want to do
this when an assertion fails.  

It is useful in a couple of other situations. First, there may be
some problem in getting the debugger to stop at this particular
place for whatever reason (like flakiness in the
debugger). Alternatively, using the debugger and setting a
breakpoint can slow down a program a bit. But if you use this
instead, the code will run as though the debugger is not present
until you reach this point in the program.

When the debugger is quitting, this causes the program to be
terminated. If you want the program to continue instead, use the
debugger function.
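A scripted illustration of that timing (the compute function and the canned
command stream are mine; interactively you would just call pdb.set_trace()):

```python
import io
import pdb

def compute(x):
    y = x * 2
    # Debugger control begins at the *next* statement after set_trace().
    debugger.set_trace()
    return y + 1

# A non-interactive session: commands come from `script`, output is
# captured, so the example can run unattended.
script = io.StringIO("p y\ncontinue\n")
output = io.StringIO()
debugger = pdb.Pdb(stdin=script, stdout=output, readrc=False)
debugger.use_rawinput = False
print(compute(5))
```

The "p y" in the script runs while the program is stopped on the
`return y + 1` line, which is exactly the "next statement" behavior
described above.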



 Diez B. Roggisch [EMAIL PROTECTED] writes:
 
   the code works with no problem, I am playing around with the pdb, i.e
   
   from pdb import *
   set_trace()
   for i in range(1,50):
       print i
   print "tired of this"
   print "I am out"
   
   [EMAIL PROTECTED]:~/python/practic$ python practic.py
   /home/fred/python/practic/practic.py(4)?()
   - for i in range(1,50):
   (Pdb) n
   /home/fred/python/practic/practic.py(5)?()
   - print i
   (Pdb) n
   1
   /home/fred/python/practic/practic.py(4)?()
   - for i in range(1,50):
   (Pdb) b 6
   Breakpoint 1 at /home/fred/python/practic/practic.py:6
   (Pdb) c
   /home/fred/python/practic/practic.py(5)?()
   - print i  (I expected: print "tired of this")
   (Pdb)
  
  
  In TFM it says that set_trace() puts a breakpoint to the current
  frame. I admit that I also wouldn't read that as each and every
  instruction in this very frame, but that is what essentially
  happens. I think the docs could need some enhancement here. Try
  debugging a called function, there things will work as expected.
 
 Let me try to explain my understanding here. set_trace() merely tells
 the Python interpreter to call the debugger dispatcher after the call
 to set_trace() finishes. In effect this is at the next statement of
 your program because the caller frame here is always the set_trace()
 call you put in your program. Okay, so now we've called this debugger
 dispatcher thing and what does that do? Well, it accepts commands
 from you, like "next" or "step" or "continue".
 
 In the case above, Python was told to make that set-trace call 50
 times.
 
 But yes, I agree the wording in the pdb (and pydb) manuals is
 written too much from the view of someone writing a debugger rather than
 someone using it.  If you or someone else wants to make a suggestion as
 to how to make the description of set_trace() more user friendly (and I
 don't think the above explanation succeeds), I'll put it in the next
 release of pydb (http://bashdb.sourceforge.net/pydb) which probably
 will be in the not-too distant future.
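The dispatcher mechanism described above can be illustrated with sys.settrace, the low-level hook that set_trace-style calls ultimately build on. This is a minimal sketch, not pydb or pdb code; the function and variable names are made up for illustration:

```python
import sys

executed_lines = []

def dispatcher(frame, event, arg):
    # The interpreter calls this for events in traced frames, much as
    # a debugger's dispatcher is called; here we just record 'line'
    # events instead of prompting for commands.
    if event == "line":
        executed_lines.append(frame.f_lineno)
    return dispatcher  # keep tracing this frame

def loop():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(dispatcher)   # comparable to what set_trace() arranges
result = loop()
sys.settrace(None)         # stop tracing

assert result == 3
assert len(executed_lines) >= 3  # the hook fired once per executed line
```

This is why the dispatcher appears to run "50 times" in the loop example: the trace hook fires before every statement in the traced frame.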


Re: debug CGI with complex forms

2006-04-08 Thread R. Bernstein
Sullivan WxPyQtKinter [EMAIL PROTECTED] writes:

 When the form in one HTML page is very complex with a lot of fields (input,
 button, radio, checkbox etc.), setting up the environment is quite
 burdensome, so I usually change the stdout and stderr of the submit
 processing script to a file object to see the output directly from that
 file. This works, but it is still inconvenient.
 Does anyone have more suggestions?
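The stdout swap described in the question can be sketched like this. It is a generic illustration using today's contextlib helper (in 2006 one assigned sys.stdout directly); the printed CGI output is made up:

```python
import io
from contextlib import redirect_stdout

# Capture what the form-processing code prints, so it can be
# inspected from a file-like object instead of the real stdout.
buf = io.StringIO()
with redirect_stdout(buf):
    print("Content-type: text/html")
    print()
    print("<p>form processed</p>")

captured = buf.getvalue()
assert captured.startswith("Content-type: text/html")
```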

The extended Python debugger (http://bashdb.sourceforge.net/pydb) has
the ability to give POSIX-style line tracing, and to change the debugger
stdout/stderr to a file you direct it. For example,
   pydb --trace --output=/tmp/dbg.log --error=/tmp/error.log my-cgi

Rather than trace the entire CGI, if there is a specific portion that
you want traced, that can be done too by changing the CGI. Create
files called /tmp/trace-on and /tmp/trace-off. /tmp/trace-on might
contain this:

  # Issue debugger commands to set debugger output to 
  # go to a file, turn on line tracing, and continue
  set logging file /tmp/dbg.log
  # uncomment the following line if you want to wipe out old dbg.log
  # set logging overwrite on
  set logging redirect on
  set logging on
  set linetrace on
  continue

and /tmp/trace-off:

  # Issue debugger commands to turn off line tracing and continue
  set linetrace off
  continue

In your CGI (in the above command-line example the CGI was called
my-cgi) add:

   import pydb   # needs to be done only once
   ...
   pydb.set_trace("/tmp/trace-on")
   # This and subsequent lines will be traced
   ...
   pydb.set_trace("/tmp/trace-off")

Of course modify the file names in the examples above, my-cgi,
/tmp/dbg.log /tmp/trace-on, /tmp/trace-off as appropriate.

I notice that line tracing at present doesn't show the text of traced
lines, just the position (file name, line number, and possibly
function/method name). I will probably change this pretty soon in CVS
though.

Finally, I don't know if any of this will help, but I guess if it
doesn't it would be interesting to understand how it fails. 

(If my past experience in c.l.p holds here, if nothing else, my having
posted this will at least motivate someone else to give other
method(s). ;-)






Re: advice on this little script

2006-04-08 Thread R. Bernstein
BartlebyScrivener [EMAIL PROTECTED] writes:

 There are several of these writing quotes, all good in their own way,

And from Hamlet: brevity is the soul of wit.


[ANN] pycdio 0.11

2006-03-31 Thread R. Bernstein
pycdio is an OO Python interface to libcdio.  The libcdio package
contains a library for CD-ROM and CD image access. Applications
wishing to be oblivious of the OS- and device-dependent properties of
a CD-ROM or of the specific details of various CD-image formats may
benefit from using this library. 

In this release, a library for working with ISO 9660 filesystems or
filesystem images was added. A number of bugs have been fixed and
issues in compiling on a number of platforms have been addressed. SWIG
is no longer required to build. Anyone who was using the previous
release should upgrade to this one.

ftp://ftp.gnu.org:/pub/gnu/libcdio/pycdio-0.11.tar.gz


Re: How to debug python code?

2006-03-31 Thread R. Bernstein
[EMAIL PROTECTED] writes:

 hi,
    I am new to Python programming. I am not exactly getting pdb. Can
 anyone tell me an effective way to debug python code?
    Please give me an example.
    Looking for a response.
    Thank You.
 Sushant

Well, I guess (in addition to the other good suggestions in this
thread) this is an obvious place to plug pydb
http://bashdb.sourceforge.net/pydb

If you are using pdb, I think you'll find pydb, um, better.


Re: How to debug python code?

2006-03-31 Thread R. Bernstein
Dennis Lee Bieber [EMAIL PROTECTED] writes:

 On 30 Mar 2006 21:18:50 -0800, [EMAIL PROTECTED] declaimed the
 following in comp.lang.python:
 
   hi,
  I am new to Python programming. I am not exactly getting pdb. Can
   anyone tell me an effective way to debug python code?
 
   I think I toyed with pdb back around 1993... Never needed it...
 
   Of course, with so many different languages and debuggers in my
 life, I've never found time to master any but the old VMS debugger
 (which is nothing more than a very complex error handler <G>)

That's one reason why in my bash and GNU make debuggers (and in
extending pdb), I've stuck to the gdb command set: the effort that is
spent mastering gdb can be transferred to the GNU Make, Bash, *AND*
Python debuggers.


ANN: Extended Python debugger 1.14

2006-02-28 Thread R. Bernstein
Download from
http://sourceforge.net/project/showfiles.php?group_id=61395&package_id=175827

On-line documentation is at
http://bashdb.sourceforge.net/pydb/pydb/lib/index.html

Changes since 1.12
* Add MAN page (from Debian)
* Bump revision from 0.12 to 1.13 to be compatible with the Debian pydb package.
* Improve set_trace(), post_mortem(), and pm()
* Document alternative invocation methods and how to call from a program.
* Break out cmds and bdb overrides into a separate module
* add optional count on where
* install in our own directory (e.g. site-packages/pydb)
  NOTE: REMOVE OLD PYDB INSTALLATIONS BEFORE INSTALLING THIS PYDB.
* As a result of the above add __init__.py to give package info.
* Don't rely on tabs settings. Remove all tabs in programs and use -t 
  to check that.
* Various bug fixes and documentation corrections
* Fix bugs in configuration management. Make distcheck works again
  (sometimes)


Re: using breakpoints in a normal interactive session

2006-02-23 Thread R. Bernstein
In revising pydb's code and documentation for the routine originally
described, I learned that the pdb equivalent (sort of) is called
set_trace().

However set_trace() will terminate the program when you quit the
debugger, so I've retained this routine and made a couple of
corrections -- in particular to support a restart and make show args
work. The changes are in pydb's CVS. Lacking a better name, the
routine is called debugger.  

There is one other difference between set_trace() and debugger().  With
set_trace() you stop at the statement following the set_trace() call. With
debugger(), the call trace shows you in the debugger, and you may need to
switch to the next most-recent call frame to get info about the
program being debugged.

A downside of the debugger() approach is that debug-session
information can't be saved between calls: each call is a new instance
of the debugger, and when it is left via "quit" the instance is
destroyed. (In the case of pydb.set_trace() the issue never comes up
because the program is terminated on exit.)

[EMAIL PROTECTED] (R. Bernstein) writes:

 Here's what I was able to do using the Extended Python debugger.
 http://bashdb.sourceforge.net/pydb/. 


Re: using breakpoints in a normal interactive session

2006-02-22 Thread R. Bernstein
[EMAIL PROTECTED] writes:

 Is there a way to temporarily halt execution of a script (without using
 a debugger) and have it put you in an interactive session where you
 have access to the locals?  

Here's what I was able to do using the Extended Python debugger.
http://bashdb.sourceforge.net/pydb/. I'm sure there's a similar (if
not even simpler) way to do this in the stock debugger; but I'll let
others suggest that ;-)

First add this routine:

def call_debugger():
    from pydbdisp import Display, DisplayNode
    import pydb, inspect, sys
    try:
        raise Exception
    except:
        frame = inspect.currentframe()
        p = pydb.Pdb()
        p.reset()
        p.display = Display()
        p._program_sys_argv = list(sys.argv)
        p.interaction(frame, sys.exc_traceback)

And then call it from your program as you indicated below for
magic_breakpoint(). For magic_resume() just issue "quit",
"continue", or give a terminal EOF.

That the above routine is so long suggests some initialization
probably should be moved around. And in a future release (if there is
one), I'll consider adding something like the above routine.


And possibly resume?  For example:
 
  >>> def a():
  ...   x = 1
  ...   magic_breakpoint()
  ...   y = 1
  ...   print "got here"
  ...
  >>> a()
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "<stdin>", line 3, in a
   File "<stdin>", line 2, in magic_breakpoint
  >>> x
 1
  >>> y
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 NameError: name 'y' is not defined
  >>> magic_resume()
 got here
  >>> x
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 NameError: name 'x' is not defined


ANN: Extended Python debugger 0.12

2006-02-21 Thread R. Bernstein
This third release of an improved debugger is also probably about as
great as the last release.

Download from
http://sourceforge.net/project/showfiles.php?group_id=61395&package_id=175827

On-line documentation is at
http://bashdb.sourceforge.net/pydb/pydb/lib/index.html

Along with this release is a version of ddd (3.3.12-test3) that
interfaces with this debugger. Get that at:
http://sourceforge.net/project/showfiles.php?group_id=61395&package_id=65341

- - - - - - 
From the NEWS file.

* Add gdb commands: 
- cd command
- display, undisplay
- help subcommands for show, info, and set
- info: display, line source and program
- pwd command
- return (early exit from a function)
- shell command
- Extend info line to allow a function name.

* Use inspect module. Argument parameters and values are now shown.

* Break out debugger into more files 

* File location reporting changed. It is now like mdb, bashdb, or perldb

* Changes to make more compatible with ddd.

* Doc fixes, add more GNU Emacs debugger commands

* clear command now accepts a function name

* Bugfixes:
  - allow debugged program to mangle sys.argv (we save our own copy)





ANN: (slightly) extended Python debugger 0.11

2006-01-29 Thread R. Bernstein
The second public release of the extended Python debugger is now
available from sourceforge:

http://sourceforge.net/project/showfiles.php?group_id=61395&package_id=175827

For this release documentation has been added. That is also available
online at: http://bashdb.sourceforge.net/pydb/pydb/lib/index.html

A lot of important changes were made, including changes that will
facilitate growth.

The ability to do non-interactive POSIX-shell line tracing was added,
and debugger output can be redirected. This should help out in
situations such as CGI's where there is no terminal or interactivity
is not possible.

Regression tests were added to ensure quality, and lo, bugs
were found. Debugger command options were added.

More information can be found in the NEWS file that comes with the
distribution.

Given the feedback on the pdb.py debugger (i.e. it's just not used
all that often), compatibility with it has been deprecated where it
conflicts with other debuggers.

I've only tested this on GNU/Linux, OSX, Solaris and cygwin using
Python versions 2.3.5 and 2.4.2.


Re: How to handle two-level option processing with optparse

2006-01-26 Thread R. Bernstein
Steve Holden [EMAIL PROTECTED] writes:

 Well you are just as capable ...

Yes, I guess you are right. Done.

I couldn't find how to suggest an addition to the Python Cookbook (other
than some generic O'Reilly email), so I've put a submission at:
http://aspn.activestate.com/ASPN/Cookbook/Python/


Re: How to handle two-level option processing with optparse

2006-01-26 Thread R. Bernstein
Magnus Lycka informs:
 [in response to my comment]:
  I see how I missed this. Neither disable_.. or enable_.. have document
  strings. And neither seem to described in the optparser section (6.21)
  of the Python Library (http://docs.python.org/lib/module-optparse.html).
 
 http://docs.python.org/lib/optparse-other-methods.html

Hmmm. A couple of things are a little odd about this. First, "Other
methods" seems to be a grab-bag category: a place to put something
when you don't know where else to put it. The section called
"Querying and manipulating your option parser" seems closer. Better, I
think, if the title were changed slightly to

  Querying, manipulating, and changing the default behavior of your
  option parser

Second, oddly, I can't find this section "Other methods" in the current
Python SVN source Doc/lib/liboptparse.tex. Best as I can tell, that
file does seem to be the section documenting optparse.


[ANN] pycdio 0.10 - Python to CD reading and control (via libcdio)

2006-01-25 Thread R. Bernstein
pycdio is a Python interface to the CD Input and Control library
(libcdio).  You can get the source at the same place as libcdio:
  ftp://ftp.gnu.org:/pub/gnu/libcdio/pycdio-0.10.tar.gz

The pycdio and libcdio libraries encapsulate CD-ROM reading and
control. Python programs wishing to be oblivious of the OS- and
device-dependent properties of a CD-ROM can use this library.

libcdio is rather large and yet may still grow a bit. (UDF support in
libcdio may be on the horizon.)  

What is in pycdio is incomplete; over time it may grow to completion
depending on various factors: e.g. interest, whether others help
out, etc.

Some of the incompleteness is due to my lack of understanding of how
to get SWIG to accomplish wrapping various return values. If you know
how to do better, please let me know. Likewise suggestions on how to
improve classes or Python interaction are more than welcome.

Sections of libcdio that are currently missing are the (SCSI) MMC
commands, the cdparanoia library, CD-Text handling and the entire
ISO-9660 library. Of the audio controls, I put in those things that
didn't require any thought. 

That said, what's in there is very usable (it probably contains more
access capability than most media players that don't use
libcdio have :-).

Stand-alone documentation is missing although most (all?) of the
methods, classes, modules and functions have some document
strings. See also the programs in the example directory, which
includes for example a program to play a CD using audio CD controls.

I've tested this on GNU/Linux and Solaris, and it comes with some basic
regression tests. On cygwin things build, but you will need a libcdio
DLL, which is not built by default.


How to handle two-level option processing with optparse

2006-01-25 Thread R. Bernstein
optparse is way cool, far superior to and cleaner than other
option-processing libraries I've used.

In the next revision of the Python debugger, I'd like to add
debugger options: --help and POSIX-shell style line tracing (similar to
set -x) being two of the obvious ones.

So I'm wondering how to arrange optparse to handle its options, but
not touch the script's options.

For example the invocation may be something like:
  pdb --debugger-opt1 --debugger-opt2 ... debugged-script -opt1 opt2 ...

If a --help option is given to the script to be debugged, I want to
make sure it is not confused for the debugger's help option. 

One simple rule for determining who gets what is that options that
come after the script name don't get touched in the debugger's option
processing.

Another convention, used for example in the X11 startx
command, is to use "--" to separate the two sets of options. However
this isn't as desirable as the simple rule mentioned above; it would
mean entering "--" *all* the time, when perhaps most of the time there
are no debugger options (as is the case now).

In other systems you can get back an indication of the first option that
hasn't been processed, and the remaining options are not touched.

It seems that with all of the flexibility of optparse it should handle
this. I'm not sure right now what the best way to do so would be
though. Suggestions? 
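One approach that implements the simple rule above: optparse's disable_interspersed_args() makes the parser stop at the first positional argument, leaving the script name and everything after it untouched. This is a sketch under that assumption; the option name and argument list are made up for illustration:

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--trace", action="store_true", default=False,
                  help="POSIX-shell style line tracing")
# Stop option processing at the first non-option (the script name),
# so the debugged script's own options are left alone.
parser.disable_interspersed_args()

opts, rest = parser.parse_args(
    ["--trace", "debugged-script", "--help", "-x"])

assert opts.trace is True
# The script name and its options, --help included, pass through intact.
assert rest == ["debugged-script", "--help", "-x"]
```

Note that the script's --help is not consumed by the debugger's parser, which is exactly the confusion the question is about.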


Re: Getting better traceback info on exec and execfile - introspection?

2006-01-16 Thread R. Bernstein
Fernando Perez [EMAIL PROTECTED] writes:
 So any regexp-matching based approach here is likely to be fairly brittle,
 unless you restrict your tool to the standard python interpreter, and you
 get some guarantee that it will always tag interactive code with
 '<string>'.

Meant to mention, for what it's worth: it looks like I'm not the first to use
the filename == '<string>' test. I note this in the stock pdb.py:

# To be overridden in derived debuggers
def defaultFile(self):
    """Produce a reasonable default."""
    filename = self.curframe.f_code.co_filename
    if filename == '<string>' and self.mainpyfile:
        filename = self.mainpyfile
    return filename

I'm not sure under what conditions this equality test normally occurs
though.
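The '<string>' pseudo-filename that this test keys on comes from how string-compiled code is tagged; a quick check (a generic illustration, not pdb code):

```python
import traceback

# Statements executed from a string get the pseudo-filename "<string>",
# which is what the defaultFile() equality test above is looking for.
try:
    exec("1/0")
except ZeroDivisionError:
    tb = traceback.format_exc()

assert 'File "<string>"' in tb
```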


Re: Getting better traceback info on exec and execfile - introspection?

2006-01-15 Thread R. Bernstein
Fernando Perez [EMAIL PROTECTED] writes:
 R. Bernstein wrote:
...
  However the frame information for exec or execfile looks like this:
   File "<string>", line 1, in ?
 
 That comes from how the code object was compiled:
...
 So any regexp-matching based approach here is likely to be fairly brittle,
 unless you restrict your tool to the standard python interpreter, and you
 get some guarantee that it will always tag interactive code with
 '<string>'.

Thanks for the information! Alas, that's bad news. The question then
remains as to how to accurately determine whether a frame is running an
exec operation, and how to accurately get the string associated with the
exec.

Given all of Python's introspection facilities, it should be
possible.

I also tried pydb.py using IPython 0.7.0 (using an automatically
updated Fedora Core 5 RPM) and although I was expecting pydb.py breakage
from what you report, I didn't find any difference.


% ipython
Python 2.4.2 (#1, Dec 17 2005, 13:02:22)
Type copyright, credits or license for more information.

IPython 0.7.0 -- An enhanced Interactive Python.
?   - Introduction to IPython's features.
%magic  - Information about IPython's 'magic' % functions.
help- Python's own help system.
object? - Details about 'object'. ?object also works, ?? prints more.

In [1]: %run /usr/lib/python2.4/site-packages/pydb.py test1.py
 /home/rocky/python/test1.py(2)?()
- import sys
(Pydb) where
- 0  in file '/home/rocky/python/test1.py' at line 2
## 1  in exec cmd in globals, locals at line 1
## 2  run() called from file '/usr/lib/python2.4/bdb.py' at line 366
(Pydb)

Note entry ##1 is *not*
  File "<string>", line 1, in ?

So somehow I guess ipython is not intercepting or changing how compile
was run here.




Re: Getting better traceback info on exec and execfile - introspection?

2006-01-15 Thread R. Bernstein
Ziga Seilnacht [EMAIL PROTECTED] writes:
 You should check the getFrameInfo function in zope.interface package:
 http://svn.zope.org/Zope3/trunk/src/zope/interface/advice.py?rev=25177view=markup

Thanks! Just looked at that. The logic in the relevant part (if I've extracted
this correctly):

f_globals = frame.f_globals
hasName = '__name__' in f_globals
module = hasName and sys.modules.get(f_globals['__name__']) or None
namespaceIsModule = module and module.__dict__ is f_globals
if not namespaceIsModule:
    # some kind of funky exec
    kind = "exec"

is definitely not obvious to me. execs don't have a module namespace
(or something or other)?  Okay. And the way to determine the
module-namespace thingy is whatever that logic is? Are the assumptions
here likely to be valid in the future?

Another problem I have with that code is that it uses the Zope Public
License. But the code is adapted from the Python Enterprise
Application Kit (PEAK) which doesn't seem to use the Zope Public
License. I'm not sure it's right to adopt a Zope Public License just
for this.

So instead, I followed the other avenue I suggested, which is
disassembling the statement around the frame. Talk about the pot
calling the kettle black! Yes, I don't know if the assumptions in this
method are likely to be valid in the future either.

But I can use op-code examination to also help me determine if we are
stopped at a def statement which I want to skip over. So here's the
code I've currently settled on:

import re
from opcode import opname

def op_at_frame(frame):
    code = frame.f_code.co_code
    pos  = frame.f_lasti
    op = ord(code[pos])
    return opname[op]

def is_exec_stmt(frame):
    """Return True if we are looking at an exec statement"""
    return frame.f_back is not None and op_at_frame(frame.f_back) == 'EXEC_STMT'

re_def = re.compile(r'^\s*def\s+')
def is_def_stmt(line, frame):
    """Return True if we are looking at a def statement"""
    # Should really also check that the operand is a code object
    return re_def.match(line) and op_at_frame(frame) == 'LOAD_CONST'

But it may just be because I wrote it that I find it easier to
understand and more straightforward to fathom.
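The opcode-inspection idea can be sanity-checked on its own. The code above is Python 2 (EXEC_STMT no longer exists in Python 3, and bytes index directly), so this is a rough modern sketch of the same trick, not the pydb code:

```python
import inspect
from opcode import opname

def op_at_frame(frame):
    # f_lasti is the byte offset of the frame's current instruction;
    # indexing co_code there yields the opcode byte.
    return opname[frame.f_code.co_code[frame.f_lasti]]

def op_at_caller():
    # While we execute, the caller frame's current instruction is the
    # call that invoked us.
    return op_at_frame(inspect.currentframe().f_back)

op = op_at_caller()
assert "CALL" in op  # e.g. CALL (3.11+) or CALL_FUNCTION (older CPython)
```

As the post notes, the exact opcode names are version-dependent, which is the main hazard of this approach.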

  And suppose instead of '<string>' I'd like to give the value or the
  leading prefix of the value instead of the unhelpful word '<string>'?
  How would one do that? Again, one way is to go into the outer frame,
  get the source line (if that exists), parse that and interpolate the
  argument to exec(file). Is there a better way?
 
 Py library (http://codespeak.net/py/current/doc/index.html) has some
 similar functionality in the code subpackage.

Again, many thanks. I'll have to study this further. It may be that
exec and so on are wrapped so that it's possible to squirrel away the
string before calling exec. Again, dunno. But thanks for the pointer.


Re: Getting better traceback info on exec and execfile - introspection?

2006-01-15 Thread R. Bernstein
Fernando Perez [EMAIL PROTECTED] writes:
 I thought a little about this.  One possibility ...

Thanks. A sibling thread has the code I'm currently using. 

 Oh, that's because you're using %run, so your code is in complete control. 
 What I meant about a restriction ...
Okay.

 If you are interested in ipython integration,

Yes I am.

 I suggest the ipython-dev list as a better place for discussion: I only
 monitor c.l.py on occasion, so I may well miss things here.

Okay now subscribed. But interestingly I looked in the IPython PDF and
didn't see mention of it when looking for contact info. I do however
see it (now) listed on http://ipython.scipy.org/. Thanks again.


Getting better traceback info on exec and execfile - introspection?

2006-01-14 Thread R. Bernstein
In doing the extension to the python debugger which I have here:
  http://sourceforge.net/project/showfiles.php?group_id=61395&package_id=175827
I came across one little thing that it would be nice to get done better.

I notice on stack traces and tracebacks, an exec or execfile command
appears as a stack entry -- probably as it should since it creates a
new environment.

However the frame information for exec or execfile looks like this:
  File "<string>", line 1, in ?

Okay, the line 1 probably changes, but most of the time it's going
to be one.  And in code that's in CVS (cvsview:
http://cvs.sourceforge.net/viewcvs.py/bashdb/pydb/) you'll see that
instead of this I now put something like:

  ## 1  in exec '<string>' at line 1

which is perhaps a little more honest, since one is not really in a
file called "<string>". However the way the debugger gets this *is*
still a little hoaky in that it looks for something in the frame's
f_code.co_filename *called* "<string>". And from that it *assumes* this
is an exec, so it can't really tell the difference between an exec
command, an execfile command, or a file called "<string>". But I suppose
a little more hoakiness *could* be added to look at the outer frame
and parse the source line for exec or execfile.

And suppose instead of '<string>' I'd like to give the value or the
leading prefix of the value instead of the unhelpful word '<string>'?
How would one do that? Again, one way is to go into the outer frame,
get the source line (if that exists), parse that and interpolate the
argument to exec(file). Is there a better way?

Another and perhaps more direct way to distinguish exec/execfile might
be to somehow look at the byte code of the frame entry and decode
that. It could also be used, for example, in another place in the
debugger where one wants to skip over executing def statements (not
the function invocation, but the statement which adds the
function/method so it can get called). But unless there are nice
version-independent and implementation-independent symbolic definitions
for exec_opcode, execfile_opcode and def_opcode, although
straightforward and reliable for a given version/implementation it
might not be a good thing to do.

Any help, thoughts or pointers?


Re: Getting better traceback info on exec and execfile - introspection?

2006-01-14 Thread R. Bernstein
I suggested:
 And suppose instead of '<string>' I'd like to give the value or the
 leading prefix of the value instead of the unhelpful word '<string>'?
 How would one do that? Again, one way is to go into the outer frame,
 get the source line (if that exists), parse that and interpolate the
 argument to exec(file). Is there a better way?

I've hacked this approach into the CVS for pydb, but it is not all
that accurate (although in practice I'm pleased with the result). 

My current regular expression is (^|\s+)exec\s+(.*) which does match
simple things, but it would fail, say, by *not* matching:
  x=1;exec "y=2";exec "z=3"
and fail by *wrongly* matching on:
  x = "exec meeting in 5 mins"

Even if I knew how to call the python parser on the line, that still
might not be good enough in the presence of continuation lines.

This probably should be addressed at a lower level in gathering
traceback data inside Python. 

 Another and perhaps more direct way to distinguish exec/execfile might
  
execfile is not a problem, just exec.
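The failure modes described above can be checked directly; the sample source lines here are my own illustrations, not from pydb:

```python
import re

re_exec = re.compile(r'(^|\s+)exec\s+(.*)')

# A simple exec statement at the start of a line is found.
assert re_exec.search('exec "y = 2"') is not None

# Missed: after a semicolon there is no whitespace before "exec",
# so neither exec statement on this line matches.
assert re_exec.search('x=1;exec "y=2";exec "z=3"') is None

# Wrongly matched: "exec" here is just a word inside a string literal,
# but it happens to be preceded by whitespace.
assert re_exec.search('s = "time for an exec meeting"') is not None
```

This is why, as the post says, the problem is probably better addressed at a lower level than source-line pattern matching.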


ANN: (slightly) extended Python debugger

2006-01-12 Thread R. Bernstein
I've put out the first release of an expanded version of the Python
debugger. For now it is under the bashdb project on sourceforge:
   http://sourceforge.net/project/showfiles.php?group_id=61395

I've tried this on only 3 machines, and each had a different version of
Python: OSX using python version 2.3.5, Fedora Core 4/5 using version
2.4.2, and cygwin using 2.4.1. I am not a Python expert, and less so
when it comes to understanding what was added in what release. If this
works before 2.3.5, let me know and I'll reduce the requirement check
warning in configuration.

In this debugger we're largely trying to follow gdb's command set
unless there's good reason not to. So new or modified command names generally
refer to the function they have in gdb.

Changes from pdb.py:

- Add run (restart)

- Add Perl's examine to show info about a symbol.  For functions,
  methods, classes and modules the documentation string if any is printed.
  For functions, we also show the argument list. More work is needed here
  to recurse fields of an arbitrary object and show that.

- add gdb's frame command

- Add some set/show commands:
args, listsize, version (no set here), dialect

- Add some info commands:
   args, break, line, locals, source

- up/down can take a number of frames to move.

- Stepping skips over def statements.

For now, there are two slight dialects of debugger command set: python
and gdb

  For the gdb dialect, stack traces look more like they do in gdb, and
  so does frame numbering; up and down function as gdb's do. In
  the python dialect, stack traces should be the same as pdb's.

  In the gdb dialect, some commands have been removed: 
  
  - return: this is reserved for a gdb-style return and its short
 name r can be confused with run. The gdb equivalent is finish.

  - args: the gdb equivalent is info args and the short name a 
 has been removed.

  Aliases could be added in one's .pydbrc for these.

Coexistence: 

  For now, we try not to conflict with pdb.py. After all, Python
  developers made provision of multiple debuggers so we'll make use of
  that!

  So wherever there was a pdb, use pydb; where there was a Pdb, use Pydb. So, as
  hinted above, the debugger initialization script is .pydbrc for this
  debugger rather than .pdbrc for the one that comes with Python.

Future:

  There is much that could be improved and this is just a glimpse of
  what might be done. (But even if it stops here, I'll still be using
  it ;-)


Re: pdb.py - why is this debugger different from all other debuggers?

2006-01-12 Thread R. Bernstein
Tino Lange [EMAIL PROTECTED] writes:

 R. Bernstein wrote:
 To summarize, I think most of us readers here like your changes or at least
 didn't shout loud enough against it ;-)

Okay. I'll gladly accept whatever positive interpretation someone
wants to offer. :-)

 As I also would like to have a more powerful and gdb-like debugging facility
 in out-of-the-box python, I think it would be the best strategy to make a
 consolidated patch now, send it to sf 

Umm.. If you read the original post, I already *had* submitted a
patch. About two weeks ago. There is no evidence that it's been looked
at. But that's often the way things happen with large projects,
volunteers, and/or minimal resources.

My custom, which I think is shared by many other programmers, is to
submit a smallish patch and see how that goes. If it is received
well, then others follow; if not, then not.

In the two weeks of inaction on that patch, I've gone much further, at
least to my satisfaction, by working on my own.  (Actually if you look
closely you'll see that I made 3 revisions of the patch. And there
still was a small bug that I've fixed recently in releasing the pydb
package.)

 and to post a note about that on
 python-dev@python.org to get the board's approval :-)

Hmmm. You seem to understand this side of things far better than newbie
me. Hey, how about if you do that? Thanks!

 
 idle also changed dramatically during the last versions - why shouldn't
 pdb also become better ... a volunteer seems to be there ;-)
 
 Thanks for your effort and cheers,

And thanks for your effort and cheers!


setup.py vs autoconf install/uninstall,

2006-01-12 Thread R. Bernstein
In making a release of the recent changes to pdb.py announced here:
http://groups.google.com/group/comp.lang.python/browse_thread/thread/b4cb720ed359a733/fbd9f8fef9693e58#fbd9f8fef9693e58
I tried using setup.py. 

I think it's great that setup.py tries to obviate the need for Make
by just doing everything (in contrast to Perl's MakeMaker). But alas,
the problems I ran into were cases of setup.py not doing enough (for
just Python).

Here were the problems I ran into. Perhaps someone will be able to
enlighten me.

1. I didn't find a way to uninstall a package.

2. No overwrite install. When I tried to install a newer version of
   pydb.py, setup.py didn't write over the old file even though the
   files were different and the previously installed file was older.
   I'm assuming it looks only at the version parameter in the setup
   call. This isn't helpful when testing between release numbers.

3. How to get pydb.doc and pydb.py in the same directory? As python
   2.2.4 is currently distributed, pdb.doc seems to want to be in the
   same directory as pdb.py. A little weird, but that's the way it is.
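
For what it's worth, two partial workarounds I've since noticed (both
hedged guesses, not tested against pydb's actual layout): distutils'
install command takes a --force flag that reinstalls regardless of
timestamps, and a data_files argument to setup() can carry pydb.doc
along. A minimal sketch, with all names and paths invented for
illustration:

```python
# Sketch of a setup.py addressing points 2 and 3 above; every name
# and path here is illustrative, not pydb's real layout.
setup_args = dict(
    name='pydb',
    version='1.0',
    py_modules=['pydb'],
    # data_files can carry pydb.doc along with the install; the
    # target directory is an assumption.
    data_files=[('lib/site-packages', ['pydb.doc'])],
)

# A real setup.py would end with:
#   from distutils.core import setup
#   setup(**setup_args)
# and point 2 (no overwrite) can be worked around with:
#   python setup.py install --force
```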

After reading docs and trying various things I gave up (at least for
now). I was then pleasantly surprised to find that automake comes
with macros for Python! (It doesn't have them for Perl.) But lest I
get too elated, in trying one of these out, I saw that automake had a
different notion of where to install Python scripts than Python
uses. But no matter: since autotools are sufficiently general (however
ugly), I was able to patch around it as I've done so many times.

And again we have the same thing that was a concern about debuggers:
there are so many configuration tools, and that's just counting the
three mentioned above. Sigh.






Re: pdb.py - why is this debugger different from all other debuggers?

2006-01-07 Thread R. Bernstein
Mike Meyer [EMAIL PROTECTED] writes:
 But if I had to choose between being
 able to play with objects interactively or being able to step through
 code, I'll take the interactive interpreter every time.

Why would you have to choose?  You've created a straw-man argument.
No one has previously suggested *removing* tools or forcing one to use
one and not the other! The thrust of this thread, and its intent, was
thoughts on *improving* debugging.

You say you are happy with what you have. Great! But then, out of the
blue, you offer your non-help. And although that too wasn't the thrust
of the thread (soliciting volunteers), it's not like you apparently
*had* been working on this either. Weird.

 Well, the tools I'm talking about here are ones that Python comes
 with. It may be simple-minded to assert that everyone who writes
 Python code uses the tools that come with Python, but it's not very
 bright to not use the tools that come with the language.

Not if one uses *better* tools. :-) Again in this very thread another
interpreter was mentioned which, alas, doesn't come with Python. But on
many systems it is installed rather easily. It addresses some things
that again in this thread have been noted as perhaps lacking in the
stock Python-supplied interpreter.

 I'm all to aware that not everyone writes code as easy to understand
 as what I do. 

(And it just may be that not everyone finds reading your code as easy
to understand as you think it is.)

 The correct solution for bad code was elucidated by
 Kernighan and Plauger nearly 30 years ago: Don't fix bad
 code. Rewrite it. I don't do that nearly often enough.

I heartily agree. (But I also can't help note that this may be a little
bit of a self-serving statement.)

 A good debugger is a valuable thing, and I've used some incredible
 debuggers, including one that actually implemented DWIM.  Stepping
 through the code is one of the least valuable thing a debugger does,
 but it's also the thing that makes it a debugger. 

Hmmm. If that's really your view of what makes a debugger, I can see
why it's not all that important to you. But this could be another one
of your exaggerate-to-make-absurd arguments.

Now that you mention it, stepping in pydb.py does have this little
thing that seems just a little different from the norm. When one steps
through a class, or even a script with lots of methods/functions, the
debugger seems to stop at every def, which isn't all that
exciting. And, in contrast to gdb's step/next, you can't give a
count. (Perl even allows one to enter an arbitrary expression as the
argument to step/next.) These do tend to make stepping less useful.

But as with the restart command that was mentioned at the beginning
of the thread, it wasn't all that hard to address in the current code
base.

 Pretty much
 everything else that a debugger does can be done by other tools. 

So tell me what tools you have to set a breakpoint somewhere in a
program, possibly based on variable values; inspect arbitrary values as
you see fit and perhaps change them; and then either continue or
restart the program, depending on how long it took to get to that point
and how messed up things were when you got there. None of this
involves stepping, so by your definition it's something that doesn't
need a debugger.

 As
 those tools improve, the value of the debugger decreases. Python has
 very good other tools.

The argument also goes the other way. If one has a debugger that's
lacking -- and it looks to me that pdb.py is a little bit neglected
(in some cases I would even say not pythonic) -- then, sure, people are
going to try to find ways to avoid using it. This too was mentioned on
this thread.

Without a doubt Python has good tools, and no doubt they will continue
to improve, as perhaps they should. But I don't see this as an
argument for not improving debugging in Python as well.


Re: pdb.py - why is this debugger different from all other debuggers?

2006-01-05 Thread R. Bernstein
Fernando Perez [EMAIL PROTECTED] suggests:
 You may want to try out ipython (the current release candidate from
 http://ipython.scipy.org/dist/testing/, which has many improvements on this
 front).  The %pdb magic will trigger automatic activation of pdb at any
 uncaught exception, and '%run -d' will run your script under the control of
 pdb, without any modifications to your source necessary.

I really like ipython. Many thanks for writing it!

And, as you say, it does have many, many useful improvements over
Python's default interpreter shell, including the ability to call the
debugger with minimal fuss.

But ipython doesn't obviate the need for a debugger that is better,
more complete, or one that more closely follows conventional debugger
command syntax.


Re: pdb.py - why is this debugger different from all other debuggers?

2006-01-05 Thread R. Bernstein
[EMAIL PROTECTED] writes:

 I was disappointed not to see any replies to this.
 I use pdb a lot because most of my debugging needs
 are simple, and I don't need/want the overhead or
 complications of a heavy duty gui debugger.
 
 I used ddd only little many many years ago, but
 compatibility with existing tools I think is a big plus.
 I would certainly consider using such a combination,
 and even without ddd I think being behaving similarly
 to existing tools is a good thing.
 
 I hope some of the other problems with it get
 addressed some day:
 - There is no way (I know of) to start a python script
   from the command line with the debugger active;
   I always have to modify the source to insert a
   pdb.set_trace().  I would like something like Perl's
   -d option.
 - Exceptions often don't stop debugger in routine
   where they occurred; instead you are dumped
   into a higher (lower?) stack frame and have to
   navigate back to the frame the exception
   occurred in.
 - It needs something like the Perl debugger's
   X command to display full information about
   an object (value and attributes).
 - The help command is lame giving just a list
   of commands (which are often a single character)
   with no hint of what they do.

Thanks for your kind words and comments. As Fernando Perez mentioned,
one way to address the lack of a corresponding perl -d option is to
use ipython. And my current thought is that when one installs ddd it
will install a pdb command -- perhaps just make a symbolic link from
/usr/bin/pdb to the right place. So this may help a little -- even if
one doesn't *use* ddd.

As for the things that are not addressed by ipython, the above list of
desirable features is helpful. Adding an X command, or extending the
help command to be, say, more like gdb's, seems straightforward.

Adjusting the stack frame when an exception is not handled is probably
doable too.
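
As a rough illustration of how little code an X command might take,
here is a sketch on top of stock pdb -- not pydb's actual
implementation; the class name and output format are invented:

```python
import pdb
import pprint


class XPdb(pdb.Pdb):
    """Sketch: a Perl-debugger-style X command for inspecting objects."""

    def do_X(self, arg):
        """X expr -- pretty-print expr's value and its public attributes."""
        try:
            # Evaluate in the current frame, the way pdb's p command does
            val = eval(arg, self.curframe.f_globals, self.curframe.f_locals)
        except Exception as exc:
            print('***', exc)
            return
        pprint.pprint(val)
        # dir() supplies the attribute half of Perl's X output
        pprint.pprint([name for name in dir(val) if not name.startswith('_')])
```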


Re: pdb.py - why is this debugger different from all other debuggers?

2006-01-05 Thread R. Bernstein
Mike Meyer [EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] writes:
 Actually, you're not talking about changing the paradigm. You're
 talking about minor tweaks to the command set.

I am sorry if this was a bit of an exaggeration. Whatever.

 I don't use pdb a lot either - and I write a *lot* of Python. 

Some of us may have to *read* a lot of Python. (For example, I know
many people, including myself, who have had to deal with code written
by consultants who wrote a *lot* of code but are no longer
*maintaining* it, for various reasons.) And one place debuggers tend to
come in handy is in focused problem-solving in others' code.

 When
 something goes wrong in Python, it tells you exactly what went wrong,
 with which variable, and the file and line nubmer it went wrong at -
 along with a backtrace telling you exactly how the code got
 there. That's sufficient to track down most bugs you run into in
 practice.  

Sometimes the problem is not a bug that produces a backtrace. It
could be a misunderstanding of the intended behavior of the code.

 If not, rather than load the application into a debugger
 and futz with that, it's simpler to fire up the interpreter, import
 the module that is misbehaving, instantiate and experiment on the
 classes with bogus behavior. 

If you have a good understanding of the code, that may be a good
thing. If you don't, and debugging is easy (and I think someone else
suggested that Python may in some circumstances be lacking here), then
debugging is desirable. I've noticed that different people prefer
different things. And that's why there's race-track betting.

 If you write code that consists of
 reasonably small methods/functions, these tools work *very* well for
 chasing down bugs. 

It would be simple-minded to assert that everyone who writes Python
code uses your tools or writes code as easy to understand as you
imply your code is.

 That my development environment lets me go from
 editing the class to testing the new code in the interpreter in a few
 keystrokes means it's *very* easy to deal with.
 
 Given those two tools, I find I use pdb maybe once a year. I probably
 wouldn't miss it if it vanished. 

I guess you are in agreement with many POSIX shell (e.g. bash, Bourne
and Korn shell) developers. You see, before I wrote a debugger for bash
(http://bashdb.sourceforge.net) there just weren't any such
things. :-) And those languages are very, very old -- maybe 20 years or
so.
 
 I'm certainly not going to spend any
 time working on it. 

Understood. I may have missed something here, or wasn't clear: I
wasn't necessarily soliciting help from volunteers on this, although
of course I would welcome any contributions. Are you the author of
pdb.py or a contributor to it?

 Of course, if you're interested in extending it
 - have fun.

Thanks!


pdb.py - why is this debugger different from all other debuggers?

2006-01-02 Thread R. Bernstein
Okay, a bit of an exaggeration. 

Recently, I've been using Python more seriously, and in using the
debugger I think one of the first things I noticed was that there is
no restart (R in perldb) or run (gdb) command.

I was pleasantly pleased to discover how easy it was to patch pdb.py
and pdb.doc; my patch for this is here:
http://sourceforge.net/tracker/index.php?func=detailaid=1393667group_id=5470atid=305470
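
For the curious, the basic idea can be sketched in a few lines. This
is a simplification, not the submitted patch itself; the class name is
invented, and stock pdb did eventually grow run/restart commands of
its own in later Python versions:

```python
import os
import pdb
import sys


class RestartablePdb(pdb.Pdb):
    """Sketch: a gdb/perldb-style restart command on top of stock pdb."""

    def do_restart(self, arg):
        """restart -- re-run the debugged program from the beginning."""
        # Re-exec the interpreter with the original command line; a
        # blunt instrument, but it guarantees a clean start.
        os.execv(sys.executable, [sys.executable] + sys.argv)

    do_R = do_restart  # perldb's short name
```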

Encouraged, the next thing I noticed lacking from my usual debugging
repertoire was gdb's frame command which is the absolute-position
version of up and down. Clearly since up and down are there,
adding a frame command is also pretty simple.

Perhaps I should explain that I noticed the lack of, and wanted, a
frame command because I had noticed that, prior to adding restart,
when the program was restarted through a post-mortem dump, the first
line number in the post-mortem dump was not getting reported. So Emacs
was showing a weird position in the source; it was confusing and not
clear that a restart had actually taken place. When Emacs and the
debugger are out of sync, my usual way of fixing this is by issuing
frame 0, which usually means go to the top (or is it bottom?) of the
stack and has the side effect of forcing Emacs to update the display.

Well, so now we get to the second issue. Python's stack numbering is
different from the way most (all?) other languages do it. And up and
down in pdb.py follow the Python notion of direction rather than
what is common among debuggers. Yes, I realize Python's stack
numbering is the one true right way; I have no doubt that Python
programmers draw their trees with the root at the bottom. So at least
for now I hacked in frame -1 to mean what is generally called frame
0 in other debuggers. And frame 0 in my private hacked pdb.py goes
to the least-recently encountered entry, the grand-daddy frame that
calls all of the others.
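
The hack reads roughly like this (self.stack, self.curindex,
self.curframe and print_stack_entry are real pdb/bdb internals that
up and down also use; the negative-index convention is this post's
experiment, and the class name is invented):

```python
import pdb


class FramePdb(pdb.Pdb):
    """Sketch: a gdb-style absolute frame command."""

    def do_frame(self, arg):
        """frame n -- move to frame n; negative n counts from the most
        recent frame, so 'frame -1' is what gdb calls 'frame 0'."""
        try:
            i = int(arg)
        except ValueError:
            print('*** frame requires an integer argument')
            return
        if i < 0:
            i += len(self.stack)   # -1 -> most recently entered frame
        if not 0 <= i < len(self.stack):
            print('*** frame number out of range')
            return
        self.curindex = i
        self.curframe = self.stack[i][0]
        self.print_stack_entry(self.stack[i])
```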

Finally, we come to listing breakpoints. Showing my gdb orientation,
the first thing I tried and looked for was info break. Nope, not
there. For a while I just thought one couldn't list breakpoints and
lived with that. Then, when I decided I'd hack in an info break, I
realized that if you type break without any arguments it lists the
breakpoints. (And yes, I see that this behavior is documented.)

In fact the breakpoint-listing output looks similar to what gdb
uses. That's nice, but in both gdb and Perl's debugger a break
without an argument *sets* a breakpoint at the current line; it
doesn't list breakpoints.

Here I'm in a little quandary as to what to do. My take would be to
just change the behavior of break so it works like the other debuggers
mentioned. In contrast to, say, the frame command, I fail to see how
pdb.py's lingo for listing breakpoints is superior. In fact it seems
inferior. And if one wanted to extend the debugger command set to list
information about breakpoint number n (which gdb does via info break
n), I'm not sure how this would fit in with the existing lingo.
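
One possible compromise, sketched below: add a gdb-style info command
that reuses pdb's existing listing behavior, leaving break itself free
to be changed. The class name is invented and the dispatch is
deliberately minimal:

```python
import pdb


class InfoPdb(pdb.Pdb):
    """Sketch: a gdb-style 'info break' built on stock pdb."""

    def do_info(self, arg):
        """info break -- list breakpoints, gdb-style."""
        if arg.strip().startswith('break'):
            # do_break with an empty argument is how stock pdb
            # lists the breakpoint table.
            self.do_break('')
        else:
            print('*** Undefined info command:', arg)
```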

My guess is that pdb.py started as a small undertaking and grew
without all that much thought or concern as to what a full debugger
command set should or would be.

So what I am suggesting is that it would be helpful to just follow an
existing debugger paradigm (or at least follow one more closely) so
folks don't have to learn yet another interface.

Let me close with a rather pleasant experience when I tried something
like that in my debugger for Bash (http://bashdb.sourceforge.net). In
doing that I decided that I'd just try follow the gdb command set. Not
that I think gdb's is the most wonderful, orthogonal or well-designed
command set, but just that it is commonly used and rather
complete. For example, whereas pdb.py has up and down, gdb's
version also allows a numeric argument to indicate how many places up
or down - again something pretty easy to hack in. No doubt someone at
some point found this useful, so it was added. And no doubt there are
programmers who use this. Might that not also apply to people
debugging Python programs? At any rate I wanted to reduce the learning
curve of folks using my debugger.

At some point someone asked for a GUI interface. I thought okay, and
found this GUI front-end called ddd. Naturally it handled gdb and Perl
as back ends. So to add in my bash debugger support, basically all I
had to do was tell ddd that handling this construct (say
breakpoints) is like gdb. There were a few places where I told ddd
not to follow gdb but Perl instead because the paradigm needed had to
be more like a scripting language than a compiled language. But in the
end, adding support for bash inside ddd was much more straightforward
and required much less thought than if I had invented my own debugger
command set.

Well, after this was done and I fired up ddd, I noticed that when my
cursor hovered over some of the buttons I saw short descriptions of
what each command does. And there is this button called customize
bash and in that there are all these setting